The behavior of non-local thermal equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, and X-ray sources. In steady-state cases the proper description of these plasmas may require the solution of thousands of linear rate equations. A possible simplification for this numerical task lies in some form of statistical averaging, such as averaging over configurations or superconfigurations. However, to assess the validity of such an averaging procedure and to handle cases where isolated lines play an important role, it will be necessary to treat detailed level systems. This involves matrices with potentially billions of elements, which are rather sparse but still involve thousands of diagonals above and below the main one. We propose here a numerical algorithm based on the LU decomposition for such linear systems. It will be shown that this method is orders of magnitude faster than traditional Gauss elimination. Moreover, it exhibits none of the convergence or accuracy issues encountered with methods based on conjugate gradients or minimization. Among cases treated at the last NLTE-kinetics-code meeting, krypton and tungsten plasmas are considered. Furthermore, to assess the validity of configuration averaging, several criteria are discussed. While a criterion based on detailed balance is relevant in cases not too far from LTE, it is found to be insufficient in general. An alternative criterion based on inspection of the influence of an arbitrary configuration temperature is proposed and tested successfully. (authors)
Poirier, M.; de Gaufridy de Dortan, F.
2009-12-01
The behavior of non-local thermal equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, and X-ray sources. In steady-state cases the proper description of these plasmas may require the solution of thousands of linear rate equations. A possible simplification for this numerical task lies in some form of statistical averaging, such as averaging over configurations or superconfigurations. However, to assess the validity of such an averaging procedure and to handle cases where isolated lines play an important role, it will be necessary to treat detailed level systems. This involves matrices with potentially billions of elements, which are rather sparse but still involve thousands of diagonals above and below the main one. We propose here a numerical algorithm based on the LU decomposition for such linear systems. It will be shown that this method is orders of magnitude faster than traditional Gauss elimination. Moreover, it exhibits none of the convergence or accuracy issues encountered with methods based on conjugate gradients or minimization. Among cases treated at the last NLTE-kinetics-code meeting, krypton and tungsten plasmas are considered. Furthermore, to assess the validity of configuration averaging, several criteria are discussed. While a criterion based on detailed balance is relevant in cases not too far from LTE, it is found to be insufficient in general. An alternative criterion based on inspection of the influence of an arbitrary configuration temperature is proposed and tested successfully.
Poirier, M
2009-01-01
The behavior of non-local thermal-equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, or X-ray sources. The proper description of these media in stationary cases requires solving linear systems of thousands or more rate equations. A possible simplification for this arduous numerical task may lie in some type of statistical average, such as the configuration or superconfiguration average. However, to assess the validity of this procedure and to handle cases where isolated lines play an important role, it may be important to deal with detailed level systems. This involves matrices with sometimes billions of elements, which are rather sparse but still involve thousands of diagonals. We propose here a numerical algorithm based on the LU decomposition for such linear systems. This method turns out to be orders of magnitude faster than the traditional Gauss elimination. And at variance with ...
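The sparse-LU strategy summarized in the abstracts above can be sketched with standard tools. The following toy is illustrative only (a random diagonally dominant stand-in for a rate matrix, not the authors' code): the system is factorized once with SuperLU and solved by cheap triangular solves, which is where the speedup over dense Gauss elimination comes from.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy stand-in for a steady-state rate system A n = b: sparse, with
# entries scattered over many diagonals, made diagonally dominant so
# the factorization is well behaved. (Illustrative only.)
n = 2000
A = sp.random(n, n, density=1e-3, random_state=0, format="lil")
A.setdiag(np.asarray(A.sum(axis=1)).ravel() + 1.0)
A = A.tocsc()                       # splu requires CSC format
b = np.ones(n)

lu = splu(A)                        # sparse LU factorization (SuperLU)
x = lu.solve(b)                     # fast triangular solves; factors reusable
print(np.linalg.norm(A @ x - b))    # residual at the level of rounding error
```

Once `lu` is computed, additional right-hand sides (e.g. different source terms) reuse the same factors at negligible cost.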
González Bonorino, G.; Osterkamp, W. R.; Colombo Piñol, Ferrán
2002-01-01
Disturbed lands in mountain watersheds may be a significant source of sediment. A systematic rating of their potential for erosion would be useful in soil conservation planning. RUSLE is a successful erosion-prediction technique, well tested on gentle slopes of agricultural lands. In view of its success, attempts have been made to apply RUSLE to areas of complex topography by substituting upstream contributing area for the linear-flow model embodied in the RUSLE L-factor. This substitution le...
Averaging procedure in variable-G cosmologies
Cardone, Vincenzo F
2008-01-01
Previous work in the literature had built a formalism for spatially averaged equations for the scale factor, giving rise to an averaged Raychaudhuri equation and averaged Hamiltonian constraint, which involve a backreaction source term. The present paper extends these equations to include models with variable Newton parameter and variable cosmological term, motivated by the non-perturbative renormalization program for quantum gravity based upon the Einstein--Hilbert action. The coupling between backreaction and spatially averaged three-dimensional scalar curvature is found to survive, and all equations involving contributions of a variable Newton parameter are worked out in detail. Interestingly, under suitable assumptions, an approximate solution can be found where the universe tends to a FLRW model, while keeping track of the original inhomogeneities through two effective fluids.
Ensemble averaging applied to the flow of a multiphase mixture
Ensemble averaging theorems are used to derive a two-fluid model describing the flow of a dilute fluid-solid mixture. The model is valid for mixtures containing particles that are small compared to the length scales describing variations in ensemble-averaged field quantities, such as fluid or particle phase density, pressure or velocity. For the case where the mixture is pseudo-homogeneous, the equations obtained reproduce the Einstein viscosity correction
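The pseudo-homogeneous limit mentioned above reduces to Einstein's classical dilute-suspension result, mu_eff = mu * (1 + 2.5 * phi). A minimal numeric illustration (the dilute-range cutoff chosen here is an arbitrary assumption):

```python
# Einstein's dilute-suspension viscosity correction, which the
# ensemble-averaged two-fluid model reproduces in the
# pseudo-homogeneous limit: mu_eff = mu_fluid * (1 + 2.5 * phi),
# valid only for small solid volume fraction phi.
def einstein_viscosity(mu_fluid, phi):
    if not 0.0 <= phi < 0.05:   # illustrative dilute-range guard
        raise ValueError("correction is only meaningful for dilute mixtures")
    return mu_fluid * (1.0 + 2.5 * phi)

# Water at 20 C (mu ~ 1.0e-3 Pa s) with 2% solids by volume:
print(einstein_viscosity(1.0e-3, 0.02))  # ~1.05e-3 Pa s
```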
C. O'Brien
2007-01-01
We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.
2007-12-01
We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
Heneghan C
2007-01-01
We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
To take advantage of semi-implicit computer models for solving the two-phase flow differential system, a proper averaging procedure is also needed for the source terms. In fact, in some cases, the correlations normally used for the source terms - not time averaged - fail when using the theoretical time step that arises from the linear stability analysis applied to the right-hand side. Such a time-averaging procedure is developed with reference to the bubbly flow regime. Moreover, the concept of the mass that must be exchanged to reach equilibrium from a non-equilibrium state is introduced to limit the mass transfer during a time step. Finally, some practical calculations are performed to compare the different correlations for the average mass transfer rate developed in this work. (orig.)
Effects of measurement procedure and equipment on average room acoustic measurements
Gade, Anders Christian; Bradley, J S; Siebein, G W
1993-01-01
This paper reports the results of a measurement tour of nine U.S. concert halls. Three measurement teams, from the University of Florida, the National Research Council of Canada, and the Technical University of Denmark, made parallel sets of measurements using their own equipment and procedures. In some of the halls measurements were repeated using the procedures of the other teams to make it possible to separate the effects of different equipment and different procedures. The paper will present position-averaged results from the three teams and will discuss reasons for the differences...
Nevada Applied Ecology Group procedures handbook for environmental transuranics
The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and other biological material. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications more complex than the routine standard sampling procedures utilized
Nevada Applied Ecology Group procedures handbook for environmental transuranics
The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and others. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications perhaps more complex than the routine standard sampling procedures utilized
Risk assessment due to radon exposure indoors is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because indoor radon levels can be highly variable over time. This measurement protocol is fairly reliable but may be limiting in radon risk management, particularly during a real estate transaction, due to the duration of the measurement and the limitation of the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration showed generally good agreement with measured values. These results are encouraging and could allow a procedure with a short measurement time to be used to characterize the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real
On the peeling procedure applied to a Poisson point process
Davydov, Y.; Nagaev, A.; Philippe, A
2010-01-01
In this paper we focus on the asymptotic properties of the sequence of convex hulls which arise as a result of a peeling procedure applied to the convex hull generated by a Poisson point process. Processes of the considered type are tightly connected with empirical point processes and stable random vectors. Results are given about the limit shape of the convex hulls in the case of a discrete spectral measure. We give some numerical experiments to illustrate the peeling proce...
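The peeling procedure studied above has a direct computational analogue: repeatedly take the convex hull of a point sample, strip its vertices, and record each layer. A small illustration with SciPy (a uniform sample stands in for the Poisson point process; this is not the paper's code):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Convex-hull peeling of a planar point sample: each iteration removes
# the current hull's vertices, exposing the next "layer" of the sample.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(500, 2))

layers = []
remaining = points
while len(remaining) >= 3:
    hull = ConvexHull(remaining)
    layers.append(remaining[hull.vertices])    # current peel
    mask = np.ones(len(remaining), dtype=bool)
    mask[hull.vertices] = False
    remaining = remaining[mask]                # points left after this peel

print(len(layers), [len(layer) for layer in layers[:3]])
```

Each layer has at least three vertices, and every input point ends up either in some layer or among the final leftover points.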
A loudness calculation procedure applied to shaped sonic booms
Shepherd, Kevin P.; Sullivan, Brenda M.
1991-01-01
Described here is a procedure that can be used to calculate the loudness of sonic booms. The procedure is applied to a wide range of sonic booms, both classical N-waves and a variety of other shapes of booms. The loudness of N-waves is controlled by overpressure and the associated rise time. The loudness of shaped booms is highly dependent on the characteristics of the initial shock. A comparison of the calculated loudness values indicates that shaped booms may have significantly reduced loudness relative to N-waves having the same peak overpressure. This result implies that a supersonic transport designed to yield minimized sonic booms may be substantially more acceptable than an unconstrained design.
The method of averaging applied to pharmacokinetic/pharmacodynamic indirect response models.
Dunne, Adrian; de Winter, Willem; Hsu, Chyi-Hung; Mariam, Shiferaw; Neyens, Martine; Pinheiro, José; Woot de Trixhe, Xavier
2015-08-01
The computational effort required to fit the pharmacodynamic (PD) part of a pharmacokinetic/pharmacodynamic (PK/PD) model can be considerable if the differential equations describing the model are solved numerically. This burden can be greatly reduced by applying the method of averaging (MAv) in the appropriate circumstances. The MAv gives an approximate solution, which is expected to be a good approximation when the PK profile is periodic (i.e. repeats its values in regular intervals) and the rate of change of the PD response is such that it is approximately constant over a single period of the PK profile. This paper explains the basis of the MAv by means of a simple mathematical derivation. The NONMEM® implementation of the MAv using the abbreviated FORTRAN function FUNCA is described and explained. The application of the MAv is illustrated by means of an example involving changes in glycated hemoglobin (HbA1c%) following administration of canagliflozin, a selective sodium glucose co-transporter 2 inhibitor. The PK/PD model applied to these data is fitted with NONMEM® using both the MAv and the standard method using a numerical differential equation solver (NDES). Both methods give virtually identical results but the NDES method takes almost 8 h to run both the estimation and covariance steps, whilst the MAv produces the same results in less than 30 s. An outline of the NONMEM® control stream and the FORTRAN code for the FUNCA function is provided in the appendices. PMID:26142076
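The averaging argument above can be checked numerically on a generic inhibitory indirect-response model. Every parameter value below is invented for illustration (this is not the paper's canagliflozin/HbA1c model): a periodic PK profile drives a PD turnover equation whose rate constant is small compared with the dosing frequency, which is exactly the regime where the method of averaging applies.

```python
import numpy as np
from scipy.integrate import solve_ivp

# PK: repeated dosing gives a periodic concentration C(t).
# PD: dR/dt = kin*(1 - Imax*C/(IC50 + C)) - kout*R, with kout << 1/tau
# so R barely changes within one dosing interval (MAv assumption).
kin, kout = 1.0, 0.01            # PD turnover (illustrative, 1/h)
Imax, IC50 = 0.8, 5.0            # inhibition parameters (illustrative)
Cmax, ke, tau = 10.0, 0.1, 24.0  # periodic PK profile (illustrative)

def conc(t):
    return Cmax * np.exp(-ke * (t % tau))

def rhs(t, r):
    c = conc(t)
    return [kin * (1.0 - Imax * c / (IC50 + c)) - kout * r[0]]

# Full model, run long enough to reach the periodic steady state.
sol = solve_ivp(rhs, (0.0, 3000.0), [kin / kout], max_step=1.0,
                t_eval=np.arange(2900.0, 3000.0, 0.5))

# Method of averaging: replace the inhibition term by its average over
# one dosing interval, giving the steady state R_ss = kin*(1-Ibar)/kout.
tt = np.linspace(0.0, tau, 10000, endpoint=False)
ibar = np.mean(Imax * conc(tt) / (IC50 + conc(tt)))
r_avg = kin * (1.0 - ibar) / kout

print(sol.y[0].mean(), r_avg)   # the two agree closely
```

As in the paper's comparison, the averaged computation replaces a long ODE integration with a cheap quadrature over a single period.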
One dimensional averaged (K-ε) turbulence model applied to channel, pipe and rod bundle flows
Full text of publication follows: In order to accurately calculate the flows that can occur in the wide and complex geometry of a nuclear reactor, using reasonable means that do not resolve every part of the whole circuit with fine 3D meshes, one possible approach is to use several numerical tools simultaneously, each dedicated to a different part of the circuit. As an example, the lower plenum could be calculated with a 3D CFD tool, while the rod bundle part could be calculated with a porous 3D code and the pipes with a one-dimensional code. This approach requires the coupling and the consistency of all these tools. As part of this project, work is in progress to develop a one-dimensional averaged (k-ε) turbulence model that is consistent with the standard 3D (k-ε) model. This consistency is required to consider the one-way or two-way coupling between 1D and 3D calculations. The development of a turbulence model suitable for both pipes and charged media such as rod bundles requires a porous-media formulation of the equations governing the turbulent flow. The volume-averaging theory (VAT) is thus applied to the standard (k-ε) turbulence model equations. It gives a new system of equations for the transport of the volume-averaged turbulence kinetic energy and its dissipation rate. Those new equations differ from the original ones; they involve additional terms which come from the averaging process for the porous model and correspond to internal production/dissipation mechanisms. Currently, there is no general well-developed closure expression for these additional terms, although Nakayama et al. proposed a closure method for the ones appearing in the averaged (k-ε) transport equations in the case of regular morphologies. These unknown terms due to the porous matrix are modeled collectively by these authors in order to obtain correlations adapted to classical porous media. Following these ideas, we propose here a closure expression adapted to channel, pipe
Applying computer-based procedures in nuclear power plants
Plant operation procedures are used to guide operators in coping with normal, abnormal or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBP); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly presents the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operation procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system. The ImPRO system was chosen for testing because it is available for download on the Internet. A preliminary operation test using the implemented procedure in the CBP system was performed and the results were compared to operation through PBP use. (author)
Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks
Shen-Chun Wu
2003-01-01
This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
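The phase-averaging step behind measurements like those above can be sketched on synthetic data: samples of a velocity signal are binned by rotation phase, and the bin means recover the coherent (phase-locked) component while the residual is treated as turbulence. All signal parameters below are invented for illustration.

```python
import numpy as np

# Synthetic LDV-like data: random sample times, a phase-locked
# component at the rotation frequency, plus broadband noise.
rng = np.random.default_rng(2)
f_rot = 10.0                                  # rotation frequency (Hz)
t = rng.uniform(0.0, 5.0, 50000)              # random sample times

def coherent(ph):                             # true phase-locked part
    return 1.5 * np.sin(2 * np.pi * ph)

u = 3.0 + coherent(t * f_rot % 1.0) + 0.3 * rng.normal(size=t.size)

# Phase averaging: bin samples by rotation phase and average each bin.
nbins = 40
phase = (t * f_rot) % 1.0
bins = (phase * nbins).astype(int)
phase_avg = np.array([u[bins == k].mean() for k in range(nbins)])

centers = (np.arange(nbins) + 0.5) / nbins
err = np.max(np.abs(phase_avg - (3.0 + coherent(centers))))
print(err)   # small: the coherent structure is recovered from noisy samples
```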
A unified framework for benchmark dose estimation applied to mixed models and model averaging
Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.
2013-01-01
This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...
A bidirectional coupling procedure applied to multiscale respiratory modeling
Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling
Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple
A bidirectional coupling procedure applied to multiscale respiratory modeling
Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
A bidirectional coupling procedure applied to multiscale respiratory modeling
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
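The "pressure-drop" residual idea described in the abstracts above can be illustrated with a deliberately simple toy: a shared inlet pressure drives several outlet branches, each combining a nonlinear "3D-side" loss with a linear "lower-dimensional-side" resistance, and a Newton-Krylov solver finds the flows that make every branch's pressure drop match. All numbers and the branch models are invented for illustration; this is not the authors' framework.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy pressure-drop coupling residual: for each branch i,
#   residual_i(q) = p_in - r_lin[i]*q_i - r_quad[i]*q_i*|q_i|
# must vanish, matching a linear (0D-like) and a quadratic (3D-like)
# pressure loss against the common driving pressure.
p_in = 10.0
r_lin = np.array([1.0, 2.0, 0.5, 1.5])    # lower-dimensional resistances
r_quad = np.array([0.3, 0.1, 0.6, 0.2])   # nonlinear 3D-like losses

def residual(q):
    return p_in - r_lin * q - r_quad * q * np.abs(q)

q0 = np.ones(4)                            # initial guess for branch flows
q = newton_krylov(residual, q0, f_tol=1e-9)
print(q, np.linalg.norm(residual(q)))      # flows and final residual norm
```

As in the paper's setting, the solver only needs residual evaluations, so each side of the coupling can remain a black box.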
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
State-averaged Monte Carlo configuration interaction applied to electronically excited states
Coe, J P
2014-01-01
We introduce state-averaging into the method of Monte Carlo configuration interaction (SA-MCCI) to allow the stable and efficient calculation of excited states. We show that excited potential curves for H$_{3}$, including a crossing with the ground state, can be accurately reproduced using a small fraction of the FCI space. A recently introduced error measure for potential curves [J. P. Coe and M. J. Paterson, J. Chem. Phys., 137, 204108 (2012)] is shown to also be a fair approach when considering potential curves for multiple states. We demonstrate that potential curves for LiF using SA-MCCI agree well with the FCI results and the avoided crossing occurs correctly. The seam of conical intersections for CH$_{2}$ found by Yarkony [J. Chem. Phys., 104, 2932 (1996)] is used as a test for SA-MCCI and we compare potential curves from SA-MCCI with FCI results for this system for the first three triplet states. We then demonstrate the improvement from using SA-MCCI on the dipole of the $2$ $^{1}A_{1}$ state of carbo...
34 CFR 370.43 - What requirement applies to the use of mediation procedures?
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false What requirement applies to the use of mediation... applies to the use of mediation procedures? (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures...
CARVEDILOL POPULATION PHARMACOKINETIC ANALYSIS – APPLIED VALIDATION PROCEDURE
Aleksandra Catić-Đorđević
2013-09-01
Carvedilol is a nonselective beta-blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina and ischemic left ventricular dysfunction. The aim of this study was to describe carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting a mathematical model that describes the PK parameters. It also includes the variables that have particular importance in the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is nonlinear mixed-effects modeling (NONMEM). The analytical method used in the data collection period is of great importance for the implementation of a population PK analysis of carvedilol in order to obtain reliable data that can be useful in clinical practice. High performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is a logical terminal phase of analytical procedure development that ensures the applicability of the procedure itself. The goal of validation is to ensure consistency of the method and accuracy of results, and to confirm the selection of the analytical method for a given sample.
Backus averaging is often applied to well logs in order to estimate the elastic properties of a stack of thin layers at a longer wavelength to be used in seismic data analysis. Schoenberg and Muir extended Backus's method to layers with arbitrary anisotropy, but simple analytical equations are available for isotropic and transversely isotropic layers only. There is growing exploration interest in shale reservoirs, which exhibit lower symmetry seismic anisotropy if the reservoir is fractured. Here, simple analytical equations for Backus averaging are presented for symmetries up to monoclinic, of which averaging equations for orthorhombic and monoclinic layers are new. After Backus averaging, a stack of isotropic layers results in a vertical transverse isotropy (VTI), a stack of VTI layers results in a VTI, a stack of horizontal transverse isotropy layers results in an orthorhombic, a stack of orthorhombic layers results in an orthorhombic and a stack of monoclinic layers results in a monoclinic. Workflows for Backus averaging are presented along with numerical examples using well-log data from the Haynesville shale play in East Texas, USA. (paper)
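The isotropic-layer case mentioned above follows simple closed-form expressions. As an illustration, here is a minimal Python sketch of the classical Backus (1962) average of a stack of isotropic layers, returning the five independent stiffnesses of the equivalent long-wavelength VTI medium; function and variable names are our own, and the paper's new orthorhombic and monoclinic equations are not reproduced here:

```python
import numpy as np

def backus_isotropic(thickness, vp, vs, rho):
    """Backus (1962) average of a stack of isotropic layers.

    Returns the five independent stiffnesses (C11, C33, C13, C44, C66)
    of the equivalent long-wavelength VTI medium and the mean density.
    <.> below denotes the thickness-weighted arithmetic mean.
    """
    thickness, vp, vs, rho = (np.asarray(q, float) for q in (thickness, vp, vs, rho))
    w = thickness / thickness.sum()          # thickness weights
    mu = rho * vs**2                         # shear modulus of each layer
    lam = rho * vp**2 - 2.0 * mu             # Lame parameter of each layer
    avg = lambda q: float(np.sum(w * q))     # <.>

    c33 = 1.0 / avg(1.0 / (lam + 2.0 * mu))
    c13 = c33 * avg(lam / (lam + 2.0 * mu))
    c11 = avg(4.0 * mu * (lam + mu) / (lam + 2.0 * mu)) \
          + c33 * avg(lam / (lam + 2.0 * mu)) ** 2
    c44 = 1.0 / avg(1.0 / mu)                # = C55 for VTI
    c66 = avg(mu)
    return c11, c33, c13, c44, c66, avg(rho)
```

A homogeneous stack recovers the isotropic stiffnesses exactly, which is a convenient sanity check of such an implementation.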
Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process
Motley, Albert E., III
2000-01-01
One of the key elements to successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.
21 CFR 1315.22 - Procedure for applying for individual manufacturing quotas.
2010-04-01
... manufacturing quotas. 1315.22 Section 1315.22 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF... Individual Manufacturing Quotas § 1315.22 Procedure for applying for individual manufacturing quotas. Any... desires to manufacture a quantity of the chemical must apply on DEA Form 189 for a manufacturing quota...
A Flexible Boundary Procedure for Hyperbolic Problems: Multiple Penalty Terms Applied in a Domain
Nordström, Jan; Abbas, Qaisar; Erickson, Brittany A.; Frenander, Hannes
2014-01-01
A new weak boundary procedure for hyperbolic problems is presented. We consider high order finite difference operators of summation-by-parts form with weak boundary conditions and generalize that technique. The new boundary procedure is applied near boundaries in an extended domain where data is known. We show how to raise the order of accuracy of the scheme, how to modify the spectrum of the resulting operator and how to construct non-reflecting properties at the boundaries. The new boundary...
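The weak (penalty) boundary treatment that this procedure generalizes can be illustrated on the simplest case. The sketch below is our own minimal example, not the authors' code: it solves the advection equation u_t + a u_x = 0 with a second-order summation-by-parts (SBP) difference operator and a single SAT penalty enforcing the inflow condition weakly:

```python
import numpy as np

def sbp_operators(n, h):
    """Second-order accurate SBP first-derivative pair (P, Q), D = P^{-1} Q."""
    P = h * np.ones(n)
    P[0] = P[-1] = h / 2.0                   # boundary-modified norm
    Q = np.zeros((n, n))
    for i in range(n - 1):                   # central difference in the interior
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                           # one-sided closures at the ends
    Q[-1, -1] = 0.5
    return P, Q

def advect(u0, g, a, h, dt, steps, tau=-1.0):
    """Solve u_t + a u_x = 0 (a > 0) with a weak (SAT) inflow condition
    u(0, t) = g(t); the scheme is energy stable for tau <= -1/2."""
    n = len(u0)
    P, Q = sbp_operators(n, h)

    def rhs(u, t):
        du = -a * (Q @ u) / P                        # -a * D u, D = P^{-1} Q
        du[0] += tau * a * (u[0] - g(t)) / P[0]      # SAT penalty at inflow
        return du

    u, t = u0.copy(), 0.0
    for _ in range(steps):                   # classical RK4 time integration
        k1 = rhs(u, t)
        k2 = rhs(u + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(u + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(u + dt * k3, t + dt)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return u
```

The cited work extends exactly this idea by placing additional penalty terms inside an extended domain where data is known, rather than only at the physical boundary.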
Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT
In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front electronics associated to the chambers. Every procedure is described with detail and a list of the more common problems and possible solutions is given. This document can be considered as a chamber test handbook for beginners. (Author) 3 refs
Zoppetti, N.; Andreuccetti, D.
2009-08-01
Although the calculation of the surface average of the low-frequency current density distribution over a cross-section of 1 cm2 is required by the ICNIRP guidelines, no reference averaging algorithm is indicated, either in the ICNIRP guidelines or in Directive 2004/40/EC, which is based on them. The lack of a general standard algorithm that fulfils the ICNIRP guidelines' requirements is particularly critical in the prospect of the endorsement of Directive 2004/40/EC, since compliance with normative limits refers to well-defined procedures. In this paper, two case studies are considered, in which the calculation of the surface average is performed using both a simplified approach widely used in the literature and an original averaging procedure. This analysis, aimed at quantifying the expected differences and at singling out their sources, shows that the choice of the averaging algorithm represents an important source of uncertainty in the application of the guideline requirements.
Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom
Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy
2016-01-01
The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…
21 CFR 1303.22 - Procedure for applying for individual manufacturing quotas.
2010-04-01
... manufacturing quotas. 1303.22 Section 1303.22 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE QUOTAS Individual Manufacturing Quotas § 1303.22 Procedure for applying for individual manufacturing quotas. Any person who is registered to manufacture any basic class of controlled substance...
Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Radiation Dosimetry Department, Ministry of Scientific Research, Tersa Street, P.O. Box 136, Giza, Haram 12211 (Egypt); Physics Department, Faculty of Science and Humanities, Salman Bin AbdulAziz University, Alkharj (Saudi Arabia)
2014-02-11
Alanine/EPR is the most common dosimetry system for high radiation doses because of its high stability and wide linear response; however, the use of alanine in most medical applications still requires special sophisticated methodologies and techniques in order to extend the alanine detection limit to low levels of radiation dose. One of these techniques is the digital processing of acquired alanine spectra, enhancing useful components of the spectra while useless features are suppressed. The impact of a simple moving average (MA) filter on alanine EPR spectra has been studied in terms of peak-to-peak height, peak-to-peak line width, and associated uncertainty. Three types of filter were investigated: upward MA, central MA, and downward MA; the effect of each on the peak position was studied for different values of the filter width. It was found that the MA filter always leads to a reduction in signal intensity and an increase in the line width of the central peak of the alanine spectrum. The peak position also changes in the cases of the upward MA and downward MA filters, while no significant changes were observed in the case of the central MA. Uncertainties associated with the averaging process were evaluated and plotted versus the filter width, resulting in a linear relationship. The filter width should be carefully selected in order to avoid probable distortion of the processed spectra while gaining less noisy spectra with smaller associated uncertainties.
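The three moving-average variants can be sketched as follows. This is a minimal illustration (the exact window conventions used in the study are an assumption here), showing why a central window lowers and broadens a peak without moving it, while one-sided windows also shift the apparent peak position:

```python
import numpy as np

def moving_average(spectrum, width, mode="central"):
    """Simple moving-average (MA) filter of a 1D spectrum.

    mode selects which neighbours enter the window at each point:
      "upward"   - the current point and the (width - 1) points before it
      "central"  - a symmetric window centred on the point (width odd)
      "downward" - the current point and the (width - 1) points after it
    Edges are zero-padded, so the outermost points are attenuated.
    """
    spectrum = np.asarray(spectrum, float)
    kernel = np.ones(width) / width
    if mode == "central":
        return np.convolve(spectrum, kernel, mode="same")
    full = np.convolve(spectrum, kernel, mode="full")
    if mode == "upward":
        # trailing window: full[t] averages spectrum[t-width+1 .. t]
        return full[: len(spectrum)]
    # "downward": leading window averaging spectrum[t .. t+width-1]
    return full[width - 1 : width - 1 + len(spectrum)]
```

On a symmetric line, the central filter reduces the intensity and broadens the line, while the one-sided variants additionally displace the peak, consistent with the observations reported above.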
A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics
Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist
2015-01-01
structure evaluation by assessing the local identifiability characteristics of the parameters. Moreover, such a procedure should be generic to make sure it can be applied independently of the structure of the model. We hereby apply a numerical identifiability approach which is based on the work of Walter and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring. In contrast, the practical identifiability analysis revealed that high values of the forward rate parameter Vf led to identifiability problems. These problems were even more pronounced at higher substrate concentrations, which illustrates the importance of a proper experimental design to avoid...
Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid and solid radioactive materials produced in normal operation, not retained in process systems, and for keeping back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the present containment concepts mostly used in European countries. The leak-testing procedures applied and the experiences gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing and extrapolation methods of leak rates measured at test conditions to expected leak rates at calculated accident conditions. The actual problems in periodic containment leak rate testing are critically reviewed. In the appendix to the report a summary is given of the regulations and specifications applied in different member countries
Kang, Dong Gu; Ahn, Seung-Hoon; Cho, Dae-Hyung [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2015-05-15
This is termed station blackout (SBO). However, it does not generally include the loss of available AC power to safety buses fed by station batteries through inverters or by alternate AC sources. Historically, risk analysis results have indicated that SBO was a significant contributor to overall core damage frequency. In this study, the safety assessment of the OPR-1000 nuclear power plant for the SBO accident, which is a typical beyond-design-basis accident (BDBA) and an important contributor to overall plant risk, is performed by applying the combined deterministic and probabilistic procedure (CDPP). In addition, the SBO risk at OPR-1000 is reevaluated by eliminating excessive conservatism in the existing PSA. The reference analysis showed that the CDF and CCDP did not meet the acceptable risk, and it was confirmed that the SBO risk should be reevaluated. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that the current OPR-1000 system meets the acceptable risk against SBO. In addition, it was demonstrated that the proposed CDPP is applicable to the safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.
Validation procedures of software applied in nuclear instruments. Proceedings of a technical meeting
The IAEA has supported the availability of well-functioning nuclear instruments in Member States for more than three decades. Some older or aged instruments are still being used and are still in good working condition. However, those instruments may not in all cases meet the modern software requirements of the end-user. Therefore, Member States, mostly those with emerging economies, modernize or refurbish such instruments to meet end-user demands. Advanced software is applied not only in new instrumentation, but often also in new and improved applications of modernized and/or refurbished instruments in many Member States, for which the IAEA has in a few cases also provided support. Modern software applied in nuclear instrumentation plays a key role in safe operation and execution of commands in a user-friendly manner. Correct data handling and transfer has to be ensured. Additional features such as data visualization and interfacing to a PC for control and data storage are often included. To finalize the task, where new instrumentation which is not commercially available is used, or aged instruments are modernized/refurbished, the applied software has to be verified and validated. A Technical Meeting on 'Validation Procedures of Software Applied in Nuclear Instruments' was organized in Vienna, 20-23 November 2006, to discuss the verification and validation process of software applied to the operation and use of nuclear instruments. The presentations at the technical meeting included valuable information, which has been compiled and summarized in this publication, and should be useful for technical staff in Member States when modernizing/refurbishing nuclear instruments. 22 experts in the field of modernization/refurbishment of nuclear instruments as well as users of applied software presented their latest results. Discussion sessions followed the presentations. This publication is the outcome of deliberations during the meeting.
A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data. - Highlights: • The discrete energy-averaged model for Galfenol is reformulated. • An analytical solution for 3D magnetostriction and magnetization is developed from eigenvalue decomposition. • Improved robustness is achieved. • An efficient optimization routine is developed to identify parameters from averaged hysteresis curves. • The effectiveness of the model is demonstrated against experimental data
Quality control procedures applied to nuclear instruments. Proceedings of a technical meeting
Quality control (QC) test procedures for nuclear instrumentation are important for the assurance of proper and safe operation of the instruments, especially with regard to equipment related to radiological safety, human health and national safety. Correct measurement of radiation parameters must be ensured, i.e., accurate measurement of the number of radioactive events and counting times, and in some cases accurate measurement of the radiation energy and occurrence time of the nuclear events. There are several kinds of testing of nuclear instruments, for example, type-testing done by suppliers, acceptance testing made by the end users, QC tests after repair, and quality assurance/quality control tests made by end users. All of these tests are in many cases based on practical guidelines or on the experience of the specialists; the available standards on this topic also need to be adapted to specific instruments. The IAEA has provided nuclear instruments and supported the operational maintenance efforts of the Member States. Although nuclear instrumentation is continuously upgraded, some older or aged instruments are still in use and in good working condition. Some of these instruments may not, however, meet modern requirements of the end-user; therefore, Member States, mostly those with emerging economies, modernize/refurbish such instruments to meet the end-user demands. As a result, new instrumentation which is not commercially available, or modernized/refurbished instruments, need to be tested or verified with QC procedures to meet national or international certification requirements. A technical meeting on QC procedures applied to nuclear instruments was organized in Vienna from 23 to 24 August 2007. Existing and required QC test procedures necessary for the verification of operation and measurement of the main characteristics of nuclear instruments were the focus of discussion at this meeting. Presentations made at the technical meeting provided
Evaluation of the BCR sequential extraction procedure applied for two unpolluted Spanish soils
The procedure of BCR sequential extraction has been applied to five samples from two unpolluted soils in southern Spain. Total concentrations of different elements have been calculated as the sum of the three fractions of BCR and the residue has been measured for each. Also, a total analysis based on INAA or total-digestion techniques has been performed for the same samples. BCR and total analysis closely agreed for As, Pb and Cd. For Cu, Co, Cr and Zn the comparison of the results did not provide definitive conclusions concerning the capability of BCR in measuring total concentrations. On the other hand, in these cases, a certain correlation was found between the concentrations measured and some soil characteristics, especially the clay, organic-matter and CaCO3 contents. BCR proved incapable of providing accurate measurements for Ni
A diagnostic procedure for applying the social-ecological systems framework in diverse cases
Jochen Hinkel
2015-03-01
The framework for analyzing the sustainability of social-ecological systems (the SES framework) of Elinor Ostrom is a multitier collection of concepts and variables that have proven to be relevant for understanding outcomes in diverse SES. The first tier of this framework includes the concepts resource system (RS) and resource units (RU), which are then further characterized through lower-tier variables such as clarity of system boundaries and mobility. The long-term goal of framework development is to derive conclusions about which combinations of variables explain outcomes across diverse types of SES. This will only be possible if the concepts and variables of the framework can be made operational unambiguously for the different types of SES, which, however, remains a challenge. Reasons for this are that case studies examine other types of RS than those for which the framework has been developed or consider RS for which different actors obtain different kinds of RU. We explore these difficulties and relate them to antecedent work on common-pool resources and public goods. We propose a diagnostic procedure which resolves some of these difficulties by establishing a sequence of questions that facilitate the step-wise and unambiguous application of the SES framework to a given case. The questions relate to the actors benefiting from the SES, the collective goods involved in the generation of those benefits, and the action situations in which the collective goods are provided and appropriated. We illustrate the diagnostic procedure for four case studies in the context of irrigated agriculture in New Mexico, common property meadows in the Swiss Alps, recreational fishery in Germany, and energy regions in Austria. We conclude that the current SES framework has limitations when applied to complex, multiuse SES, because it does not sufficiently capture the actor interdependencies introduced through RS and RU characteristics and dynamics.
Plant procedures are instructions to guide operators in monitoring, decision making, and controlling nuclear power plants (NPPs). While plant procedures have historically been paper-based, computerized procedures are being developed to overcome the drawbacks of paper-based procedures. The CPS is used to integrate the operational instructions, plant process information and computerized operator support functions in the advanced main control room (MCR) of APR1400. The logic functions applied to the CPS fall into three categories: Procedure Entry Condition (PEC), Continuously Applied Step (CAS) re-execution condition, and instruction evaluation. Automation using these logic functions can reduce the operators' burden of monitoring, decision making and controlling NPPs, but it can also lower situation awareness and weaken the operator's in-the-loop role. These automation issues caused by logic functions must be closed in order to apply computerized procedures. This paper introduces issues raised by system designers, plant operators, and human factors engineers, and, to close these issues, suggests solutions from system design, the procedure writer's guideline, and operator training.
ANGLE software for gamma-spectrometry (semiconductor detector gamma-efficiency calculations, in particular) in its various forms has been in use for 15 years in numerous gamma-spectrometry laboratories worldwide. ANGLE is basically a semi-empirical model for efficiency calculations, which combines the advantages of both the absolute and the relative approach, while attempting to minimize their drawbacks. The physical model behind it is the concept of the effective solid angle, which is calculated from the input data on the geometrical and physical characteristics of (1) the source (including the container vessel), (2) the detector and (3) the counting arrangement (including intercepting layers between the latter two). It was shown earlier that only the simultaneous differential treatment of gamma-attenuation, geometry and detector response, as in ANGLE, is essentially justified for this type of calculation. The program can be applied to practically all counting situations encountered in laboratory practice: point, disc, cylindrical or Marinelli sources and any matrix composition. No standards are required, but a so-called 'reference efficiency curve' should be obtained ('once for ever') by measuring a set of calibrated point sources. In summary, ANGLE is characterized by (1) a very broad application range, (2) satisfactory accuracy (of the order of a few percent), (3) easy data manipulation (under WINDOWS), (4) short computation times, (5) flexibility with respect to input parameters and (6) suitability for didactical purposes. Possibilities for applying ANGLE in isotope hydrology analytical procedures (e.g. with radioactive tracers involved) are discussed. Tracer techniques are of particular interest for water resources management in Mediterranean karstic regions, typically abundant with precipitation in winter but scarce in ground water in summer, as is the case with the east coast of the Adriatic, including the coastal part of Montenegro. (author)
This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary for the implementation of the program's modules in the framework of the execution of the program, including: operating procedures of the program, the data input, the macrocommand language, and the assignment of the system's procedures. The report also presents the methodology used in the coding of the individual modules: the rules, the syntax, the method of procedures. An example of the application of the data processing module is also presented. (author)
2010-04-01
... Governor's imposition of sanctions for substantial violations or performance failures by a local area? 667... State Appeals Processes § 667.650 What procedures apply to the appeals of the Governor's imposition of... been found in substantial violation of WIA title I, and has received notice from the Governor...
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Procedures of creep-fatigue life evaluation applied to inelastic design analysis
The application of inelastic analysis to structural design is one of the major concerns in breeder reactor development. In this context, investigations of inelastic constitutive equations have intensified, and various kinds of new constitutive models have been proposed. Although ASME Code Case N-47 provides general guidelines for damage evaluation, they do not seem adequate on the following points: (i) underestimation of fatigue damage under non-proportional loading; (ii) overestimation of creep damage for compressive stress. In order to improve the former problem, the authors propose a procedure for uniaxialization of the history of multi-axial strain components. The latter problem can be improved by using the new creep rupture criterion proposed by ORNL. These procedures were coded as a post-processor and linked to the MARC program with the new constitutive model described above. This paper introduces the features of the procedures. (orig.)
Calculation of the information content of retrieval procedures applied to mass spectral data bases
Marlen, G. van; Dijkstra, Auke; Klooster, H.A. van 't
1979-01-01
A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity o
The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
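The DA-SVD side of the comparison can be sketched as follows: the tissue curve is the convolution of the AIF with a CBF-scaled residue function, so the residue is recovered by inverting the lower-triangular Toeplitz convolution matrix built from the AIF via a (possibly truncated) SVD, and CBF is taken as the maximum of the recovered residue. The AIF shape, the noise-free data, and the cutoff below are illustrative assumptions; with noisy data a cutoff around 20% of the largest singular value is a typical choice.

```python
import numpy as np

def cbf_by_svd_deconvolution(aif, c_voi, dt, sv_cutoff=0.2):
    """Recover the CBF-scaled residue function r(t) from
    C(t) = dt * (AIF conv r) by truncated-SVD inversion of the
    lower-triangular Toeplitz convolution matrix, then CBF = max r.
    Singular values below sv_cutoff * s_max are discarded."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > sv_cutoff * s[0], 1.0 / s, 0.0)
    r = Vt.T @ (s_inv * (U.T @ np.asarray(c_voi)))
    return r.max()

# Noise-free synthetic test: idealised exponential AIF and an
# exponential residue function with known CBF = 0.01.
dt, n = 1.0, 60
t = np.arange(n) * dt
aif = np.exp(-t / 3.0)
aif /= aif.sum() * dt                        # normalise the AIF area
residue = 0.01 * np.exp(-t / 4.0)            # max at t = 0 is the true CBF
c_voi = dt * np.convolve(aif, residue)[:n]   # forward convolution model
cbf_est = cbf_by_svd_deconvolution(aif, c_voi, dt, sv_cutoff=0.0)
```

With exact data and no truncation the recovery is essentially perfect; the paper's comparison concerns how this estimate degrades with noise, delay and dispersion in the AIF.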
Sequential procedure for the design of checklists applied to the patient safety
Pardal-Refoyo JL
2014-07-01
Full Text Available Introduction: Checklists are cognitive mnemonic aids that guide the performance of complex tasks under stress or fatigue, reduce errors of omission and help identify critical incidents. There is a lack of specific methodological guidance for their development. Objective: The aim of the study was to design a structured process for developing checklists applied to patient safety (PS). Material and methods: Systematic review. Ten papers were selected: five related to the structure of checklists, three related to PS research methods (root cause analysis, RCA, and failure mode and effects analysis, FMEA), one related to the construction of indicators and one to consensus methods. Results: A 15-step sequential process collecting the elements proposed in the reviewed literature was designed to support the development of checklists applied to PS. Conclusions: The development of checklists applied to a particular PS process should follow a sequential model that includes the literature review, the RCA and FMEA methods and consensus methods.
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
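For the scalar-weighted case, the optimal average quaternion is the eigenvector, belonging to the largest eigenvalue, of the weighted sum of quaternion outer products. A minimal sketch (scalar weights only, scalar-last convention):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Optimal scalar-weighted quaternion average: the eigenvector of
    M = sum_i w_i * q_i q_i^T belonging to the largest eigenvalue.
    Insensitive to the q / -q sign ambiguity, since q q^T is unchanged
    by negating q.  Quaternions are [x, y, z, w], scalar last."""
    q = np.asarray(quats, dtype=float)
    w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
    M = np.einsum('i,ij,ik->jk', w, q, q)   # weighted sum of outer products
    vals, vecs = np.linalg.eigh(M)          # symmetric matrix -> eigh
    avg = vecs[:, np.argmax(vals)]
    return avg / np.linalg.norm(avg)

# Rotations of +0.1 and -0.1 rad about z average to the identity rotation
# (up to an overall sign); the second input is deliberately negated to
# exercise the sign-invariance.
s, c = np.sin(0.05), np.cos(0.05)
avg = average_quaternions([[0.0, 0.0, s, c],    # rotation +0.1 rad about z
                           [0.0, 0.0, s, -c]])  # -(rotation -0.1 rad about z)
```

The eigenvector formulation is what makes naive componentwise averaging unnecessary: it handles the double-cover sign ambiguity and returns a unit quaternion by construction.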
Hengameh, Habibi; Afsaneh, Raiesifar; Morteza, Khaghanizade; Hosein, Mahmudi; Marjan, Seyed Mazhari; Ebadi, Abbas
2015-01-01
Background: Remarkable advances in educational measurement have demonstrated the need to implement modern and appropriate methods of clinical evaluation. This study was carried out to compare the effect of applying direct observation of procedural skills versus the routine evaluation method on the clinical skills of nursing students. Methods: This randomized clinical trial was conducted on students of the Nursing Army College, Tehran, Iran. After obtaining approval from the Ethics Committee of the Baqiyatall...
A computerized method for applying gradient pulses in NMR imaging is proposed, especially for medical diagnosis, although application to industrial nondestructive testing is feasible. The method overcomes drawbacks associated with the difficulties involved in programming gradient coils, which often preclude the use of oblique images to the detriment of the quality of diagnosis.
Spatial Data Quality Control Procedure applied to the Okavango Basin Information System
Butchart-Kuhlmann, Daniel
2014-05-01
Spatial data is a powerful form of information, capable of providing information of great interest and tremendous use to a variety of users. However, much like other data representing the 'real world', precision and accuracy must be high for the results of data analysis to be deemed reliable and thus applicable to real world projects and undertakings. The spatial data quality control (QC) procedure presented here was developed as the topic of a Master's thesis, in the sphere of and using data from the Okavango Basin Information System (OBIS), itself a part of The Future Okavango (TFO) project. The aim of the QC procedure was to form the basis of a method through which to determine the quality of spatial data relevant for application to hydrological, solute, and erosion transport modelling using the Jena Adaptable Modelling System (JAMS). As such, the quality of all data present in OBIS classified under the topics of elevation, geoscientific information, or inland waters, was evaluated. Since the initial data quality has been evaluated, efforts are underway to correct the errors found, thus improving the quality of the dataset.
PhysioSoft – An Approach in Applying Computer Technology in Biofeedback Procedures
Havelka, Mladen; HAVELKA, Juraj; Delimar, Marko
2009-01-01
The paper presents a description of an original biofeedback computer program called PhysioSoft. It was designed on the basis of the experience in developing biofeedback techniques of an interdisciplinary team of experts from the Department of Health Psychology of the University of Applied Health Studies, the Faculty of Electrical Engineering and Computing of the University of Zagreb, and »Mens Sana«, a private biofeedback practice in Zagreb. The interest in the possibility of producing direct and voluntar...
Current LC-MS methods and procedures applied to the identification of new steroid metabolites.
Marcos, Josep; Pozo, Oscar J
2016-09-01
The study of the metabolism of steroids has a long history, from the first characterizations of the major metabolites of steroidal hormones in the pre-chromatographic era to the latest discoveries of new forms of excretion. The introduction of mass spectrometers coupled to gas chromatography at the end of the 1960s represented a major breakthrough for the elucidation of new metabolites. In the last two decades, this technique has been complemented by the use of liquid chromatography-mass spectrometry (LC-MS). In addition to becoming fundamental in clinical steroid determinations due to its excellent specificity, throughput and sensitivity, LC-MS has emerged as an exceptional tool for the discovery of new steroid metabolites. The aim of the present review is to provide an overview of the current LC-MS procedures used in the quest for novel metabolic products of steroidal hormones and exogenous steroids. Several aspects regarding LC separations are first outlined, followed by a description of the key processes that take place in the mass spectrometric analysis, i.e. the ionization of the steroids in the source and the fragmentation of the selected precursor ions in the collision cell. The different analyzers and approaches employed, together with representative examples of each, are described. Special emphasis is placed on triple quadrupole analyzers (LC-MS/MS), since they are the most commonly employed. Examples of the use of precursor ion scan, neutral loss scan and theoretical selected reaction monitoring strategies are also explained. PMID:26709140
Claudia Barroso Krause
2012-06-01
Full Text Available The need to produce more sustainable buildings has been influencing design decisions all over the world. In Brazil, this makes it imperative to develop strategies and methods that support decision making during the design process, focused on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design for a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria) and the assistance of the jury members during the judging phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to possible future application in similar cases.
Implementation of procedures for kilovoltage evaluation applied to dental X ray system
In this work, measurements were made to evaluate the accuracy and precision of the voltage applied to an X-ray tube, as well as its variation with distance. A dental X-ray system with a nominal voltage of 70 kV was used, together with a portable digital kV meter calibrated by IEE/USP. The kV results presented a variation of 9.7% in accuracy and 1.6% in precision. The results obtained for the distance variation showed only a 0.6% deviation in the measured kVp values. The results are in accordance with the minimum values recommended by Portaria Federal 453 of the Ministerio da Saude. (author)
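A common way to express such results is to take accuracy as the percent deviation of the mean reading from the nominal kV and precision as the coefficient of variation; the paper's exact definitions are not given here, and the readings below are invented.

```python
def kv_accuracy_precision(measured_kvp, nominal_kv):
    """Accuracy as percent deviation of the mean from the nominal kV,
    precision as the coefficient of variation (both in %).  These are
    common definitions; the study's exact formulas are assumed."""
    n = len(measured_kvp)
    mean = sum(measured_kvp) / n
    accuracy = 100.0 * (mean - nominal_kv) / nominal_kv
    # sample standard deviation (n - 1 in the denominator)
    std = (sum((x - mean) ** 2 for x in measured_kvp) / (n - 1)) ** 0.5
    precision = 100.0 * std / mean
    return accuracy, precision

# Hypothetical repeated readings for a 70 kV nominal dental unit
acc, prec = kv_accuracy_precision([63.1, 63.4, 62.8, 63.0, 63.2], 70.0)
```

With these invented readings the unit would be off-nominal by about -9.9% but highly repeatable (CV about 0.35%), illustrating why accuracy and precision are reported separately.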
Highlights: ► Polymeric scaffolds, made from chitosan-based films fixed by chemical (citrate) or natural method (genipin), were developed. ► Nano-indentation with a constant harmonic frequency was applied on porous scaffolds to explore their surface mechanics. ► The relationship between surface mechanical property and cell-surface interactions of scaffold materials was demonstrated. ► Porous scaffolds cross-linked by genipin showed adequate cell affinity, non-toxicity, and suitable mechanical properties. - Abstract: Porous chitosan scaffold is used for tissue engineering and drug delivery, but is limited as a scaffold material due to its mechanical weakness, which restrains cell adhesion on the surface. In this study, a chemical reagent (citrate) and a natural reagent (genipin) are used as cross-linkers for the formation of chitosan-based films. Nanoindentation technique with a continuous stiffness measurement system is particularly applied on the porous scaffold surface to examine the characteristic modulus and nanohardness of a porous scaffold surface. The characteristic modulus of a genipin-cross-linked chitosan surface is ≈2.325 GPa, which is significantly higher than that of an uncross-linked one (≈1.292 GPa). The cell-scaffold surface interaction is assessed. The cell morphology and results of an MTS assay of 3T3-fibroblast cells of a genipin-cross-linked chitosan surface indicate that the enhancement of mechanical properties induced cell adhesion and proliferation on the modified porous scaffold surface. The pore size and mechanical properties of porous chitosan film can be tuned for specific applications such as tissue regeneration.
George Chacko; Sanjiv Ranjan Das
1997-01-01
We develop analytic pricing models for options on averages by means of a state-space expansion method. These models augment the class of Asian options to markets where the underlying traded variable follows a mean-reverting process. The approach builds from the digital Asian option on the average and enables pricing of standard Asian calls and puts, caps and floors, as well as other exotica. The models may be used (i) to hedge long period interest rate risk cheaply, (ii) to hedge event risk (...
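As a numerical cross-check of the kind of payoff these analytic models handle, an arithmetic-average (Asian) call on a mean-reverting Ornstein-Uhlenbeck underlying can be priced by Monte Carlo; all parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def asian_call_mc_mean_reverting(s0, mean_level, kappa, sigma, strike, T,
                                 n_steps=100, n_paths=20000, r=0.0, seed=1):
    """Monte Carlo price of an arithmetic-average call on a mean-reverting
    underlying dS = kappa*(mean_level - S)dt + sigma*dW (Euler scheme),
    assumed already under the pricing measure."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    running = np.zeros(n_paths)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        s = s + kappa * (mean_level - s) * dt + sigma * dw
        running += s
    avg = running / n_steps                 # arithmetic average along path
    return np.exp(-r * T) * np.maximum(avg - strike, 0.0).mean()

# At-the-money example started at the long-run level
price = asian_call_mc_mean_reverting(100.0, 100.0, 2.0, 5.0, 100.0, 1.0)
```

Averaging plus mean reversion both compress the payoff distribution, so such options are markedly cheaper than plain vanilla calls with the same strike; analytic formulas of the kind derived in the paper avoid the Monte Carlo noise entirely.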
The International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS) cover the application of ionizing radiation for all practices and interventions and are, therefore, basic and general in nature. Users of radiation sources have to apply those basic requirements to their own particular practices. That requires a degree of 'interpretation' by the user, which can result in varying levels of regulatory compliance and inconsistencies between applications of the BSS to similar practices. In this context, the Preamble of the BSS states that: 'The [regulatory body] may need to provide guidance on how certain regulatory requirements are to be fulfilled for various practices, for example in regulatory guideline documents.' In order to guide the user to achieve a good standard of protection and to achieve a consistent national approach to licensing and inspection, some countries have developed practice specific regulatory guidance, while others have practice specific regulations. National regulatory guidance is tailored to a country's own legislation and regulations for obvious reasons. This can lead to problems if the guidance is used in other States without appropriate modification to take local requirements into account. There would appear, therefore, to be scope for producing internationally harmonized guidance, while bearing in mind that the ultimate responsibility for the regulatory documents rests with the State. Some regions have taken the initiative of preparing guidance to facilitate the regional harmonization of regulatory control of certain common practices (e.g. radiology). In particular, it is felt that States participating in the IAEA's technical cooperation Model Project on Upgrading Radiation and Waste Safety Infrastructure would benefit significantly from the availability of practice specific guidance. Member States could then more readily develop their own guidance tailored to their own
Offshore wind farm siting procedures applied offshore of Block Island, Rhode Island
O'Reilly, Christopher M.
land. The REZ area is chosen as test site for the algorithm, and an optimal layout for the 5 turbines is found and discussed. Similarly the FAA tool is applied to the Block Island airport demonstrating the complexity of the FAA exclusionary area, and defining the limits of the exclusionary areas. The FAA regulation model is a geometric model in which all major (FAA) regulations within RI and the RI topography are embedded. The user specifies the dimension of the proposed turbines and an airport of interest, and a map of exclusionary zones specific to the turbine height and rules applying to the airport is generated. The model is validated for the entire state of Rhode Island. The micro-siting model finds the optimum placement of each turbine for a given number of turbines within an area. It includes the aerodynamic constraints (loss in wind speed within the wake of a turbine) associated to the deployment of arrays of turbines and the cable interconnection cost. It is combined with the technical, ecological, and social constraints used in the RIOSAMP macro-siting tool to provide a comprehensive micro-siting tool. In the optimization algorithm, a simple wake model and turbine-clustering algorithm are combined with the WIFSI in an objective function; the objective function is optimized with a genetic algorithm (GA).
New interpretive procedure for whole rock U-Pb systems applied to the Vredefort crustal profile
Granulite grade Precambrian gneisses have usually undergone at least one period of strong U depletion. Whole rock U-Pb isotope studies can determine the time(s) of this severe depletion, and this work attempts to place such studies on a more rigorous footing. Two-stage U-Pb systems can be described in terms of one major, episodic differentiation into rocks with varying U/Pb ratios, while three-stage systems can be described by two such distinct episodes. Most of the Precambrian granulites that have been isotopically analyzed have histories too complex to be described as two-stage systems. However, it is demonstrated here that U-Pb data on whole rock suites can yield the complete U-Pb chemical history of a three-stage system (in terms of U/Pb ratios). For a suite of granulites, present-day 207Pb/204Pb and 206Pb/204Pb ratios and element concentration data allow these ratios to be calculated at a number of specific past times and plotted as an array. The degree of scatter in each of these 'past arrays' is graphed as a function of time. The point of least scatter denotes the age of the end of stage 2 in the history of the system. The array slope and the dating of the end of stage 2 also permit the beginning of stage 2 to be calculated. All other parameters in the system (U and Pb concentrations, Pb isotopic ratios) can now be determined for each individual rock throughout its history. The new interpretive method also distinguishes sensitively among various kinds of uranium fractionation which may have operated during the differentiation episodes. It is applied here to uranium-depleted granulites in the deeper part of the Vredefort crustal profile. The times of the two fractionating episodes are calculated at approx. 3860 and approx. 2760 m.y., respectively
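The 'past array' scan can be sketched directly from the decay equations: for each trial age t the radiogenic ingrowth is stripped from the measured ratios, and the spread of the resulting points across the suite is minimized at the true differentiation age. The synthetic suite below (initial common-Pb ratios and the mu = 238U/204Pb values) is invented for illustration, and for simplicity the spread is measured as summed variances rather than as scatter about a fitted array.

```python
import numpy as np

L238, L235, U85 = 1.55125e-10, 9.8485e-10, 137.88  # decay consts /yr, 238U/235U

def past_ratios(r206, r207, mu, t):
    """Pb-isotope ratios of each rock at time t (years before present),
    stripping the in-grown radiogenic Pb from the measured ratios."""
    return (r206 - mu * (np.exp(L238 * t) - 1),
            r207 - (mu / U85) * (np.exp(L235 * t) - 1))

def age_of_least_scatter(r206, r207, mu, ages):
    """Scan trial ages and return the one minimising the spread of the
    'past array' (here simply the summed variances of both ratios)."""
    spread = [np.var(p6) + np.var(p7)
              for p6, p7 in (past_ratios(r206, r207, mu, t) for t in ages)]
    return ages[int(np.argmin(spread))]

# Synthetic suite: common Pb at 2.76 Gyr, then varying U/Pb (mu) to today.
t2 = 2.76e9
mu = np.array([2.0, 5.0, 8.0, 12.0])
r206 = 13.5 + mu * (np.exp(L238 * t2) - 1)           # present-day 206Pb/204Pb
r207 = 14.8 + (mu / U85) * (np.exp(L235 * t2) - 1)   # present-day 207Pb/204Pb
ages = np.linspace(0, 3.5e9, 351)                    # 10 Myr grid
best = age_of_least_scatter(r206, r207, mu, ages)
```

At the true differentiation age all points collapse back to the common initial Pb composition, so the spread drops to (numerically) zero there and the scan recovers 2.76 Gyr.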
Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2015-10-15
Full text: Medical radiology offers great benefits to patients. However, although specific high-dose procedures such as fluoroscopy, interventional radiology and computed tomography (CT) make up a small percentage of imaging procedures, they contribute significantly to the population dose. Patients may suffer tissue damage. The probability of deterministic effects depends on the type of procedure performed, the exposure time, and the dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduced entrance skin dose. In this work, Bi2O3 and ZrO2:8% Y2O3 composites were obtained by mixing them into a P(VDF-TrFE) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy of RQR5 beam quality (70 kV X-ray beam). Irradiated XR-QA2 films were stored and scanned under the same conditions in order to obtain more reliable results. The attenuation factors, evaluated with the XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)
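The attenuation factor itself is simple arithmetic on the two film readings, and an effective linear attenuation coefficient follows from the exponential attenuation law; the doses and thickness below are invented, not the paper's measurements.

```python
import math

def attenuation_factor(dose_in_mGy, dose_out_mGy):
    """Fraction of the beam transmitted through the shielding sheet,
    as measured by the entrance and exit radiochromic films."""
    return dose_out_mGy / dose_in_mGy

def linear_attenuation_coefficient(factor, thickness_mm):
    """Effective mu (per mm) from the attenuation law I = I0 * exp(-mu*x)."""
    return -math.log(factor) / thickness_mm

# Hypothetical readings: 10 mGy incident, 2.5 mGy transmitted, 1 mm sheet
f = attenuation_factor(10.0, 2.5)
mu = linear_attenuation_coefficient(f, 1.0)
```

Here the sheet would transmit 25% of the incident dose, an effective mu of about 1.39 per mm at this beam quality; comparing such factors between composites is what ranks the candidate shielding materials.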
2010-07-01
... the cost in constant dollars of low NOX burner technology applied to Group 1, Phase I boilers. The... separated overfire air as applied to tangentially fired boilers, in lieu of low NOX burner technology for.... Average Capital Cost for Low NOX Burner Technology Applied to Group 1 Boilers The Administrator will...
Kang, Dong Gu, E-mail: littlewing@kins.re.kr [Korea Institute of Nuclear Safety, 62 Gwahak-ro, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); Chang, Soon Heung [Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)
2014-08-15
Highlights: • The combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of the BDBAs. • The safety assessment of OPR-1000 nuclear power plant for SBO accident is performed by applying the CDPP. • By estimating the offsite power restoration time appropriately, the SBO risk is reevaluated. • It is concluded that the CDPP is applicable to safety assessment of BDBAs without significant erosion of the safety margin. - Abstract: Station blackout (SBO) is a typical beyond design basis accident (BDBA) and significant contributor to overall plant risk. The risk analysis of SBO could be important basis of rulemaking, accident mitigation strategy, etc. Recently, studies on the integrated approach of deterministic and probabilistic method for nuclear safety in nuclear power plants have been done, and among them, the combined deterministic and probabilistic procedure (CDPP) was proposed for safety assessment of the BDBAs. In the CDPP, the conditional exceedance probability obtained by the best estimate plus uncertainty method acts as go-between deterministic and probabilistic safety assessments, resulting in more reliable values of core damage frequency and conditional core damage probability. In this study, the safety assessment of OPR-1000 nuclear power plant for SBO accident was performed by applying the CDPP. It was confirmed that the SBO risk should be reevaluated by eliminating excessive conservatism in existing probabilistic safety assessment to meet the targeted core damage frequency and conditional core damage probability. By estimating the offsite power restoration time appropriately, the SBO risk was reevaluated, and it was finally confirmed that current OPR-1000 system lies in the acceptable risk against the SBO. In addition, it is concluded that the CDPP is applicable to safety assessment of BDBAs in nuclear power plants without significant erosion of the safety margin.
This paper presents the results of applying Probabilistic Safety Analysis (PSA) to the cobalt therapy process, performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), Event Trees and Fault Trees. These tools were used to evaluate occupational, public and medical exposures during cobalt therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings concerning the cobalt treatment process were analysed. Regarding the undesired-event probabilities, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), around 10^-10 per year, while worker exposures (Z11) are around 10^-4 per year. Regarding the patient, the Z33 (undesired dose to normal tissue) and Z34 (unirradiated portion of the target volume) probabilities prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. As a conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt therapy treatment process, though there are some weaknesses in their methodological application for this kind of study that require further research. These weaknesses are due to the fact that traditional PSA has mainly been applied to complex hardware systems designed to operate with a high automation level, whilst the cobalt therapy treatment is a relatively simple hardware system with a
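The fault tree arithmetic behind such probabilities reduces, for independent events, to AND/OR gate combinations; a toy sketch with hypothetical probabilities of the same order as the worker-exposure result:

```python
def p_or(*ps):
    """OR gate: probability that at least one independent event occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """AND gate: probability that all independent events occur together."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Illustrative fault tree (all probabilities hypothetical, per demand):
# an exposure requires an initiating event AND an interlock failure
# AND a human error, with either of two initiators.
p_initiator = p_or(1e-2, 5e-3)               # source stuck OR door fault
p_exposure = p_and(p_initiator, 1e-2, 0.7)   # x interlock fail x human error
```

Even this toy tree shows why the computed frequencies span orders of magnitude: each additional independent barrier in an AND chain multiplies the probability down.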
Levin, Michael E.; Hayes, Steven C.; Waltz, Thomas
2010-01-01
The Implicit Relational Assessment Procedure (IRAP) is a promising tool for measuring implicit cognitions in applied research. However, the need for training and block effects can limit its capacity to assess effects with individual stimuli and participants, both of which are important for applied research. We developed a modified IRAP, the Mixed…
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with noneye-tracking nonframe-averaged OCT device and active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to...
Many efforts are made to develop methodologies and tools that can greatly improve quality and safety, which are especially important in critical fields like Severe Accident Management. This paper presents a study to verify the feasibility of applying a structured approach to procedure automation, such as the one offered by the DIAM package, to a sample of SAMG procedures, with the final goal of assessing the possibility of automating SAMG procedure management. This study was carried out using, as input, generic SAMGs recommended by a reactor vendor and, as a computerized tool, DIAM. DIAM is a methodology and tool for representing procedures that offers many automatisms during the editing phase, allowing procedures to be generated that are always aligned with the standard. For the operation phase, it offers operator guidance support and allows procedure automation when the level of detail to which procedures are represented is appropriate and DIAM is properly connected to the field or to a simulator. In this feasibility study, the possibility of utilizing structured approaches to represent and use SAMGs has been verified. The two guidelines studied were formally restructured and reformatted and then implemented according to the DIAM methodology. They can now be used either with the guidance of DIAM Mate (the operator guide module of DIAM) or in a traditional way using the output of DIAM Doc (contained in a separate document). After a description of the DIAM (Development and Implementation of Accident Management procedures) methodology and graphical representation, this paper illustrates how DIAM has been successfully used in this sample application, describing the analyses and work required for the transition from the original version of the SAMGs to the restructured one. The paper contains only some meaningful samples of the original SAMGs and of their new format, as much as needed to understand the methodology and to perform comparisons. The main results are
Alpha spectrometry of the plutonium isotopes can be performed only after complete separation of plutonium from the other components of the matrix. Numerous procedures have therefore been elaborated and tested. This communication presents a comparison of plutonium content determinations in soil, bones, eggshells and reference materials obtained by alpha spectrometry combined with two different separation procedures. The samples were mineralized in concentrated HCl or HF prior to plutonium electrodeposition or coprecipitation with NdF3. Several other details were also tested in various variants. The quality of the spectra is discussed in terms of these pre-treatment methods
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics are among the most revolutionary. Our work aims at setting up new techniques for detection, 3D delineation and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems that make tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners, so they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and in reality during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained with a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check an optimal, error-free procedure on the virtual patient clone, which will then be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
Sugii, Yuh; Kasai, Tomonari; Ikeda, Masashi; Vaidyanath, Arun; Kumon, Kazuki; Mizutani, Akifumi; Seno, Akimasa; Tokutaka, Heizo; Kudoh, Takayuki; Seno, Masaharu
2016-01-01
To identify cell-specific markers, we designed a DNA microarray platform with oligonucleotide probes for human membrane-anchored proteins. Human glioma cell lines were analyzed using microarray and compared with normal and fetal brain tissues. For the microarray analysis, we employed a spherical self-organizing map, which is a clustering method suitable for the conversion of multidimensional data into two-dimensional data and displays the relationship on a spherical surface. Based on the gene expression profile, the cell surface characteristics were successfully mirrored onto the spherical surface, thereby distinguishing normal brain tissue from the disease model based on the strength of gene expression. The clustered glioma-specific genes were further analyzed by polymerase chain reaction procedure and immunocytochemical staining of glioma cells. Our platform and the following procedure were successfully demonstrated to categorize the genes coding for cell surface proteins that are specific to glioma cells. Our assessment demonstrates that a spherical self-organizing map is a valuable tool for distinguishing cell surface markers and can be employed in marker discovery studies for the treatment of cancer. PMID:26966393
Baioco, Juliana Souza; Seckler, Carolina dos Santos; Silva, Karinna Freitas da; Jacob, Breno Pinheiro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Metodos Computacionais e Sistemas Offshore; Silvestre, Jose Roberto; Soares, Antonio Claudio; Freitas, Sergio Murilo Santos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas
2008-07-01
The perforation process is an important step in well construction. It provides contact between the reservoir rock and the well, allowing oil production. The procedure consists in using explosive charges to bore a hole through the casing and into the rock, so that the reservoir fluid can flow to the well. Therefore, the right choice of both the gun and the charge type is extremely important, since many factors influence the process and affect productivity, such as shot density, penetration depth, hole diameter, etc. The objective of this paper is to present the results of a parametric study evaluating the influence of parameters related to the explosive charges on well productivity, since there are many types of charges with different properties, which give specific characteristics to the perforated area. For that purpose, a commercial program will be used, which allows the simulation of the flow problem, along with a finite element mesh generator that uses a pre-processor and a program that enables the construction of reservoir, well and perforation models. It can be observed that the penetration depth has a greater influence than the hole diameter, making it an important factor when choosing the charge to be used in the project. (author)
Crapse, K.; Cozzi, A.; Crawford, C.; Jurgensen, A.
2006-09-30
In order to assess the effect of extended curing times at elevated temperatures on saltstone containing Tank 48H waste, saltstone samples prepared as a part of a separate study were analyzed for benzene using a modification of the United States Environmental Protection Agency (USEPA) method 1311 Toxicity Characteristic Leaching Procedure (TCLP). To carry out TCLP for volatile organic analytes (VOA), such as benzene, in the Savannah River National Laboratory (SRNL) shielded cells (SC), a modified TCLP Zero-Headspace Extractor (ZHE) was developed. The modified method was demonstrated to be acceptable in a side-by-side comparison with an EPA-recommended ZHE using nonradioactive saltstone containing tetraphenylborate (TPB). TCLP results for all saltstone samples tested containing TPB (both simulant and actual Tank 48H waste) were below the regulatory limit for benzene (0.5 mg/L). In general, higher curing temperatures corresponded to higher concentrations of benzene in TCLP extract. The TCLP performed on the simulant samples cured under the most extreme conditions (3000 mg/L TPB in salt and cured at 95 °C for at least 144 days) resulted in benzene values that were greater than half the regulatory limit. Taking into account that benzene in TCLP extract was measured on the same order of magnitude as the regulatory limit, that these experimental conditions may not be representative of actual curing profiles found in the saltstone vault, and that there is significant uncertainty associated with the precision of the method, it is recommended that, to increase confidence in TCLP results for benzene, the maximum curing temperature of saltstone be less than 95 °C. At this time, no further benzene TCLP testing is warranted. Additional verification would be recommended, however, should future processing strategies result in significant changes to salt waste composition in saltstone, as factors beyond the scope of this limited study may influence the decomposition of TPB in saltstone.
The T-matrix averaging procedure advocated by Burke, Berrington and Sukumar [1981, J. Phys. B. At. Mol. Phys. 14, 289] is demonstrated to hold in a class of soluble models for two different L2 basis expansions. The convergence rates as the bases are extended to completeness are determined. (author)
AIM: To compare and contrast interventional radiology (IR) clinical and research practices with the technology assessment and evidence-based medicine (EBM) paradigms and make suggestions for the phased evaluation of new IR procedures. MATERIALS AND METHODS: Course literature of the Association of University Radiologists' 'Basic Technology Assessment for Radiologists' course and the McMaster University Health Information Research Unit's 'How to Teach Evidence-Based Medicine 1999' course were used to identify major publications in each discipline. A computer search was performed to seek other relevant literature. A model of traditional development of IR procedures was developed. Suggestions for the phased evaluation of IR procedures were derived. RESULTS: As in diagnostic radiology, several levels of progressively stronger IR study design can be described and related to EBM 'levels of evidence'. These range from case reports and case series through case-control and cohort studies to randomized controlled trials (RCTs). The major weakness in the existing IR literature is the predominance of small, uncontrolled, case series. Randomized controlled trials are likely to provide the best possible evidence of effectiveness. They are expensive and randomization is sometimes unethical or impractical. Case-control and cohort studies have been under-utilized. Evidence-based medicine indices of benefit and harm have not yet been applied in IR and may have clinical advantages over traditional statistical methods. A literature search (10 years) using MeSH terms 'radiology, interventional' and 'efficacy' yielded 30 papers. Combining 'radiology, interventional' and 'evidence-based medicine' yielded no papers. Comparative searches substituting the term 'diagnostic imaging' for 'radiology, interventional' yielded 4883 and 62 papers, respectively. CONCLUSION: Principles of technology assessment and EBM can be applied to the investigation of new IR procedures. A tool is needed to
We present in this work a review of the conventional quantization procedure, of the one proposed by I.E. Segal, and of a new quantization procedure similar to the latter, for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures are equivalent, but for the nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs
Averaging anisotropic cosmologies
We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of anisotropic pressure-free models. Adopting the Buchert scheme, we recast the averaged scalar equations in Bianchi-type form and close the standard system by introducing a propagation formula for the average shear magnitude. We then investigate the evolution of anisotropic average vacuum models and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. The presence of nonzero average shear in our equations also allows us to examine the constraints that a phase of backreaction-driven accelerated expansion might put on the anisotropy of the averaged domain. We close by assessing the status of these and other attempts to define and calculate 'average' spacetime behaviour in general relativity
Bouyer, Patricia; Markey, Nicolas; Randour, Mickael; Larsen, Kim G.; Laursen, Simon
2015-01-01
Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this ...
On the way towards a generalized entropy maximization procedure
Bagci, G. Baris; Tirnakli, Ugur
2008-01-01
We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q in the interval [0,1], in contrast to the stationary distribution of the inverse power law ob...
Gramkow, Claus
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
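The contrast this abstract draws between the quaternion barycenter and a proper Riemannian (intrinsic) mean can be sketched numerically. The following is a minimal illustration under our own naming, not the author's implementation; for rotations spread about a single axis the two estimates nearly coincide, which is the paper's point that the barycenter is a good first approximation to the Riemannian mean.

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def quat_mul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_log(q):
    """Rotation vector (axis times angle) of a unit quaternion."""
    s = np.linalg.norm(q[1:])
    if s < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(s, q[0]) * q[1:] / s

def quat_exp(v):
    """Unit quaternion for a rotation vector."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(theta / 2.0)], np.sin(theta / 2.0) * v / theta])

def barycenter_mean(quats):
    """Naive mean: average the quaternion 4-vectors, then renormalise."""
    m = np.mean(quats, axis=0)
    return m / np.linalg.norm(m)

def riemannian_mean(quats, iterations=20):
    """Intrinsic (Karcher) mean: iterate averaging in the tangent space."""
    mu = quats[0]
    for _ in range(iterations):
        tangent = np.mean(
            [quat_log(quat_mul(quat_conj(mu), q)) for q in quats], axis=0)
        mu = quat_mul(mu, quat_exp(tangent))
    return mu

# three rotations about z by 0.1, 0.2 and 0.3 rad; both means recover ~0.2 rad
quats = np.array([quat_from_axis_angle([0, 0, 1], a) for a in (0.1, 0.2, 0.3)])
angle_bary = quat_log(barycenter_mean(quats))[2]
angle_riem = quat_log(riemannian_mean(quats))[2]
```

For widely spread rotations the barycenter drifts away from the intrinsic mean, which is where the Riemannian corrections discussed in the article matter.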
Van Essen, H.
2004-01-01
This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so-called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to th...
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-01-01
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and interchangeability is O(\epsilon^2) equivalent to the outcome of the corresponding homogeneous model, where \epsilon is the level of heterogeneity. We then use this averaging pr...
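The O(ε²) equivalence claimed in this abstract can be checked numerically on a toy outcome function. The exponential per-agent payoff below is our own illustrative choice, not the paper's model: halving the heterogeneity level ε should roughly quarter the gap between the heterogeneous outcome and its homogeneous counterpart.

```python
import numpy as np

def outcome(params):
    """A smooth, interchangeable model outcome: a nonlinear per-agent payoff, averaged."""
    return np.mean(np.exp(params))

rng = np.random.default_rng(0)
d = rng.uniform(-1.0, 1.0, size=1000)
d -= d.mean()                        # zero-mean heterogeneity profile
pbar = 0.5                           # common (homogeneous) parameter value

homogeneous = outcome(np.full(d.size, pbar))
gap = {}
for eps in (0.2, 0.1):
    gap[eps] = abs(outcome(pbar + eps * d) - homogeneous)

# quadratic scaling: halving eps should quarter the gap, so ratio ~ 4
ratio = gap[0.2] / gap[0.1]
```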
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... natural approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
Averaged extreme regression quantile
Jureckova, Jana
2015-01-01
Various events in nature, economics and other areas force us to combine the study of extremes with regression and other methods. A useful tool for reducing the role of nuisance regression, while we are interested in the shape or tails of the basic distribution, is provided by the averaged regression quantile, and namely by the averaged extreme regression quantile. Both are weighted means of regression quantile components, with weights depending on the regressors. Our primary interest is ...
Gauge-Invariant Average of Einstein Equations for finite Volumes
Smirnov, Juri
2014-01-01
For the study of cosmological backreaction an averaging procedure is required. In this work a covariant and gauge-invariant averaging formalism for finite volumes is developed. This averaging is applied to the scalar parts of Einstein's equations. For this purpose dust, as a physical laboratory, is coupled to the gravitating system. The goal is to study the deviation from the homogeneous universe and the impact of this deviation on the dynamics of our universe. Fields of physical observers are included in the studied system and used to construct a reference frame in which the averaging is performed without a formal gauge fixing. The derived equations resolve the question of whether backreaction is gauge dependent.
Correlations between the most developed (G7) countries. A moving average window size optimisation
Miskiewicz, J; Miskiewicz, Janusz; Ausloos, Marcel
2005-01-01
Different distance matrices are defined and applied to look for correlations between the gross domestic product of G7 countries. Results are illustrated through displays obtained from various graph methods. Significant similarities between results are obtained. A procedure for choosing the best distance function is proposed, taking into account the size of the window in which correlations are averaged.
Correlations Between the Most Developed (G7) Countries. A Moving Average Window Size Optimisation
Different distance matrices are defined and applied to look for correlations between the gross domestic product of G7 countries. Results are illustrated through displays obtained from various graph methods. Significant similarities between results are obtained. A procedure for choosing the best distance function is proposed taking into account the size of the window in which correlations are averaged. (author)
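The window-averaged correlation distance underlying the procedure in these two records can be sketched as follows. This is a generic illustration, not the authors' code: the function name and the distance form sqrt(2(1 - c)), a standard correlation-based distance, are our assumptions.

```python
import numpy as np

def window_distance(x, y, window):
    """Correlation distance sqrt(2(1 - c)) computed in each trailing window of the series."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    out = []
    for end in range(window, len(x) + 1):
        c = np.corrcoef(x[end - window:end], y[end - window:end])[0, 1]
        # guard against tiny negative values from rounding before the sqrt
        out.append(np.sqrt(max(0.0, 2.0 * (1.0 - c))))
    return np.array(out)

t = np.arange(40.0)
d_same = window_distance(t, 2.0 * t + 1.0, window=10)   # perfectly correlated series
d_anti = window_distance(t, -t, window=10)              # perfectly anticorrelated series
```

Scanning such distances over a range of window sizes, as the papers propose, is then a matter of an outer loop over `window`.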
Averaging anisotropic cosmologies
Barrow, J D; Barrow, John D.; Tsagas, Christos G.
2006-01-01
We examine the effects of spatial inhomogeneities on irrotational anisotropic cosmologies by looking at the average properties of pressure-free Bianchi-type models. Adopting the Buchert averaging scheme, we identify the kinematic backreaction effects by focussing on spacetimes with zero or isotropic spatial curvature. This allows us to close the system of the standard scalar formulae with a propagation equation for the shear magnitude. We find no change in the already known conditions for accelerated expansion. The backreaction terms are expressed as algebraic relations between the mean-square fluctuations of the models' irreducible kinematical variables. Based on these we investigate the early evolution of averaged vacuum Bianchi type $I$ universes and those filled with pressureless matter. In the latter case we show that the backreaction effects can modify the familiar Kasner-like singularity and potentially remove Mixmaster-type oscillations. We also discuss the possibility of accelerated expansion due to ...
Essén, H
2003-01-01
This paper addresses the problem of the separation of rotational and internal motion. It introduces the concept of average angular velocity as the moment of inertia weighted average of particle angular velocities. It extends and elucidates the concept of Jellinek and Li (1989) of separation of the energy of overall rotation in an arbitrary (non-linear) $N$-particle system. It generalizes the so-called Koenig's theorem on the two parts of the kinetic energy (center of mass plus internal) to three parts: center of mass, rotational, plus the remaining internal energy relative to an optimally translating and rotating frame.
Carrillo, Rafael E; Wiaux, Yves
2013-01-01
Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013) introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
Fouz, M. C.; Puerta Pelayo, J.
2004-07-01
In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail and a list of the most common problems and possible solutions is given. This document can be considered as a chamber test handbook for beginners. (Author) 3 refs.
Martin, Jill J.; Skinner, Christopher H.; Neddenriep, Christine E.
2001-01-01
Investigates the impact of the interspersal procedure on students' reading tasks. Students read out loud two passages, with one containing additional interspersed brief, easy paragraphs. Students selected the passage that would require the least effort and the least time to read. Results suggest that the interspersal procedure may improve…
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to...
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
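The error-reduction idea behind AMA, an unbiased estimator built from many cheap approximate evaluations plus a bias correction computed on a few exact solves, can be mimicked in a scalar toy model. This is only a caricature of the method: the lattice Dirac-operator setting is replaced here by correlated Gaussian surrogates of our own construction.

```python
import numpy as np

N_CHEAP, N_EXACT = 10_000, 100     # many cheap approximations, few exact solves

def trial(rng):
    # toy "exact" observable per configuration, and a cheap approximation that
    # is strongly correlated with it but systematically biased
    exact = rng.normal(1.0, 0.5, size=N_CHEAP)
    approx = 0.9 * exact + 0.05 + rng.normal(0.0, 0.02, size=N_CHEAP)
    subset = rng.choice(N_CHEAP, size=N_EXACT, replace=False)
    # AMA-style estimator: cheap average plus the exact-minus-approx correction
    # evaluated only on the small subset; the correction removes the bias
    ama = approx.mean() + (exact[subset] - approx[subset]).mean()
    naive = exact[subset].mean()   # same exact-solve budget, no cheap samples
    return ama, naive

rng = np.random.default_rng(1)
results = np.array([trial(rng) for _ in range(200)])
std_ama, std_naive = results.std(axis=0)
bias_ama = abs(results[:, 0].mean() - 1.0)
```

Because the approximation tracks the exact values closely, the corrected estimator has a much smaller statistical error than the naive estimator at the same exact-solve cost, which is the mechanism the abstract describes.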
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-01-01
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced with its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of \emph{differentiability} and \emph{interchangeability} is $O(\epsilon^2)$ equivalent to the outcome of the corresponding homogeneous model, where $\epsilon$ is the level of heterogeneity. We then us...
A novel approach for the averaging of magnetocardiographically recorded heart beats
DiPietroPaolo, D [Advanced Technologies Biomagnetics, Pescara (Italy); Mueller, H-P [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany); Erne, S N [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany)
2005-05-21
Performing signal averaging in an efficient and correct way is indispensable, since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing the signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), relies on spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain an accurate signal detection, temporal alignment methods and objective classification criteria are developed. A processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals, reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recordings, of body-surface-potential mapping (BSPM) and of magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
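The core premise of temporal averaging, that residual noise over n aligned beats falls roughly as 1/sqrt(n), can be sketched with a synthetic beat. The Gaussian template below is our illustrative stand-in for a real QRS complex; the paper's alignment and hierarchical-clustering steps are omitted (the epochs here are already perfectly aligned).

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-((t - 0.5) / 0.05) ** 2)   # idealised beat template
sigma = 0.5                                   # per-sample noise level

def average_epochs(n):
    """Temporal average of n noisy, already-aligned repetitions of the beat."""
    epochs = template + rng.normal(0.0, sigma, size=(n, t.size))
    return epochs.mean(axis=0)

def residual_rms(avg):
    """RMS deviation of the averaged signal from the noise-free template."""
    return np.sqrt(np.mean((avg - template) ** 2))

rms_single = residual_rms(average_epochs(1))
rms_100 = residual_rms(average_epochs(100))   # noise drops roughly as 1/sqrt(n)
```

Misaligned or morphologically distinct beats break this sqrt(n) law, which is why the paper invests in alignment and clustering before averaging.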
Robust Averaging Level Control
Rosander, Peter; Isaksson, Alf; Löfberg, Johan; Forsman, Krister
2011-01-01
Frequent inlet flow changes typically cause problems for averaging level controllers. For a frequently changing inlet flow the upsets do not occur when the system is in steady state and the tank level at its set-point. For this reason the tuning of the level controller gets quite complicated, since not only the size of the upsets but also the time in between them relative to the hold-up of the tank have to be considered. One way to obtain optimal flow filtering while directly accounting for futur...
Negative Average Preference Utilitarianism
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the “harmful” event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current “positive” forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Basics of averaging of the Maxwell equations
Chipouline, A; Tretyakov, S
2011-01-01
Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. the transition from the microscopic MEs to their macroscopic counterparts, is one of the main steps in the electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is rarely discussed properly in university courses and the respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective of what type of material is studied) which have to be satisfied. Any homogenization model has to be consistent with these basic principles; a model that does not conform to them cannot be accepted as credible. Another goal of this paper is to establish the averaging procedure for metamaterials, which is rather close to the case of compound materials but should include the magnetic response of the inclusi...
Some applications of stochastic averaging method for quasi Hamiltonian systems in physics
DENG MaoLin; ZHU WeiQiu
2009-01-01
Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.
Some applications of stochastic averaging method for quasi Hamiltonian systems in physics
(no author listed)
2009-01-01
Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.
Nonequilibrium statistical averages and thermo field dynamics
An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
Average nuclear surface properties
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended to the case where there is a neutron gas instead of vacuum on one side of the plane surface. The calculations were performed with the Thomas-Fermi model of Seyler and Blanchard. The parameters of the interaction of this model were determined by a least-squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetries of the nucleon matter, ranging from symmetry to beyond the neutron-drip line, until the system can no longer maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model, which was then fitted to experimental masses. (orig.)
Felipe Scalabrin
2015-06-01
This article aims to clarify the doubts about Civil Procedure Code article 475-J and its moment of incidence, mainly considering the approval of precedent nº 517 of the Superior Court of Justice.
Sattsangi, Prem D.
2014-01-01
A laboratory method for teaching inorganic qualitative analysis and chemical equations is described. The experiment has been designed to focus attention on cations and anions that react to form products. This leads to a logical approach to understand and write chemical equations. The procedure uses 3 mL plastic micropipettes to store and deliver…
NONE
1998-12-01
On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.
Quantum Averaging of Squeezed States of Light
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single...
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
Improving consensus structure by eliminating averaging artifacts
KC Dukka B
2009-03-01
Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
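The geometric origin of the averaging artifacts this abstract targets can be seen in one line of linear algebra: averaging the coordinates of rotated conformers shortens bonds. A toy 2D sketch (purely illustrative, unrelated to the paper's Monte Carlo refinement itself):

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# two conformers of a unit-length bond, swung +/- 0.5 rad about a fixed atom
bond = np.array([1.0, 0.0])
conformers = np.array([rot2d(0.5) @ bond, rot2d(-0.5) @ bond])

# coordinate averaging collapses the swing: the averaged bond has length
# cos(0.5) ~ 0.88 instead of 1, an unphysical shortening
averaged = conformers.mean(axis=0)
averaged_length = np.linalg.norm(averaged)
```

This is exactly the kind of local-geometry violation the harmonic pseudo-energy refinement is designed to repair while staying close to the averaged coordinates.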
Grabowski, I; Della Sala, F
2013-01-01
We present a simple and non-empirical method to determine optimal scaling coefficients, within the (spin-component)-scaled MP2 approach, for calculating intermolecular potential energies of noncovalently interacting systems. The method is based on an observed proportionality between (spin-component) MP2 and CCSD(T) energies over a wide range of intermolecular distances, and allows one to compute a large portion of the dissociation curve with high accuracy at the cost of a single CCSD(T) calculation. The accuracy of the present procedure is assessed for a series of noncovalently interacting test systems: the obtained results reproduce CCSD(T) quality in all cases and clearly outperform conventional MP2, CCSD and SCS-MP2 results. The difficult case of the beryllium dimer is also considered.
AIM: To design a spreadsheet program for the rapid analysis of interventional radiology (IR) data, produced in local research or reported in the literature, using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are: for benefit, relative risk reduction, absolute risk reduction, and number needed to treat (NNT); for harm, relative risk, relative odds, and number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures.
MacEneaney, P.M.; Malone, D.E.
Basics of averaging of the Maxwell equations for bulk materials
Chipouline, A.; Simovski, C.; Tretyakov, S.
2012-01-01
Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some b...
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of certain periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., when the Jacobian of the averaged function at the zero vanishes, the classical averaging theory provides no information about the periodic solution associated with that non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the FitzHugh-Nagumo system.
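The reduction described above can be made concrete with the textbook van der Pol example (not one of the paper's two applications): in polar coordinates, first-order averaging of the van der Pol oscillator gives the averaged radial function f_bar(r) = (1/2π)∫₀^{2π} (1 − r²cos²θ) r sin²θ dθ = (r/2)(1 − r²/4), whose simple zero at r = 2 is the limit-cycle amplitude. A minimal numerical sketch:

```python
import math

def averaged_f(r, n=4000):
    """Numerically average F(theta, r) = (1 - r^2 cos^2 theta) * r * sin^2 theta
    over one period; analytically this equals (r/2)(1 - r^2/4)."""
    h = 2 * math.pi / n
    return sum((1 - (r * math.cos(k * h)) ** 2) * r * math.sin(k * h) ** 2
               for k in range(n)) / n

def bisect_zero(f, a, b, tol=1e-10):
    """Bisection for the non-trivial zero of the averaged function."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The non-trivial zero sits at r = 2 (the limit-cycle amplitude), and it is
# simple: the Jacobian d/dr [(r/2)(1 - r^2/4)] = 1/2 - 3r^2/8 = -1 there,
# so the classical theory applies.
amplitude = bisect_zero(averaged_f, 1.0, 3.0)
```

The non-simple-zero case treated by the paper is precisely the situation where this Jacobian vanishes at the root and the bisection-plus-derivative picture above no longer certifies a periodic solution.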
Average Shape of Transport-Limited Aggregates
Davidovitch, Benny; Choi, Jaehyuk; Bazant, Martin Z.
2005-08-01
We study the relation between stochastic and continuous transport-limited growth models. We derive a nonlinear integro-differential equation for the average shape of stochastic aggregates, whose mean-field approximation is the corresponding continuous equation. Focusing on the advection-diffusion-limited aggregation (ADLA) model, we show that the average shape of the stochastic growth is similar, but not identical, to the corresponding continuous dynamics. Similar results should apply to DLA, thus explaining the known discrepancies between average DLA shapes and viscous fingers in a channel geometry.
Multigrid solution for the compressible Euler equations by an implicit characteristic-flux-averaging
Kanarachos, A.; Vournas, I.
A formulation of an implicit characteristic-flux-averaging method for the compressible Euler equations, combined with the multigrid method, is presented. The method is based on a correction scheme and an implicit Godunov-type finite-volume scheme, and is applied to two-dimensional cases. Its principal feature is an averaging procedure based on the eigenvalue analysis of the Euler equations, by means of which the fluxes are evaluated at the finite-volume faces. The performance of the method is demonstrated for different flow problems around RAE 2822 and NACA 0012 airfoils and for an internal flow over a circular arc.
For detailed 3-D thermal analysis of high-level waste repositories a computer program TASTE (Three-dimensional Analysis of Salt dome Temperatures) is under development, based on an analytical model of a continuous time-dependent point source in an infinite solid of homogeneous isotropic material with temperature-independent properties. The program is based on the assumption that the high-level waste will be disposed of in a number of boreholes placed in a square, rectangular or hexagonal pattern in one or more burial layers. Heat generation, borehole pitch and length, burial layer area, relative distance between the layers, loading sequence and loading tempo can be varied arbitrarily. Preliminary versions of the program were applied to establish the influence of the following variables relevant to the temperature rise distribution in a high-level waste burial area in a salt dome: (1) Disposal borehole patterns: it was established that the influence on the temperature is very limited. Hence quite some flexibility is allowed with respect to the disposal pattern to be chosen. (2) Loading tempi: it was established that the maxima of the temperature rises are hardly influenced by differences in loading tempi. For a multi-layer burial configuration an underlying burial area may be judged to give no problems for the disposal operations in an overlying burial area, even with a very slow loading rate. (3) Leaving certain borehole positions unused: not utilizing certain borehole positions has a very positive effect on the reduction of local temperature rises. (author)
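The analytical kernel described above, a continuous point source in an infinite homogeneous solid, has the standard conduction solution ΔT(r,t) = q/(4πkr)·erfc(r/(2√(αt))), which approaches the steady-state value q/(4πkr) at long times; a borehole field is then a superposition of such sources. A minimal sketch (all material parameters are illustrative, not TASTE's values, and constant source power is assumed instead of decaying heat generation):

```python
import math

def temp_rise(q, k, alpha, r, t):
    """Temperature rise (K) at distance r (m) and time t (s) from a continuous
    point source of constant power q (W) in an infinite homogeneous solid.
    k: thermal conductivity (W/m/K), alpha: thermal diffusivity (m^2/s).
    Standard result: dT = q/(4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))."""
    return q / (4 * math.pi * k * r) * math.erfc(r / (2 * math.sqrt(alpha * t)))

# Illustrative salt-like parameters; the rise grows monotonically toward
# the steady-state value q/(4*pi*k*r).
dT_early = temp_rise(q=1000.0, k=5.0, alpha=3e-6, r=10.0, t=1e7)
dT_late = temp_rise(q=1000.0, k=5.0, alpha=3e-6, r=10.0, t=1e12)
```

Time-dependent heat generation (decaying waste) would be handled by convolving this kernel with q(t), and a borehole pattern by summing shifted copies, which is the superposition structure the program's parameter studies exploit.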
Average beta measurement in EXTRAP T1
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in EXTRAP T1 is described. The results of a series of measurements yielding β_θ as a function of the externally applied toroidal field are presented. (author)
Probability density function transformation using seeded localized averaging
Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) when the SLA procedure is applied. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces the variance, sharpening the distribution's peak. Our results are the first proofs of these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF, the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
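The two-pulse-height claim is easy to check by brute force. The sketch below uses plain pairwise averaging of Gaussian samples (SLA's dynamic-window seeding is omitted), for which the standard deviation, and hence the FWHM = 2√(2 ln 2)·σ, shrinks by a factor of √2 while the mean is preserved:

```python
import random
import statistics

random.seed(42)

# Gaussian "pulse-height" samples around a spectral peak (illustrative values).
mu, sigma, n = 100.0, 5.0, 100_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

# Average disjoint pairs of pulse-heights: the simplest, un-seeded analogue
# of SLA's window averaging.
pairs = [(samples[i] + samples[i + 1]) / 2 for i in range(0, n, 2)]

sd_raw = statistics.stdev(samples)
sd_avg = statistics.stdev(pairs)
# sd_avg should sit near sigma / sqrt(2), i.e. a narrower peak, with the
# mean of the pairs unchanged, as the abstract's properties (1)-(2) state.
```

The actual SLA transform is not a plain pair average (the window is seeded and dynamic, and the result is no longer Gaussian), but the variance-reduction direction shown here matches the proved property.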
Small scale magnetic flux-averaged magnetohydrodynamics
By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends for solar coronal loops is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over times shorter than τ. These modified boundary contributions correspond also to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits
A proxy-variable search procedure
Jaqueson K. Galimberti
2009-01-01
This paper proposes a proxy-variable search procedure, based on a sensitivity analysis framework, aiming to provide a useful tool for applied researchers facing measurement or proxy-variable uncertainties. Extending the sensitivity analysis literature, it proposes two main methodological innovations. The first relates to the use of a proxies grouping process to obtain averaged coefficient estimators for theoretical explanatory variables that have more than one possible mea...
Average Range and Network Synchronizability
The influence of structural properties of a network on the network synchronizability is studied by introducing a new concept of average range of edges. For both small-world and scale-free networks, the effect of average range on the synchronizability of networks with bounded or unbounded synchronization regions is illustrated through numerical simulations. The relations between average range, range distribution, average distance, and maximum betweenness are also explored, revealing the effects of these factors on the network synchronizability of the small-world and scale-free networks, respectively. (general)
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
On the average pairing energy in nuclei
The macroscopic-microscopic method is applied to calculate the nuclear energies, especially the microscopic shell and pairing corrections. The single-particle levels are obtained with the Yukawa folded mean-field potential. The macroscopic energy is evaluated using the Lublin-Strasbourg Drop model. The shell corrections are obtained using the Strutinsky method with smoothing in nucleon number space. The average nuclear pairing energy is also determined by folding the BCS sums in nucleon number space. The average pairing energy dependence on the nuclear elongation is investigated. (author)
"Pricing Average Options on Commodities"
Kenichiro Shiraya; Akihiko Takahashi
2010-01-01
This paper proposes a new approximation formula for pricing average options on commodities in a stochastic volatility environment. In particular, it derives an option pricing formula under the Heston and an extended λ-SABR stochastic volatility models (the latter includes an extended SABR model as a special case). Moreover, numerical examples support the accuracy of the proposed average option pricing formula.
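For orientation only, the sketch below prices an arithmetic-average (Asian) call by plain Monte Carlo under constant-volatility Black-Scholes dynamics. It is a brute-force baseline, not the paper's stochastic-volatility approximation formula, and all parameter values are illustrative:

```python
import math
import random

def asian_call_mc(s0, strike, r, sigma, T, n_steps=50, n_paths=20_000, seed=1):
    """Monte Carlo price of an arithmetic-average-price Asian call under
    Black-Scholes dynamics (constant volatility, discrete averaging dates).
    This is a baseline sketch, not the stochastic-volatility setting of the
    paper."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, total = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            total += s
        payoff_sum += max(total / n_steps - strike, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

# At-the-money example: averaging damps the effective volatility, so the
# Asian call is cheaper than the corresponding European call.
price = asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2, T=1.0)
```

Analytical approximations such as the paper's exist precisely because this Monte Carlo route becomes expensive when embedded in calibration loops.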
Using Bayes Model Averaging for Wind Power Forecasts
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might, however, be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
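The central object of BMA, the predictive PDF as a weight-averaged mixture of the members' PDFs, can be sketched as follows. Gaussian components and hand-picked weights are assumed purely for illustration (Sloughter et al. use gamma components for wind speed, and the weights would normally be fitted by EM over a training period):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bma_pdf(x, members, weights):
    """BMA predictive density: a weighted average of the ensemble members'
    PDFs, with the weights acting as posterior model probabilities."""
    return sum(w * normal_pdf(x, mu, s) for (mu, s), w in zip(members, weights))

# Three hypothetical site forecasts (mean wind speed in m/s, spread) and
# weights reflecting each member's skill over a training period.
members = [(8.0, 1.0), (9.5, 1.5), (7.0, 0.8)]
weights = [0.5, 0.3, 0.2]

# The mixture is itself a proper PDF; check total mass with a Riemann sum.
grid = [i * 0.01 for i in range(0, 2000)]
mass = sum(bma_pdf(x, members, weights) * 0.01 for x in grid)
```

The group-forecast point value is then the mixture mean, a weights-times-means average, which is how skilled sites receive "greater influence over the group forecast".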
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and do not permit analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Thomson scattering in the average-atom approximation
Johnson, W. R.; Nilsen, J.; Cheng, K. T.
2012-01-01
The average-atom model is applied to study Thomson scattering of x-rays from warm-dense matter with emphasis on scattering by bound electrons. Parameters needed to evaluate the dynamic structure function (chemical potential, average ionic charge, free electron density, bound and continuum wave-functions and occupation numbers) are obtained from the average-atom model. The resulting analysis provides a relatively simple diagnostic for use in connection with x-ray scattering measurements. Appli...
UNEMPLOYMENT BENEFIT, MINIMUM WAGE AND AVERAGE SALARY EARNINGS IN ROMANIA
2012-01-01
The existence of a long-run equilibrium between average salary earnings and labour market public institutions, such as unemployment benefit and minimum wage, is checked using ARDL bounds testing procedure. The results pointed out that long-run causality runs from average salary earnings to labour market public institutions and not vice versa. The short-run dynamics are depicted as well.
Papageorgiou, Nikolaos S
2009-01-01
Offers an examination of important theoretical methods and procedures in applied analysis. This book details the important theoretical trends in nonlinear analysis and applications to different fields. It is suitable for those working on nonlinear analysis.
Average Light Intensity Inside a Photobioreactor
Herby Jean
2011-01-01
For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions, along with estimated values, we applied the Lambert-Beer law to formulate an equation to calculate how much light intensity escapes a photobioreactor and to determine the average light intensity that was present inside the reactor.
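The calculus result the project points toward follows directly from the Lambert-Beer law I(x) = I0·e^(-kx): averaging over a light path of length L gives I_avg = I0·(1 − e^(-kL))/(kL). A sketch with illustrative values (not the project's estimates for its reactor or culture), checked against direct numerical integration:

```python
import math

def average_intensity_numeric(i0, k, L, n=100_000):
    """Depth-average of Lambert-Beer attenuation I(x) = I0*exp(-k*x) over a
    light path of length L, by midpoint-rule numerical integration."""
    dx = L / n
    return sum(i0 * math.exp(-k * (j + 0.5) * dx) for j in range(n)) * dx / L

def average_intensity_closed(i0, k, L):
    """Closed form: (1/L) * integral of I0*exp(-k*x) dx = I0*(1 - exp(-k*L))/(k*L)."""
    return i0 * (1 - math.exp(-k * L)) / (k * L)

# Illustrative numbers: incident intensity 1000 (arbitrary units), an
# effective attenuation coefficient k = 40 1/m for an absorbing algae
# culture, and a reactor depth of 0.1 m, so kL = 4.
i_avg = average_intensity_closed(i0=1000.0, k=40.0, L=0.1)
```

The intensity escaping the far wall is simply I0·e^(-kL), so both quantities mentioned in the abstract come from the same one-line integral.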
Power convergence of Abel averages
Kozitsky, Yuri; Shoikhet, David; Zemanek, Jaroslav
2012-01-01
Necessary and sufficient conditions are presented for the Abel averages of discrete and strongly continuous semigroups, $T^k$ and $T_t$, to be power convergent in the operator norm in a complex Banach space. These results also cover the case where $T$ is unbounded and the corresponding Abel average is defined by means of the resolvent of $T$. They complement the classical results by Michael Lin establishing sufficient conditions for the corresponding convergence for a bounded $T$.
The goals of the High-Average-Power Laser Program at LLNL are to develop a broad technology base for solid state lasers and to demonstrate high-average-power laser operation with more efficiency and higher beam quality than has been possible with current technology. Major activities are the zig-zag laser testbed and the gas-cooled-slab laser test bed. This section describes these activities as well as discussion of material development; nonlinear optics; laser materials, and applications
Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei
2016-09-01
In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlations in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentration with uptrends or downtrends in China. In addition, shuffling and phase-randomization procedures are applied to detect the sources of multifractality. The results show the existence of asymmetric correlations, and these asymmetric correlations are multifractal. Further, the multifractal scaling behavior in the Chinese PM2.5 is caused not only by long-range correlations but also by the fat-tailed distribution, the latter being the major source of multifractality.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for an increase in vocal attractiveness by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception. PMID:20129047
Sparsity Averaging for Compressive Imaging
Carrillo, Rafael E; Van De Ville, Dimitri; Thiran, Jean-Philippe; Wiaux, Yves
2012-01-01
We propose a novel regularization method for sparse image reconstruction from compressive measurements. The approach relies on the conjecture that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We test our prior and the associated algorithm through extensive numerical simulations for spread spectrum and Gaussian acquisition schemes suggested by the recent theory of compressed sensing with coherent and redundant dictionaries. Our results show that average sparsity outperforms state-of-the-art priors that promote sparsity in a single orthonormal basis or redundant frame, or that promote gradient sparsity. We also illustrate the performance of SARA in the context of Fourier imaging, for particular applications in astronomy and medicine.
On generalized averaged Gaussian formulas
Spalevic, Miodrag M.
2007-09-01
We present a simple numerical method for constructing the optimal (generalized) averaged Gaussian quadrature formulas, which are the optimal stratified extensions of Gauss quadrature formulas. These extensions exist in many cases in which real positive Kronrod formulas do not exist. For the Jacobi weight functions w(x) ≡ w^(α,β)(x) = (1-x)^α (1+x)^β (α, β > -1) we give a necessary and sufficient condition on the parameters α and β such that the optimal averaged Gaussian quadrature formulas are internal.
Average neutron detection efficiency for DEMON detectors
The neutron detection efficiency of a DEMON detector, averaged over the whole volume, was calculated using GEANT and applied to determine neutron multiplicities in an intermediate-energy heavy-ion reaction. When a neutron source is set at a distance of about 1 m from the front surface of the detector, the average efficiency, ϵ_av, is found to be significantly lower (20-30%) than the efficiency measured at the center of the detector, ϵ_0. In the GEANT simulation the ratio R = ϵ_av/ϵ_0 was calculated as a function of neutron energy. The experimental central efficiency multiplied by R was then used to determine the average efficiency. The results were applied to a study of the 64Zn+112Sn reaction at 40 A MeV which employed 16 DEMON detectors. The neutron multiplicity was extracted using a moving-source fit. The derived multiplicities compare well with those determined using the neutron ball in the NIMROD detector array in a separate experiment. Both are in good agreement with multiplicities predicted by a transport model calculation using an antisymmetrized molecular dynamics (AMD) model code
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
From cellular doses to average lung dose
Sensitive basal and secretory cells receive a wide range of doses in human bronchial and bronchiolar airways. Variations of cellular doses arise from the location of target cells in the bronchial epithelium of a given airway and the asymmetry and variability of airway dimensions of the lung among airways in a given airway generation and among bronchial and bronchiolar airway generations. To derive a single value for the average lung dose which can be related to epidemiologically observed lung cancer risk, appropriate weighting scenarios have to be applied. Potential biological weighting parameters are the relative frequency of target cells, the number of progenitor cells, the contribution of dose enhancement at airway bifurcations, the promotional effect of cigarette smoking and, finally, the application of appropriate regional apportionment factors. Depending on the choice of weighting parameters, detriment-weighted average lung doses can vary by a factor of up to 4 for given radon progeny exposure conditions. (authors)
Statistics on exponential averaging of periodograms
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
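The mechanics being analysed, exponentially averaging successive periodograms into a running PSD estimate, can be sketched for Gaussian white noise, where the estimate should settle near the flat level σ². The normalization P_k = |X_k|²/N (so that E[P_k] = σ² per bin) and the weighting constant are choices made for this illustration only:

```python
import cmath
import math
import random

random.seed(7)

def periodogram(x):
    """Naive DFT periodogram P_k = |X_k|^2 / N; with this normalization,
    E[P_k] = sigma^2 in every bin for zero-mean white noise."""
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * j * k / n)
                    for j in range(n))) ** 2 / n
            for k in range(n)]

# Exponential averaging of successive periodograms: weight lam on the
# newest segment, (1 - lam) on the running estimate.
n, lam, sigma = 64, 0.05, 2.0
psd = [0.0] * n
for _ in range(200):
    seg = [random.gauss(0.0, sigma) for _ in range(n)]
    psd = [(1 - lam) * s + lam * p for s, p in zip(psd, periodogram(seg))]

flat_level = sum(psd) / n  # should sit near sigma^2 = 4 for white noise
```

Each raw periodogram bin is χ²-distributed with 2 degrees of freedom (fully fluctuating); the exponential average trades tracking speed for the variance reduction whose exact PDF and moments the abstract derives.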
Stochastic Approximation with Averaging Innovation
Laruelle, Sophie
2010-01-01
The aim of the paper is to establish a convergence theorem for multi-dimensional stochastic approximation in a setting with innovations satisfying some averaging properties, and to study some applications. The averaging assumptions allow us to unify the framework where the innovations are generated (to solve problems from numerical probability) and the one with exogenous innovations (market data, output of a "device", e.g. an Euler scheme) with stationary or ergodic properties. We propose several fields of application with random innovations or quasi-random numbers. In particular, we provide in both settings a rule to tune the step of the algorithm. Finally, we illustrate our results on five examples, notably in finance.
High average power supercontinuum sources
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous-wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with average spectral powers of over 50 and 1 mW/nm, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which considers the particles as hard cores with attractive forces, is proposed to explain the correlation in this study. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical prediction and the experimental data is obtained.
Michel Parameters averages and interpretation
The new measurements of Michel parameters in τ decays are combined into world averages. From these measurements, model-independent limits on non-standard-model couplings are derived, and interpretations in the framework of specific models are given. A lower limit of 2.5 tan β GeV on the mass of a charged Higgs boson in models with two Higgs doublets can be set, as well as a 229 GeV limit on a right-handed W boson in left-right symmetric models (95% C.L.)
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
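Where the paper computes bootstrap averages analytically via the replica method, the brute-force alternative is Monte Carlo resampling. The sketch below bootstraps the simplest possible statistic, a sample mean; a generalization-error estimator would be handled identically, just with a costlier statistic evaluated per resample:

```python
import random
import statistics

random.seed(0)

# A data set drawn once (illustrative: Gaussian with mean 10, sd 3).
data = [random.gauss(10.0, 3.0) for _ in range(400)]

def bootstrap_average(data, statistic, n_boot=2000, rng=random):
    """Monte Carlo bootstrap: resample the data with replacement, apply the
    statistic to each resample, and return the average and spread of the
    resulting values."""
    n = len(data)
    values = []
    for _ in range(n_boot):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        values.append(statistic(resample))
    return statistics.mean(values), statistics.stdev(values)

# For the sample mean, the bootstrap average reproduces the sample mean and
# the bootstrap spread approximates the standard error s / sqrt(n).
boot_mean, boot_sd = bootstrap_average(data, statistics.mean)
```

The analytical replica route of the paper avoids exactly this resampling loop, which is what makes it attractive when each statistic evaluation (e.g. refitting a Gaussian process) is expensive.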
Basics of averaging of the Maxwell equations for bulk materials
Chipouline, A; Tretyakov, S
2012-01-01
Volume or statistical averaging of the microscopic Maxwell equations (MEs), i.e. transition from microscopic MEs to their macroscopic counterparts, is one of the main steps in electrodynamics of materials. In spite of the fundamental importance of the averaging procedure, it is quite rarely properly discussed in university courses and respective books; up to now there is no established consensus about how the averaging procedure has to be performed. In this paper we show that there are some basic principles for the averaging procedure (irrespective to what type of material is studied) which have to be satisfied. Any homogenization model has to be consistent with the basic principles. In case of absence of this correlation of a particular model with the basic principles the model could not be accepted as a credible one. Another goal of this paper is to establish the averaging procedure for bulk MM, which is rather close to the case of compound materials but should include magnetic response of the inclusions an...
Spatial averaging infiltration model for layered soil
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
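The Generalized Green-Ampt model of Jia et al. is not specified in this abstract, but the spatial-averaging idea can be illustrated with the classical Green-Ampt infiltration capacity, f = Ks·(1 + ψΔθ/F), Monte Carlo averaged over a lognormal conductivity field along the horizontal direction. The function names and parameter values below are illustrative assumptions, not the SAI model itself:

```python
import math
import random

def green_ampt_rate(Ks, psi, dtheta, F):
    """Classical Green-Ampt infiltration capacity f = Ks*(1 + psi*dtheta/F).
    Ks: saturated conductivity, psi: wetting-front suction head,
    dtheta: moisture deficit, F: cumulative infiltration (consistent units)."""
    return Ks * (1.0 + psi * dtheta / F)

def spatially_averaged_rate(mean_lnKs, sigma_lnKs, psi, dtheta, F,
                            n=10000, seed=1):
    """Monte Carlo spatial average of the infiltration rate over a lognormal
    Ks field, mimicking horizontal averaging by a probability distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        Ks = math.exp(rng.gauss(mean_lnKs, sigma_lnKs))
        total += green_ampt_rate(Ks, psi, dtheta, F)
    return total / n
```

The sketch only averages the instantaneous rate at a fixed cumulative infiltration F; a full model would also evolve F in time and handle layering in the vertical direction.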
On Backus average for generally anisotropic layers
Bos, Len; Slawinski, Michael A; Stanoev, Theodore
2016-01-01
In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In over half-a-century since the publications of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
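The area- and mass-weighted definitions the study compares are standard; a minimal sketch with hypothetical rake readings follows (work averaging, which needs additional thermodynamic bookkeeping, is omitted):

```python
def area_average(values, areas):
    """Area-weighted average: sum(v_i * A_i) / sum(A_i)."""
    return sum(v * a for v, a in zip(values, areas)) / sum(areas)

def mass_average(values, mass_flows):
    """Mass-weighted average: sum(v_i * mdot_i) / sum(mdot_i)."""
    return sum(v * m for v, m in zip(values, mass_flows)) / sum(mass_flows)

# hypothetical total-pressure-ratio readings at three rake positions
pr = [2.0, 2.1, 2.2]
areas = [1.0, 1.0, 1.0]   # equal annulus areas
mdots = [1.0, 2.0, 3.0]   # mass flow associated with each area
```

With equal areas the area average is the arithmetic mean; the mass average weights the high-flow readings more heavily, which is why the two can differ when the flow is non-uniform.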
Averaging in the presence of sliding errors
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
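A minimal numerical sketch of the bias and its cure (an illustrative error model, not the paper's procedure): with relative errors σ_i = 0.1·x_i assigned from each measured value, the naive weighted mean of 9 and 11 is pulled below the true value of 10, while re-deriving the errors from the current average removes the bias.

```python
def weighted_mean(xs, sigmas):
    """Standard inverse-variance weighted mean."""
    ws = [1.0 / s**2 for s in sigmas]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def sliding_error_mean(xs, sigma_of, iters=50):
    """Re-derive each experiment's error from the current average using the
    assumed error model sigma_of(value), then re-average, iterating to a
    fixed point."""
    mu = weighted_mean(xs, [sigma_of(x) for x in xs])  # naive start
    for _ in range(iters):
        mu = weighted_mean(xs, [sigma_of(mu)] * len(xs))
    return mu

xs = [9.0, 11.0]
err = lambda v: 0.1 * v  # hypothetical 10% relative error model
naive = weighted_mean(xs, [err(x) for x in xs])       # biased low
improved = sliding_error_mean(xs, err)                # unbiased here
```

Because the error model here depends only on the common central value, the iteration gives all measurements equal weight and recovers the arithmetic mean exactly.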
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract the specific harmonics that may be caused by some faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to a varying extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
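For reference, the conventional synchronous averaging that FTDA generalizes can be sketched as follows, assuming (unlike the PCE case the paper addresses) that the period is an exact integer number of samples; signal parameters are illustrative:

```python
import math
import random

def time_domain_average(signal, period):
    """Conventional TDA: slice the record into whole periods and average
    them sample-by-sample. Period cutting error arises precisely when the
    true period is NOT an integer number of samples, as assumed here."""
    n = len(signal) // period
    return [sum(signal[k * period + i] for k in range(n)) / n
            for i in range(period)]

# synthetic periodic signal buried in noise
rng = random.Random(0)
period, cycles = 64, 200
clean = [math.sin(2 * math.pi * i / period) for i in range(period)]
noisy = [clean[i % period] + rng.gauss(0, 0.5)
         for i in range(period * cycles)]
avg = time_domain_average(noisy, period)
```

Averaging 200 cycles suppresses the noise by roughly a factor of √200, so the recovered waveform is close to the clean sinusoid.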
Flux-Averaged and Volume-Averaged Concentrations in Continuum Approaches to Solute Transport
Parker, J. C.; van Genuchten, M. Th.
1984-07-01
Transformations between volume-averaged pore fluid concentrations and flux-averaged concentrations are presented which show that both modes of concentration obey convective-dispersive transport equations of identical mathematical form for nonreactive solutes. The pertinent boundary conditions for the two modes, however, do not transform identically. Solutions of the convection-dispersion equation for a semi-infinite system during steady flow subject to a first-type inlet boundary condition are shown to yield flux-averaged concentrations, while solutions subject to a third-type boundary condition yield volume-averaged concentrations. These solutions may be applied with reasonable impunity to finite as well as semi-infinite media if back mixing at the exit is precluded. Implications of the distinction between resident and flux concentrations for laboratory and field studies of solute transport are discussed. It is suggested that perceived limitations of the convection-dispersion model for media with large variations in pore water velocities may in certain cases be attributable to a failure to distinguish between volume-averaged and flux-averaged concentrations.
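The transformation at the heart of this abstract can be written compactly; the notation here is assumed (c_r resident/volume-averaged concentration, c_f flux-averaged concentration, v pore-water velocity, D dispersion coefficient). For one-dimensional steady flow the standard relation is

```latex
c_f \;=\; c_r \;-\; \frac{D}{v}\,\frac{\partial c_r}{\partial x},
\qquad
\frac{\partial c}{\partial t} \;=\; D\,\frac{\partial^2 c}{\partial x^2}
\;-\; v\,\frac{\partial c}{\partial x},
```

where the second (convection-dispersion) equation is obeyed by both modes of concentration, as the abstract states; the inlet boundary conditions, not the governing equation, distinguish them.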
Averaging along Uniform Random Integers
Janvresse, Élise
2011-01-01
Motivated by giving a meaning to "The probability that a random integer has initial digit d", we define a URI-set as a random set E of natural integers such that each n>0 belongs to E with probability 1/n, independently of other integers. This enables us to introduce two notions of densities on natural numbers: The URI-density, obtained by averaging along the elements of E, and the local URI-density, which we get by considering the k-th element of E and letting k go to infinity. We prove that the elements of E satisfy Benford's law, both in the sense of URI-density and in the sense of local URI-density. Moreover, if b_1 and b_2 are two multiplicatively independent integers, then the mantissae of a natural number in base b_1 and in base b_2 are independent. Connections of URI-density and local URI-density with other well-known notions of densities are established: Both are stronger than the natural density, and URI-density is equivalent to log-density. We also give a stochastic interpretation, in terms of URI-...
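A small sketch of the two ingredients of the abstract: the Benford target distribution and a finite sample of a URI-set restricted to {1..N} (the restriction to a finite range is an illustrative simplification):

```python
import math
import random

def benford_prob(d):
    """Benford's law: probability log10(1 + 1/d) of leading digit d."""
    return math.log10(1 + 1 / d)

def sample_uri_set(N, seed=0):
    """Draw a URI-set restricted to {1..N}: each n is included
    independently with probability 1/n (so n=1 is always included)."""
    rng = random.Random(seed)
    return [n for n in range(1, N + 1) if rng.random() < 1 / n]

probs = [benford_prob(d) for d in range(1, 10)]
E = sample_uri_set(10000)
```

The nine Benford probabilities sum to 1 since the product of (d+1)/d over d = 1..9 telescopes to 10; the paper's result is that averaging leading digits along E reproduces exactly these probabilities.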
Bootstrapping pre-averaged realized volatility under market microstructure noise
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure that...
Nicita, Alessandro
2008-01-01
Price responses are usually estimated for the average household. However, different households are unlikely to respond in a similar way to movement in prices. Consequently, relying on averages may be misleading when examining the behaviour of a particular group of households such as the poor. This article uses six household surveys collected in Mexico between 1989 and 2000 to derive price responses for 10 product groups and for five levels of income households. The estimated price elasticitie...
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
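One classical special case consistent with this upscaling discussion (a standard textbook result, not a derivation from the paper itself): for perfectly layered media, Darcy's law upscales to the arithmetic mean conductivity for flow parallel to the layers and the harmonic mean for flow across them.

```python
def arithmetic_mean_K(Ks, thicknesses):
    """Effective conductivity for flow parallel to the layers."""
    total = sum(thicknesses)
    return sum(K * t for K, t in zip(Ks, thicknesses)) / total

def harmonic_mean_K(Ks, thicknesses):
    """Effective conductivity for flow perpendicular to the layers
    (the layers act like resistors in series)."""
    total = sum(thicknesses)
    return total / sum(t / K for K, t in zip(Ks, thicknesses))
```

The harmonic mean never exceeds the arithmetic mean, so heterogeneous layering always impedes cross-layer flow more than along-layer flow.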
Computation of the average energy for LXY electrons
The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires an accurate averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs
Time Series ARIMA Models of Undergraduate Grade Point Average.
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
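As a minimal sketch of the estimation stage only (not the full identification-estimation-diagnosis procedure, and with synthetic data rather than GPA records), a conditional least-squares fit of the simplest member of the family, AR(1):

```python
import random

def fit_ar1(series):
    """Conditional least-squares estimate of c and phi in the AR(1) model
    x_t = c + phi * x_{t-1} + e_t, i.e. ARIMA(1,0,0): regress x_t on x_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var
    return my - phi * mx, phi

# synthetic stationary series with known c = 0.6, phi = 0.8 (mean 3.0)
rng = random.Random(42)
x, xs = 3.0, []
for _ in range(2000):
    x = 0.6 + 0.8 * x + rng.gauss(0, 0.05)
    xs.append(x)
c_hat, phi_hat = fit_ar1(xs)
```

With 2000 observations the standard error of phi is roughly sqrt((1 - phi²)/n) ≈ 0.013, so the estimates land close to the generating values.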
Thermodynamic properties of average-atom interatomic potentials for alloys
Nöhring, Wolfram Georg; Curtin, William Arthur
2016-05-01
The atomistic mechanisms of deformation in multicomponent random alloys are challenging to model because of their extensive structural and compositional disorder. For embedded-atom-method interatomic potentials, a formal averaging procedure can generate an average-atom EAM potential and this average-atom potential has recently been shown to accurately predict many zero-temperature properties of the true random alloy. Here, the finite-temperature thermodynamic properties of the average-atom potential are investigated to determine if the average-atom potential can represent the true random alloy Helmholtz free energy as well as important finite-temperature properties. Using a thermodynamic integration approach, the average-atom system is found to have an entropy difference of at most 0.05 kB/atom relative to the true random alloy over a wide temperature range, as demonstrated on FeNiCr and Ni85Al15 model alloys. Lattice constants, and thus thermal expansion, and elastic constants are also well-predicted (within a few percent) by the average-atom potential over a wide temperature range. The largest differences between the average atom and true random alloy are found in the zero temperature properties, which reflect the role of local structural disorder in the true random alloy. Thus, the average-atom potential is a valuable strategy for modeling alloys at finite temperatures.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
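A toy sketch of the underlying idea (a plain Robbins-Monro recursion with a running average of the iterates, in the Polyak-Ruppert spirit; this is not the SAMCMC algorithm itself, and the step-size exponent and toy problem are illustrative):

```python
import random

def sa_with_trajectory_average(grad_sample, theta0, steps, seed=0):
    """Robbins-Monro iteration theta_{k+1} = theta_k - a_k * grad_sample(theta_k)
    with a slowly decaying gain, plus the running (trajectory) average of the
    iterates, which is the estimator whose efficiency the paper studies."""
    rng = random.Random(seed)
    theta, avg = theta0, 0.0
    for k in range(1, steps + 1):
        a_k = 1.0 / k ** 0.7          # decay exponent in (1/2, 1]
        theta -= a_k * grad_sample(theta, rng)
        avg += (theta - avg) / k      # running mean of theta_1..theta_k
    return theta, avg

# toy root-finding problem: solve E[theta - 5 + noise] = 0, so theta* = 5
noisy_grad = lambda theta, rng: theta - 5.0 + rng.gauss(0, 1.0)
last, averaged = sa_with_trajectory_average(noisy_grad, theta0=0.0, steps=20000)
```

The last iterate still jitters with the slowly decaying gain, while the trajectory average smooths that jitter out, which is the practical payoff of the averaging estimator.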
Ultra-low noise miniaturized neural amplifier with hardware averaging
Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.
2015-08-01
Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface, but the recorded signal amplitude is small (<3 μVrms, 700 Hz-7 kHz), thereby requiring a baseline noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less, depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise, with at least two parallel operational transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating <1.5 μVrms total recording baseline noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that minimizes both the power consumption and the noise, in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the
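The 1/√N scaling can be checked with a simulation of the ideal case (independent amplifier noise, negligible source resistance; the noise figure of 2 μVrms is taken from the abstract, everything else is an illustrative assumption):

```python
import math
import random

def averaged_noise_rms(n_amps, n_samples=20000, sigma=2.0, seed=3):
    """Simulate N parallel amplifiers, each contributing independent
    Gaussian input-referred noise of sigma uVrms, and average their
    outputs sample-by-sample; return the RMS of the averaged noise."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        avg = sum(rng.gauss(0, sigma) for _ in range(n_amps)) / n_amps
        acc += avg * avg
    return math.sqrt(acc / n_samples)
```

In the real device the reduction is "1/√N or less" because the source resistance contributes common noise that averaging cannot remove; this sketch models only the uncorrelated part.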
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-01-01
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure. PMID:23658214
78 FR 10262 - Railroad Cost Recovery Procedures-Productivity Adjustment
2013-02-13
... Surface Transportation Board Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY... of average change in railroad productivity for the 2007-2011 (5-year) averaging period....
These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
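The Hargreaves step and the water balance can be sketched per grid cell; the coefficients below follow the commonly cited 1985 Hargreaves-Samani form, and the input values are illustrative (the paper's exact variant and units are not given in the abstract):

```python
import math

def hargreaves_et0(tmax, tmin, ra):
    """Hargreaves reference evapotranspiration estimate (mm/day).
    tmax/tmin in deg C; ra = extraterrestrial radiation expressed as
    mm/day of equivalent evaporation."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)

def monthly_water_balance(precip_mm, et0_mm_day, days):
    """Climatic water balance for one cell and one month:
    precipitation minus atmospheric evaporative demand."""
    return precip_mm - et0_mm_day * days
```

For example, a warm month (Tmax 25 °C, Tmin 10 °C, Ra ≈ 12 mm/day) yields an ET0 near 3.8 mm/day, so 100 mm of monthly precipitation gives a negative water balance.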
Averaging Methods for Design of Spacecraft Hysteresis Damper
Ricardo Gama
2013-01-01
This work deals with averaging methods for the dynamics of attitude stabilization systems. The operation of passive gravity-gradient attitude stabilization systems involving hysteresis rods is described by discontinuous differential equations. We apply recently developed averaging techniques for discontinuous systems in order to simplify the analysis and to perform parameter optimization. The results obtained using this analytic method are compared with those of numerical optimization.
Newhouse, Vernon L
1975-01-01
Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec
The Whipple Procedure: also called a pancreaticoduodenectomy, which is generally ... The operation was first described by Dr. Alan O. Whipple of New York Memorial Hospital (now called Memorial ...
A space-averaged model of branched structures
Lopez, Diego; Michelin, Sébastien
2014-01-01
Many biological systems and artificial structures are ramified, and present a high geometric complexity. In this work, we propose a space-averaged model of branched systems for conservation laws. From a one-dimensional description of the system, we show that the space-averaged problem is also one-dimensional, represented by characteristic curves, defined as streamlines of the space-averaged branch directions. The geometric complexity is then captured firstly by the characteristic curves, and secondly by an additional forcing term in the equations. This model is then applied to mass balance in a pipe network and momentum balance in a tree under wind loading.
Lucas, Spencer G.
Stratigraphy is a cornerstone of the Earth sciences. The study of layered rocks, especially their age determination and correlation, which are integral parts of stratigraphy, are key to fields as diverse as geoarchaeology and tectonics. In the Anglophile history of geology, in the early 1800s, the untutored English surveyor William Smith was the first practical stratigrapher, constructing a geological map of England based on his own applied stratigraphy. Smith has, thus, been seen as the first “industrial stratigrapher,” and practical applications of stratigraphy have since been essential to most of the extractive industries from mining to petroleum. Indeed, gasoline is in your automobile because of a tremendous use of applied stratigraphy in oil exploration, especially during the latter half of the twentieth century. Applied stratigraphy, thus, is a subject of broad interest to Earth scientists.
Logan, J David
2013-01-01
Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat
Park, W.C.; Hausen, D.M.; Hagni, R.D. (eds.)
1985-01-01
A conference on applied mineralogy was held and figures were presented under the following headings: methodology (including image analysis); ore genesis; exploration; beneficiations (including precious metals); process mineralogy - low and high temperatures; and medical science applications. Two papers have been abstracted separately.
Averages of Values of L-Series
Alkan, Emre; Ono, Ken
2013-01-01
We obtain an exact formula for the average of values of L-series over two independent odd characters. The average of any positive moment of values at s = 1 is then expressed in terms of finite cotangent sums subject to congruence conditions. As consequences, bounds on such cotangent sums, limit points for the average of first moment of L-series at s = 1 and the average size of positive moments of character sums related to the class number are deduced.
Dynamic speckle texture processing using averaged dimensions
Rabal, Héctor; Arizaga, Ricardo; Cap, Nelly; Trivi, Marcelo; Mavilio Nuñez, Adriana; Fernandez Limia, Margarita
2006-08-01
Dynamic speckle or biospeckle is a phenomenon generated by laser light scattering in biological tissues. It is also present in some industrial processes where the surfaces exhibit some kind of activity. There are several methods to characterize the activity of a dynamic speckle pattern. For quantitative measurements, the Inertia Moment of the co-occurrence matrix of the temporal history of the speckle pattern (THSP) is usually used. In this work we propose the use of average dimensions (AD) for quantitative classification of textures of THSP images corresponding to different stages of the sample. The AD method was tested in an experiment on the drying of paint, a non-biological phenomenon that we usually use as an initial test for dynamic speckle. We have chosen this phenomenon because its activity can be followed in a relatively simple way by gravimetric measurements and because its behaviour is rather predictable. Also, the AD was applied to numerically simulated THSP images and the performance was compared with another quantitative method. Experiments with biological samples are currently under development.
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
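The constrained-simulation route mentioned at the end of the abstract can be sketched numerically: estimate the average force at discrete values of the coordinate, then integrate minus that force to obtain the free-energy profile. The toy 1D potential and the Gaussian "thermal" sampling noise below are illustrative stand-ins for a real constrained simulation:

```python
import random

def U(x):
    """Toy double-well potential along the selected coordinate."""
    return x**4 - 2 * x**2

def force(x):
    """Exact force f = -dU/dx for the toy potential."""
    return -(4 * x**3 - 4 * x)

def mean_force(x, rng, n=4000, noise=0.5):
    """Stand-in for the average constrained force at fixed coordinate x:
    the true force plus zero-mean sampling noise, averaged over n draws."""
    return sum(force(x) + rng.gauss(0, noise) for _ in range(n)) / n

def free_energy_profile(xs, seed=7):
    """Trapezoid-rule integration of minus the mean force along the
    coordinate, giving the free-energy profile relative to xs[0]."""
    rng = random.Random(seed)
    f = [mean_force(x, rng) for x in xs]
    A = [0.0]
    for i in range(1, len(xs)):
        dA = -0.5 * (f[i - 1] + f[i]) * (xs[i] - xs[i - 1])
        A.append(A[-1] + dA)
    return A
```

In this 1D toy with no orthogonal degrees of freedom the recovered profile should match the potential difference itself, which makes the sketch easy to validate.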
Bayesian Model Averaging in the Instrumental Variable Regression Model
Gary Koop; Robert Leon Gonzalez; Rodney Strachan
2011-01-01
This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very fl...
Finding large average submatrices in high dimensional data
Shabalin, Andrey A.; Weigman, Victor J.; Perou, Charles M.; Nobel, Andrew B
2009-01-01
The search for sample-variable associations is an important problem in the exploratory analysis of high dimensional data. Biclustering methods search for sample-variable associations in the form of distinguished submatrices of the data matrix. (The rows and columns of a submatrix need not be contiguous.) In this paper we propose and evaluate a statistically motivated biclustering procedure (LAS) that finds large average submatrices within a given real-valued data matrix. ...
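The flavor of the search can be conveyed with a greedy alternating heuristic for a fixed-size submatrix (a sketch in the spirit of LAS, not the published algorithm, which also uses a significance score to choose the submatrix size):

```python
def greedy_large_average_submatrix(M, k, l, iters=20):
    """Find a k x l submatrix with large average: start from the l columns
    with the largest total sums, then alternately re-pick the k rows with
    the largest sums over the current columns and the l columns with the
    largest sums over the current rows."""
    n_rows, n_cols = len(M), len(M[0])
    col_totals = [(sum(M[r][c] for r in range(n_rows)), c) for c in range(n_cols)]
    cols = sorted(c for _, c in sorted(col_totals, reverse=True)[:l])
    rows = []
    for _ in range(iters):
        row_sums = [(sum(M[r][c] for c in cols), r) for r in range(n_rows)]
        rows = sorted(r for _, r in sorted(row_sums, reverse=True)[:k])
        col_sums = [(sum(M[r][c] for r in rows), c) for c in range(n_cols)]
        cols = sorted(c for _, c in sorted(col_sums, reverse=True)[:l])
    avg = sum(M[r][c] for r in rows for c in cols) / (k * l)
    return rows, cols, avg

# toy data: 20x20 low-amplitude noise with a planted 4x4 block of 1.0
import random
rng = random.Random(1)
M = [[0.01 * rng.gauss(0, 1) for _ in range(20)] for _ in range(20)]
for r in range(4):
    for c in range(4):
        M[r][c] += 1.0
rows, cols, avg = greedy_large_average_submatrix(M, 4, 4)
```

Note that the selected rows and columns need not be contiguous, matching the biclustering setting described in the abstract.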
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Determination of average activating thermal neutron flux in bulk samples
A previous method used for the determination of the average neutron flux within bulky samples has been applied for the measurements of hydrogen contents of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared
The Animals (Scientific Procedures) (Procedure for Representations) Rules 1986
Her Majesty's Stationary Office
1986-01-01
The Animals (Scientific Procedures) (Procedure for Representations) Rules 1986 set out, under section 12 of the 1986 Act, the procedure for making representations to a legally qualified person appointed by the Secretary of State, available to a person who has applied for or holds a personal or project licence or a certificate of designation of a scientific procedure, breeding or supplying establishment under that Act, where the Secretary of State proposes to refuse such a licence or certificate or to vary or revoke it otherwise than at the re...
The European Bank has pledged in its Agreement to place environmental management at the forefront of its operations to promote sustainable economic development in central and eastern Europe. The Bank's environmental policy is set out in the document titled, Environmental Management: The Bank's Policy Approach. This document, Environmental Procedures, presents the procedures which the European Bank has adopted to implement this policy approach with respect to its operations. The environmental procedures aim to: ensure that throughout the project approval process, those in positions of responsibility for approving projects are aware of the environmental implications of the project, and can take these into account when making decisions; avoid potential liabilities that could undermine the success of a project for its sponsors and the Bank; ensure that environmental costs are estimated along with other costs and liabilities; and identify opportunities for environmental enhancement associated with projects. The review of environmental aspects of projects is conducted by many Bank staff members throughout the project's life. This document defines the responsibilities of the people and groups involved in implementing the environmental procedures. Annexes contain Environmental Management: The Bank's Policy Approach, examples of environmental documentation for the project file and other ancillary information
Schiehlen, Werner
2014-01-01
Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.
The 1988 progress report of the Applied Optics Laboratory (Polytechnic School, France) is presented. The optical-fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The research program includes the following domains: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization, and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed
A Favré averaged transition prediction model for hypersonic flows
LEE; ChunHian
2010-01-01
Transition prediction is crucial for the aerothermodynamic and thermal protection system design of hypersonic vehicles. The compressible form of the laminar kinetic energy equation is derived based on the Favré-average formalism in the present paper. A closure of the equation is deduced and simplified under certain hypotheses and scaling analysis. A laminar-to-turbulent transition prediction procedure is proposed for high-Mach-number flows based on the modeled Favré-averaged laminar kinetic energy equation, in conjunction with the Favré-averaged Navier-Stokes equations. The proposed model, with and without associated explicit compressibility terms, is then applied to simulate flows over flared cones with a free-stream Mach number of 5.91, and the onset locations of the boundary-layer transition under different wall conditions are estimated. The computed onset locations are compared with those obtained by the model based on a compressibility correction deduced from the reference-temperature concept, together with experimental data. It is revealed that the present model gives a more favorable transition prediction for hypersonic flows.
Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher
2009-01-01
define procedures as mechanisms that influence the probabilities of reaching different endnodes. We show that for such procedural games a sequential psychological equilibrium always exists. Applying this approach within a principal-agent context we show that the way less attractive jobs are allocated is ... crucial for the effort exerted by agents. This prediction is tested in a field experiment, where some subjects had to type in data, whereas others had to verify the data inserted by the typists. The controllers' wage was 50% higher than that of the typists. In one treatment the less attractive typists...
Behavioral implications of shortlisting procedures
Tyson, Christopher J.
2012-01-01
We consider two-stage "shortlisting procedures" in which the menu of alternatives is first pruned by some process or criterion and then a binary relation is maximized. Given a particular first-stage process, our main result supplies a necessary and sufficient condition for choice data to be consistent with a procedure in the designated class. This result applies to any class of procedures with a certain lattice structure, including the cases of "consideration filters," "satisficing with salie...
Inversion of the circular averages transform using the Funk transform
The integral of a function defined on the half-plane along the semi-circles centered on the boundary of the half-plane is known as the circular averages transform. Circular averages transform arises in many tomographic image reconstruction problems. In particular, in synthetic aperture radar (SAR) when the transmitting and receiving antennas are colocated, the received signal is modeled as the integral of the ground reflectivity function of the illuminated scene over the intersection of spheres centered at the antenna location and the surface topography. When the surface topography is flat the received signal becomes the circular averages transform of the ground reflectivity function. Thus, SAR image formation requires inversion of the circular averages transform. Apart from SAR, circular averages transform also arises in thermo-acoustic tomography and sonar inverse problems. In this paper, we present a new inversion method for the circular averages transform using the Funk transform. For a function defined on the unit sphere, its Funk transform is given by the integrals of the function along the great circles. We used hyperbolic geometry to establish a diffeomorphism between the circular averages transform, hyperbolic x-ray and Funk transforms. The method is exact and numerically efficient when fast Fourier transforms over the sphere are used. We present numerical simulations to demonstrate the performance of the inversion method. Dedicated to Dennis Healy, a friend of Applied Mathematics and Engineering
Developing Competency in Payroll Procedures
Jackson, Allen L.
1975-01-01
The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)
Modern counting instrumentation has largely obviated the need for separation processes in radiochemical analysis, but problems in low-level radioactivity measurement, environmental-type analyses, and special situations have in recent years caused a renaissance of the need for separation techniques. Most of the radiochemical procedures, based on the classic work of the Manhattan Project chemists of the 1940s, were published in the National Nuclear Energy Series (NNES). Improvements such as new solvent-extraction and ion-exchange separations have been added to these methods throughout the years. Recently the Los Alamos group has reissued its collected Radiochemical Procedures, containing a short summary and review of basic inorganic chemistry - 'Chemistry of the Elements on the Basis of Electronic Configuration'. (A.L.)
Average-cost based robust structural control
Hagood, Nesbitt W.
1993-01-01
A method is presented for the synthesis of robust controllers for linear time invariant structural systems with parameterized uncertainty. The method involves minimizing quantities related to the quadratic cost (H2-norm) averaged over a set of systems described by real parameters such as natural frequencies and modal residues. Bounded average cost is shown to imply stability over the set of systems. Approximations for the exact average are derived and proposed as cost functionals. The properties of these approximate average cost functionals are established. The exact average and approximate average cost functionals are used to derive dynamic controllers which can provide stability robustness. The robustness properties of these controllers are demonstrated in illustrative numerical examples and tested in a simple SISO experiment on the MIT multi-point alignment testbed.
Coherent ensemble averaging techniques for impedance cardiography
Hurwitz, Barry E.; Shyu, Liang-Yu; Reddy, Sridhar P; Schneiderman, Neil; Nagel, Joachim H.
1990-01-01
EKG synchronized ensemble averaging of the impedance cardiogram tends to blur or suppress signal events due to signal jitter or event latency variability. Although ensemble averaging provides some improvement in the stability of the signal and signal to noise ratio under conditions of nonperiodic influences of respiration and motion, coherent averaging techniques were developed to determine whether further enhancement of the impedance cardiogram could be obtained. Physiological signals were o...
MEASUREMENT AND MODELLING AVERAGE PHOTOSYNTHESIS OF MAIZE
ZS LŐKE
2005-01-01
The photosynthesis of fully developed maize was investigated in the Agrometeorological Research Station Keszthely, in 2000. We used LI-6400 type measurement equipment to locate measurement points where the intensity of photosynthesis mostly nears the average. So later we could obtain average photosynthetic activities featuring the crop, with only one measurement. To check average photosynthesis of maize we used Goudriaan’s simulation model (CMSM) as well to calculate values on cloudless sampl...
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc
The 1988 progress report of the Applied Mathematics Center (Polytechnic School, France) is presented. The research fields of the Center are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of the fundamental models of physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed
Applying industrial engineering practices to radiology.
Rosen, Len
2004-01-01
Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging department and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first implemented industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than 500,000 dollars of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than 140,000 dollars. The medical imaging department in this hospital is only now beginning to apply what it has learned to
A note on generalized averaged Gaussian formulas
Spalevic, Miodrag
2007-11-01
We have recently proposed a very simple numerical method for constructing the averaged Gaussian quadrature formulas. These formulas exist in many more cases than the real positive Gauss-Kronrod formulas. In this note we try to answer whether the averaged Gaussian formulas are an adequate alternative to the corresponding Gauss-Kronrod quadrature formulas, to estimate the remainder term of a Gaussian rule.
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Labour Turnover Costs and Average Labour Demand
Bertola, Giuseppe
1991-01-01
The effect of labour turnover costs on average employment in a partial equilibrium model of labour demand, depends on the form of the revenue function, on the rates of discount and labour attrition, and on the relative size of hiring and firing costs. If discount and attrition rates are strictly positive, firing costs may well increase average employment even when hiring costs reduce it.
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76...) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General provisions. In lieu of complying with the applicable emission limitation in § 76.5, 76.6, or 76.7,...
The Hubble rate in averaged cosmology
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaître-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate and the spatial surface on which the average is performed. We quantitatively study different definitions of the averaged Hubble rate encountered in the literature by consistently calculating the backreaction effect at second order in perturbation theory, and compare the results. We employ for the first time a recently developed gauge-invariant definition of an averaged scalar. We also discuss the variance of the Hubble rate for the different definitions
Average luminosity distance in inhomogeneous universes
Kostov, Valentin
2010-01-01
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic), thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A form...
The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction
Short-Term Auditory Memory of Above-Average and Below-Average Grade Three Readers.
Caruk, Joan Marie
To determine if performance on short term auditory memory tasks is influenced by reading ability or sex differences, 62 third grade reading students (16 above average boys, 16 above average girls, 16 below average boys, and 14 below average girls) were administered four memory tests--memory for consonant names, memory for words, memory for…
Time averaging of instantaneous quantities in HYDRA
McCallen, R.C.
1996-09-01
For turbulent flow the evaluation of direct numerical simulations (DNS), where all scales are resolved, and large-eddy simulations (LES), where only large scales are resolved, is difficult because the results are three-dimensional and transient. To simplify the analysis, the instantaneous flow field can be averaged in time for evaluation and comparison to experimental results. The incompressible Navier-Stokes flow code HYDRA has been modified for calculation of time-averaged quantities for both DNS and LES. This report describes how time averages of instantaneous quantities are generated during program execution (i.e., while generating the instantaneous quantities, instead of as a postprocessing operation). The calculations are performed during program execution to avoid storing values at each time step and thus to reduce storage requirements. The methods used in calculating the time-average velocities, the turbulent intensities ⟨u″²⟩, ⟨v″²⟩, and ⟨w″²⟩, and the turbulent shear ⟨u″v″⟩ are outlined. The brackets ⟨⟩ used here represent a time average. The described averaging methods were implemented in the HYDRA code for three-dimensional problem solutions. Also presented is a method for taking the time averages for a number of consecutive intervals and calculating the time average for the sum of the intervals. This method could be used for code restarts or further postprocessing of the time averages from consecutive intervals. This method was not used in the HYDRA implementation, but is included here for completeness. In HYDRA, the running sums needed for time averaging are simply written to the restart dump.
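The two operations described in the abstract, accumulating a time average during execution without storing every time step, and combining the averages of consecutive intervals, can be sketched in a few lines of Python (a generic illustration, not code from HYDRA; the function names are invented for this sketch):

```python
def update_running_average(avg, sample, n):
    """Fold the n-th sample (1-indexed) into a running time average without
    storing earlier samples: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n."""
    return avg + (sample - avg) / n

def combine_interval_averages(averages, lengths):
    """Combine time averages taken over consecutive intervals into the
    average over their union, weighting each by its interval length."""
    return sum(a * w for a, w in zip(averages, lengths)) / sum(lengths)

# Running average accumulated during "program execution":
avg = 0.0
for n, x in enumerate([1.0, 2.0, 3.0, 4.0, 5.0], start=1):
    avg = update_running_average(avg, x, n)
# avg -> 3.0

# Combining averages from two consecutive intervals (e.g. across a restart):
# 2.0 over 4 steps and 5.0 over 2 steps give (4*2 + 2*5)/6 = 3.0 overall.
combined = combine_interval_averages([2.0, 5.0], [4, 2])
```

Writing only the running sums (here, `avg` and the count `n`) to the restart dump is what keeps the storage cost independent of the number of time steps.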
Clarifying the Relationship between Average Excesses and Average Effects of Allele Substitutions.
Alvarez-Castro, José M; Yang, Rong-Cai
2012-01-01
Fisher's concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance. PMID:22509178
Clarifying the relationship between average excesses and average effects of allele substitutions
José M. Álvarez-Castro
2012-03-01
Fisher’s concepts of average effects and average excesses are at the core of the quantitative genetics theory. Their meaning and relationship have regularly been discussed and clarified. Here we develop a generalized set of one-locus two-allele orthogonal contrasts for average excesses and average effects, based on the concept of the effective gene content of alleles. Our developments help understand the average excesses of alleles for the biallelic case. We dissect how average excesses relate to the average effects and to the decomposition of the genetic variance.
From moving averages to anomalous diffusion: a Rényi-entropy approach
Moving averages, also termed convolution filters, are widely applied in science and engineering at large. As moving averages transform inputs to outputs by convolution, they induce correlation. In effect, moving averages are perhaps the most fundamental and ubiquitous mechanism of transforming uncorrelated inputs to correlated outputs. In this paper we study the correlation structure of general moving averages, unveil the Rényi-entropy meaning of a moving-average's overall correlation, address the maximization of this overall correlation, and apply this overall correlation to the dispersion-measurement and to the classification of regular and anomalous diffusion transport processes. (fast track communication)
Self-averaging characteristics of spectral fluctuations
Braun, Petr; Haake, Fritz
2014-01-01
The spectral form factor as well as the two-point correlator of the density of (quasi-)energy levels of individual quantum dynamics are not self-averaging. Only suitable smoothing turns them into useful characteristics of spectra. We present numerical data for a fully chaotic kicked top, employing two types of smoothing: one involves primitives of the spectral correlator, the second a small imaginary part of the quasi-energy. Self-averaging universal (like the CUE average) behavior is found f...
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
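The statistical comparison at the heart of this abstract, harmonic versus arithmetic mean of variances, is easy to state numerically (a toy illustration of the statistics only, not of the optical protocol; the variance values are invented):

```python
def arithmetic_mean(values):
    """Ordinary average of a list of values."""
    return sum(values) / len(values)

def harmonic_mean(values):
    """Harmonic mean: the reciprocal of the average reciprocal."""
    return len(values) / sum(1.0 / v for v in values)

# Quadrature variances of two fluctuating squeezed sources (arbitrary units):
variances = [0.5, 2.0]
h = harmonic_mean(variances)    # 2 / (2.0 + 0.5) = 0.8
a = arithmetic_mean(variances)  # 1.25
# By the AM-HM inequality the harmonic mean never exceeds the arithmetic
# mean, which is why harmonically averaged variances give lower averaged noise.
```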
Average contraction and synchronization of complex switched networks
This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
Despeckling vs averaging of retinal UHROCT tomograms: advantages and limitations
Eichel, Justin A.; Lee, Donghyun D.; Wong, Alexander; Fieguth, Paul W.; Clausi, David A.; Bizheva, Kostadinka K.
2011-03-01
Imaging time can be reduced using despeckled tomograms, which have similar image metrics to those obtained by averaging several low speed tomograms or many high speed tomograms. Quantitative analysis was used to compare the performance of two speckle denoising approaches, algorithmic despeckling and frame averaging, as applied to retinal OCT images. Human retinal tomograms were acquired from healthy subjects with a research grade 1060nm spectral domain UHROCT system with 5μm axial resolution in the retina. Single cross-sectional retinal tomograms were processed with a novel speckle denoising algorithm and compared with frame averaged retinal images acquired at the same location. Image quality metrics such as the image SNR and contrast-to-noise ratio (CNR) were evaluated for both cases.
Vibration monitor for rotating machines using average frequency technique
A vibration monitoring technique has been developed which can be applied to continuous monitoring and to patrol checking of many kinds of rotating machines in nuclear power plants. In this method, the vibration conditions of such equipment are represented by two parameters, i.e. a vibration amplitude (RMS value) and an average frequency. The average frequency is defined as the square root of the second moment of the vibration frequency weighted by the power spectrum. The average frequency can be calculated by simple analogue circuits and does not need spectrum analysis. Using these two parameters, not only the occurrence of abnormal vibration but also the type of vibration can be detected. (author)
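The average-frequency definition above can be sketched numerically (a rough illustration assuming a uniformly sampled one-sided power spectrum, not the analogue-circuit implementation; all names and values here are invented):

```python
import math

def average_frequency(freqs, psd):
    """Square root of the second moment of frequency weighted by the power
    spectrum, normalized by total power (uniform frequency grid assumed,
    so the grid spacing cancels in the ratio)."""
    total = sum(psd)
    second_moment = sum(f * f * s for f, s in zip(freqs, psd))
    return math.sqrt(second_moment / total)

def rms_amplitude(psd, df):
    """RMS vibration amplitude: square root of total power in the spectrum."""
    return math.sqrt(sum(psd) * df)

# A narrow spectral peak at 50 Hz yields an average frequency near 50 Hz:
freqs = [i * 0.1 for i in range(1001)]  # 0 .. 100 Hz in 0.1 Hz steps
psd = [math.exp(-0.5 * ((f - 50.0) / 0.5) ** 2) for f in freqs]
favg = average_frequency(freqs, psd)    # close to 50.0
```

A shift of the peak, or the appearance of harmonics, moves `favg` even when the RMS amplitude stays constant, which is how the two parameters together distinguish types of vibration.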
40 CFR 600.510-86 - Calculation of average fuel economy.
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...
40 CFR 86.1866-12 - CO2 fleet average credit programs.
2010-07-01
... the test procedure provisions specified in 40 CFR part 600 subpart B and using the calculation... 40 Protection of Environment 19 2010-07-01 2010-07-01 false CO2 fleet average credit programs. 86...-Duty Trucks, and Complete Otto-Cycle Heavy-Duty Vehicles § 86.1866-12 CO2 fleet average credit...
Averaging analysis for discrete time and sampled data adaptive systems
Fu, Li-Chen; Bai, Er-Wei; Sastry, Shankar S.
1986-01-01
Earlier continuous-time averaging theorems are extended to the nonlinear discrete-time case. Theorems are used for the convergence analysis of discrete-time adaptive identification and control systems. Instability theorems are also derived and used for the study of robust stability and instability of adaptive control schemes applied to sampled-data systems. As a by-product, the effects of sampling on unmodeled dynamics in continuous-time systems are also studied.
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1991 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1991 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1999 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1999 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2003 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2003 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use, practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregation functions for specific applications. Building on a previous monograph by Beliakov et al., published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many existing methods, such as ordered weighted averaging (OWA), fuzzy integrals and mixture functions. No substantial mathematical background is called for, as all the relevant mathematical notions are explained and presented together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
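As a concrete illustration of one family the abstract names, a minimal sketch of the ordered weighted average (OWA) is shown below. The function name and weight convention (descending-order weights) are illustrative assumptions, not taken from the book itself.

```python
def owa(values, weights):
    """Ordered weighted average: sort the inputs in descending order,
    then take a weighted sum with position-based (not input-based) weights.
    The weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Weights (1,0,...,0) recover the maximum; (0,...,0,1) the minimum;
# equal weights recover the arithmetic mean.
print(owa([3.0, 1.0, 2.0], [1.0, 0.0, 0.0]))        # max -> 3.0
print(owa([3.0, 1.0, 2.0], [0.0, 0.0, 1.0]))        # min -> 1.0
print(owa([3.0, 1.0, 2.0], [1/3, 1/3, 1/3]))        # mean -> 2.0
```

Because the weights attach to sorted positions rather than to particular inputs, OWA interpolates between min, mean, and max, which is the sense in which it generalizes the ordinary average.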
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Fastest Distributed Consensus Averaging Problem on Chain of Rhombus Networks
Jafarizadeh, Saber
2010-01-01
Distributed consensus is one of the most fundamental problems in distributed computation, and it has received renewed interest in the field of sensor networks owing to recent advances in wireless communications; solving the fastest distributed consensus averaging problem over networks with different topologies is a primary question in this area. In this work an analytical solution of the fastest distributed consensus averaging problem over chains of rhombus networks is provided. The solution procedure consists of stratification of the network's connectivity graph and semidefinite programming, in particular solving the slackness conditions: the optimal weights are obtained by inductive comparison of the characteristic polynomials arising from the slackness conditions. The characteristic polynomial, together with its roots corresponding to the eigenvalues of the weight matrix, including the SLEM (second-largest eigenvalue modulus) of the network, is also determined inductively. Moreover t...
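The linear iteration that the abstract refers to can be sketched on a toy graph. The weights below are the standard Metropolis-Hastings choice, used here only to show the mechanism x ← Wx converging to the average; they are not the SDP-optimal weights the paper derives, and the path graph is a stand-in, not a rhombus chain.

```python
import numpy as np

# Path graph on 4 nodes; Metropolis-Hastings weights W[i,j] = 1/(1+max(deg_i,deg_j))
# make W symmetric and doubly stochastic, so x <- W x converges to the average.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
deg = [1, 2, 2, 1]
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
for i in range(n):
    W[i, i] = 1.0 - W[i].sum()          # self-weight keeps rows summing to 1

x = np.array([4.0, 0.0, 8.0, 0.0])      # initial node values; average = 3
for _ in range(200):
    x = W @ x                            # each node mixes with its neighbours

print(x)  # all entries close to 3.0
```

The convergence rate is governed by the SLEM of W, which is exactly the quantity the paper optimizes over the free weights.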
Resonance Averaged Photoionization Cross Sections for Astrophysical Models
Bautista, M A; Pradhan, A K
1997-01-01
We present ground state photoionization cross sections of atoms and ions averaged over resonance structures for photoionization modeling of astrophysical sources. The detailed cross sections calculated in the close-coupling approximation using the R-matrix method, with resonances delineated at thousands of energies, are taken from the Opacity Project database TOPbase and the Iron Project, including new data for the low ionization stages of iron Fe I--V. The resonance-averaged cross sections are obtained by convolving the detailed cross sections with a Gaussian distribution over the autoionizing resonances. This procedure is expected to minimize errors in the derived ionization rates that could result from small uncertainties in computed positions of resonances, while preserving the overall resonant contribution to the cross sections in the important near threshold regions. The detailed photoionization cross sections at low photon energies are complemented by new relativistic distorted-wave calculations for Z1...
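The Gaussian-averaging step described above can be sketched numerically: convolve a detailed cross section containing a narrow resonance with a normalized Gaussian. The grid, widths, and cross-section shapes below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Illustrative "detailed" cross section: smooth background plus one narrow
# autoionizing resonance, on a uniform energy grid (arbitrary units).
e = np.linspace(0.0, 10.0, 2001)
de = e[1] - e[0]
sigma_bg = 1.0 / (1.0 + e)                              # smooth background
resonance = 50.0 * np.exp(-((e - 5.0) / 0.02) ** 2)     # narrow resonance
sigma = sigma_bg + resonance

# Normalized Gaussian kernel: averaging preserves the integrated (resonant)
# contribution while washing out the sensitivity to the exact peak position.
width = 0.2
kernel_e = np.arange(-5 * width, 5 * width + de, de)
kernel = np.exp(-(kernel_e / width) ** 2)
kernel /= kernel.sum()

sigma_avg = np.convolve(sigma, kernel, mode="same")

# The peak is strongly reduced, but the area under the curve is preserved
# (up to small edge effects of the finite grid).
print(sigma.max(), sigma_avg.max())
print(sigma.sum() * de, sigma_avg.sum() * de)
```

This is the sense in which resonance averaging minimizes the impact of small errors in computed resonance positions while keeping the overall resonant contribution to the ionization rates.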
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
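An iterative weighted-share calculation of the kind the abstract describes can be sketched as follows. This is a generic max-min-style allocation with weights, written as a plausible illustration; the function name and the exact capping/redistribution rule are assumptions, not the paper's model.

```python
def wfq_shares(link_rate, weights, input_rates):
    """Iteratively assign bandwidth: each unsatisfied flow gets a share of the
    remaining capacity proportional to its weight; a flow whose input rate is
    below its share is capped at that input rate, and the surplus capacity is
    redistributed among the remaining flows on the next pass."""
    n = len(weights)
    alloc = [0.0] * n
    active = set(range(n))
    capacity = link_rate
    while active:
        total_w = sum(weights[i] for i in active)
        capped = {i for i in active
                  if input_rates[i] <= capacity * weights[i] / total_w}
        if not capped:                       # everyone can use their share
            for i in active:
                alloc[i] = capacity * weights[i] / total_w
            break
        for i in capped:                     # cap light flows at their demand
            alloc[i] = input_rates[i]
            capacity -= input_rates[i]
        active -= capped
    return alloc

# 10 Mbit/s link, three flows with weights 1:2:1; flow 0 only sends 1 Mbit/s,
# so its surplus is split 2:1 between the other two flows.
print(wfq_shares(10.0, [1, 2, 1], [1.0, 100.0, 100.0]))  # [1.0, 6.0, 3.0]
```

The loop terminates because each pass either caps at least one flow or allocates everything remaining, so at most n passes are needed.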
Development of average wages in CR regions
Bejvlová, Jana
2013-01-01
The purpose of this study is to analyse trends in the average gross monthly earnings of employees (individuals) in the particular regions of the Czech Republic. The analysed time series begin in 2000, when the regions were definitively established (on 1 January 2000) and self-governing competencies were introduced by Act No. 129/2000 Coll., on Regions (Establishment of Regions). The researched period ends in 2010. Based on model construction of referential sets, the study predicts average ...
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to "big noisy data." We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.
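The element-wise trimmed average underlying TGA's robustness can be sketched in isolation. This is only the robust-averaging ingredient applied per coordinate, not the full Grassmann averaging algorithm, and the function name and trim rule are illustrative assumptions.

```python
import numpy as np

def trimmed_average(X, trim_fraction=0.2):
    """Element-wise trimmed mean across observations (rows of X): for each
    column, drop the k smallest and k largest values before averaging,
    which makes the estimate robust to gross outliers."""
    n = X.shape[0]
    k = int(np.floor(trim_fraction * n))
    Xs = np.sort(X, axis=0)             # sort each column independently
    return Xs[k:n - k].mean(axis=0)     # average only the middle values

# One grossly corrupted "pixel" barely moves the trimmed average.
X = np.array([[1.0, 2.0],
              [1.1, 2.1],
              [0.9, 1.9],
              [1.0, 2.0],
              [100.0, 2.0]])            # outlier in the first column
print(X.mean(axis=0))                   # plain mean is dragged far off
print(trimmed_average(X, 0.2))          # stays near [1.0, 2.0]
```

This is the sense in which pixel outliers, which would dominate an ordinary least-squares average, are simply discarded before the mean is taken.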
Hyperplane Arrangements with Large Average Diameter
Deza, Antoine; Xie, Feng
2007-01-01
The largest possible average diameter of a bounded cell of a simple hyperplane arrangement is conjectured to be not greater than the dimension. We prove that this conjecture holds in dimension 2, and is asymptotically tight in fixed dimension. We give the exact value of the largest possible average diameter for all simple arrangements in dimension 2, for arrangements having at most the dimension plus 2 hyperplanes, and for arrangements having 6 hyperplanes in dimension 3. In dimension 3, we g...
The Hubble rate in averaged cosmology
Umeh, Obinna; Larena, Julien; Clarkson, Chris
2010-01-01
The calculation of the averaged Hubble expansion rate in an averaged perturbed Friedmann-Lemaitre-Robertson-Walker cosmology leads to small corrections to the background value of the expansion rate, which could be important for measuring the Hubble constant from local observations. It also predicts an intrinsic variance associated with the finite scale of any measurement of H_0, the Hubble rate today. Both the mean Hubble rate and its variance depend on both the definition of the Hubble rate ...
Averaging Problem in Cosmology and Macroscopic Gravity
Zalaletdinov, Roustam
2007-01-01
The averaging problem in general relativity and cosmology is discussed. The approach of macroscopic gravity to resolving the problem is presented. An exact cosmological solution to the equations of macroscopic gravity is given and its properties are discussed. Contents: 1. Introduction to General Relativity 2. General Relativity -> Relativistic Cosmology 3. Introduction to Relativistic Cosmology 4. Relativistic Cosmology -> Mathematical Cosmology 5. Averaging Problem in Relativistic Cosmology 6...