Poirier, M
2009-01-01
The behavior of non-local-thermal-equilibrium (NLTE) plasmas plays a central role in many fields of modern physics, such as laser-produced plasmas, astrophysics, inertial- or magnetic-confinement fusion devices, and X-ray sources. A proper description of these media in stationary cases requires solving linear systems of thousands or more rate equations. A possible simplification for this arduous numerical task may lie in some type of statistical average, such as a configuration or superconfiguration average. However, to assess the validity of this procedure and to handle cases where isolated lines play an important role, it may be important to deal with detailed level systems. This involves matrices with sometimes billions of elements, which are rather sparse but still involve thousands of diagonals. We propose here a numerical algorithm based on the LU decomposition for such linear systems. This method turns out to be orders of magnitude faster than traditional Gauss elimination. And at variance with ...
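As a rough illustration of why a sparsity-aware LU factorization pays off on such systems, the sketch below solves a banded sparse system with SciPy. The matrix is a made-up diagonally dominant toy, not the paper's rate matrices; it only mimics the "sparse with a few diagonals" structure described above.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

n = 2000
# Toy stand-in for a rate matrix: diagonally dominant, three diagonals.
A = diags([-1.0, 4.0, -1.0], offsets=[-50, 0, 50], shape=(n, n), format="csc")
b = np.ones(n)

lu = splu(A)        # sparse LU decomposition (fill-reducing ordering)
x = lu.solve(b)     # forward/back substitution

# Residual check: the factorization should reproduce b accurately.
residual = np.linalg.norm(A @ x - b)
```

A dense Gauss elimination on the same system would touch all n² entries; the sparse LU works only on the stored diagonals plus fill-in, which is where the order-of-magnitude speedups come from.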
A procedure to average 3D anatomical structures.
Subramanya, K; Dean, D
2000-12-01
Creating a feature-preserving average of three-dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface-delineating wires represent high-curvature crestlines. By adding tile boundaries in flatter areas, the 3D image surface is parametrized into anatomically labeled (homology-mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape-preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that represents the source images well and may be useful clinically as a deformable model or for animation.
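The core averaging idea can be sketched in a few lines: represent each homologous curve as a B-spline, resample all curves at common parameter values, and average the samples. The noisy circles below stand in for homologous anatomical space curves; this is only a schematic of the resample-then-average step, not the published algorithm.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 40)

# Five noisy copies of a circle stand in for homologous curves.
curves = []
for _ in range(5):
    x = np.cos(t) + 0.02 * rng.standard_normal(t.size)
    y = np.sin(t) + 0.02 * rng.standard_normal(t.size)
    curves.append((x, y))

u_common = np.linspace(0.0, 1.0, 100)
resampled = []
for x, y in curves:
    tck, _ = splprep([x, y], s=0)              # interpolating B-spline fit
    resampled.append(np.array(splev(u_common, tck)))

mean_curve = np.mean(resampled, axis=0)         # shape (2, 100): the average curve
```

The common parameterization is what enforces homology here; in the anatomical setting that role is played by the labeled grid, not by arc length alone.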
Applying Modeling Tools to Ground System Procedures
Di Pasquale, Peter
2012-01-01
As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.
Effects of measurement procedure and equipment on average room acoustic measurements
DEFF Research Database (Denmark)
Gade, Anders Christian; Bradley, J S; Siebein, G W
1993-01-01
... In some of the halls, measurements were repeated using the procedures of the other teams, making it possible to separate the effects of different equipment from those of different procedures. The paper presents position-averaged results from the three teams and discusses reasons for the differences observed...
42 CFR 431.708 - Procedures for applying standards.
2010-10-01
... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE ORGANIZATION AND GENERAL ADMINISTRATION State Programs for Licensing Nursing Home Administrators § 431.708 Procedures for applying standards. The...
Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.
2007-12-01
We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
Directory of Open Access Journals (Sweden)
C. O'Brien
2007-01-01
We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
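A heavily simplified sketch of the underlying idea (not the published CISA algorithm, which handles full affine warps): remove a pure time shift from each signal by cross-correlation before averaging, so the mean reflects shape rather than jitter. The P-wave-like template and the shift values are invented for illustration.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.005)    # a P-wave-like bump

# Jittered copies of the same shape (pure time shifts).
shifts = [-15, -5, 0, 7, 12]
signals = [np.roll(template, s) for s in shifts]

aligned = []
ref = signals[0]
for sig in signals:
    # Lag maximizing the cross-correlation with the reference.
    corr = np.correlate(sig, ref, mode="full")
    lag = int(corr.argmax()) - (len(ref) - 1)
    aligned.append(np.roll(sig, -lag))          # undo the estimated shift

mean_shape = np.mean(aligned, axis=0)           # jitter-free mean shape
```

Averaging the raw, unaligned signals would smear the bump; after alignment the mean retains the template's shape, which is the property the CISA mean formalizes with a proper shape distance.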
An Exact Procedure for the Evaluation of Reference-Scaled Average Bioequivalence.
Tothfalusi, Laszlo; Endrenyi, Laszlo
2016-03-01
Reference-scaled average bioequivalence (RSABE) has been recommended by the Food and Drug Administration (FDA), and in a closely related form by the European Medicines Agency (EMA), for the determination of bioequivalence (BE) of highly variable (HV) and narrow therapeutic index (NTI) drug products. FDA suggested that RSABE be evaluated by an approximating procedure. Development of an alternative, numerically exact approach was sought. A new algorithm, called Exact, was derived for the assessment of RSABE. It is based upon the observation that the statistical model of RSABE follows a noncentral t distribution. The parameters of the distribution were derived for crossover and parallel-group study designs. Simulated BE studies of HV and NTI drugs compared the power and consumer risk of the proposed Exact method with those recommended by FDA and EMA. The Exact method had generally slightly higher power than the FDA approach. The consumer risks of the Exact and FDA procedures were generally below the nominal error risk, except for the partial replicate design under certain heteroscedastic conditions. The estimator of RSABE was biased; simulations demonstrated the appropriateness of Hedges' correction. The FDA approach had another, small but meaningful bias. The confidence intervals of RSABE, based on the derived exact, analytical formulas, are uniformly most powerful. Their computation requires, in standard cases, only a single-line program script. The algorithm assumes that the estimates of the within-subject variances of both formulations are available. With each algorithm, the consumer risk is higher than 5% when the partial replicate design is applied.
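The noncentral-t observation can be illustrated on a toy scale (this is not the paper's Exact algorithm or the regulatory RSABE criterion): the statistic (mean log difference)/(SD/√n) follows a noncentral t distribution, so a confidence bound on the scaled difference can be obtained by inverting noncentral-t quantiles. All numbers below are simulated.

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(2)
n = 24
# Simulated per-subject log(T) - log(R) differences.
diff = rng.normal(loc=0.05, scale=0.25, size=n)

t_obs = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
df = n - 1

# 95% upper bound on the noncentrality parameter: the ncp whose
# 2.5th percentile equals the observed t statistic.
upper_ncp = brentq(lambda ncp: stats.nct.ppf(0.025, df, ncp) - t_obs, -50.0, 50.0)
upper_scaled = upper_ncp / np.sqrt(n)   # bound on (mu_T - mu_R) / sigma_W
```

This pivot-inversion step is the "exact" ingredient: no normal approximation of the scaled criterion is needed, only noncentral-t quantiles.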
Reevaluation of USAFSAM Sampling and Data-Averaging Procedures for Respirator Quantitative Fit Test.
1984-06-01
... other than in connection with a definitely Government-related procurement, the United States Government incurs no responsibility nor any obligation ... Laird, A. Rachel ... sections show that part of the discrepancy is due to an incorrect procedure and part to differences in the definition of average PF. An alternate method ...
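How the "definition of average PF" changes the result is easy to demonstrate: averaging protection factors directly (arithmetic mean) differs sharply from averaging penetrations and inverting (harmonic mean), a convention used in some fit-testing practice. The PF values below are made up for illustration.

```python
import numpy as np

# Hypothetical protection factors from four fit-test exercises.
pf_per_exercise = np.array([200.0, 150.0, 10.0, 300.0])

arithmetic = pf_per_exercise.mean()
# Harmonic mean = averaging penetration (1/PF) and inverting.
harmonic = len(pf_per_exercise) / np.sum(1.0 / pf_per_exercise)

# A single poor exercise (PF = 10) dominates the harmonic mean,
# while the arithmetic mean largely hides it.
```

Here the arithmetic mean is 165 while the harmonic mean is about 35, so the two definitions can disagree by nearly a factor of five on the same data.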
Differential item functioning analysis by applying multiple comparison procedures.
Eusebi, Paolo; Kreiner, Svend
2015-01-01
Analysis within a Rasch measurement framework aims at the development of valid and objective test scores. One requirement of both validity and objectivity is that items show no evidence of differential item functioning (DIF). A number of procedures exist for the assessment of DIF, including those based on analysis of contingency tables by Mantel-Haenszel tests and partial gamma coefficients. The aim of this paper is to illustrate multiple comparison procedures (MCP) for analysis of DIF relative to a variable defining a very large number of groups with an unclear ordering with respect to the DIF effect. We propose a single-step procedure controlling the false discovery rate for DIF detection. The procedure applies to both dichotomous and polytomous items. In addition to providing evidence against a hypothesis of no DIF, the procedure also provides information on subsets of groups that are homogeneous with respect to the DIF effect. A stepwise MCP procedure for this purpose is also introduced.
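A generic sketch of false-discovery-rate control over many group-wise DIF tests, using the standard Benjamini-Hochberg step-up rule (the paper's own single-step procedure is not reproduced here; the p-values are invented):

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Return a boolean array marking p-values rejected at FDR level q."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m        # BH step-up thresholds
    below = p[order] <= thresh
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))   # largest index meeting the bound
        rejected[order[: k + 1]] = True
    return rejected

# Hypothetical p-values from per-group DIF tests on one item.
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6, 0.9]
flags = benjamini_hochberg(pvals, q=0.05)
```

With these inputs only the two smallest p-values survive, while a naive per-test 0.05 threshold would also have flagged 0.039 and 0.041 — the kind of multiplicity error FDR control is meant to prevent.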
A Procedure for Applying a Maturity Model to Process Improvement
Directory of Open Access Journals (Sweden)
Elizabeth Pérez Mergarejo
2014-09-01
A maturity model is an evolutionary roadmap for implementing the vital practices from one or more domains of organizational process. Maturity models are little used in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: preparation, evaluation and improvement plan. Hammer's maturity model, combined with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.
Robust numerical methods for conservation laws using a biased averaging procedure
Choi, Hwajeong
In this thesis, we introduce a new biased averaging procedure (BAP) and use it in developing high-resolution schemes for conservation laws. Systems of conservation laws arise in a variety of physical problems, such as the Euler equations of compressible flow, magnetohydrodynamics, multicomponent flows, blast waves and the flow of glaciers. Many modern shock-capturing schemes are based on solution reconstruction by high-order polynomial interpolation and time evolution by the solutions of Riemann problems. Due to the existence of discontinuities in the solution, the interpolating polynomial has to be carefully constructed to avoid spurious oscillations near them. The BAP is a more general and simpler way to approximate higher-order derivatives of given data without introducing oscillations, compared to limiters and essentially non-oscillatory interpolations. For the solution of a system of conservation laws, we present a finite volume method which employs flux splitting and componentwise reconstruction of the upwind fluxes. A high-order piecewise polynomial constructed using the BAP approximates the components of the upwind fluxes. This scheme requires neither a characteristic decomposition nor a Riemann solver, offering easy implementation and a relatively small computational cost. More importantly, the BAP extends naturally to unstructured grids, as demonstrated through a cell-centered finite volume method with adaptive mesh refinement. A number of numerical experiments from various applications demonstrate the robustness and accuracy of this approach and show its potential for other practical applications.
Depth Averaged Equations Applied To Study of Defense Structures Effects On Dense Avalanche Flows
Naaim, M.; Bouvet-Naaim, F.; Faug, T.; Lachamp, P.
Avalanche zoning and protection devices are complementary tools used to assess avalanche risk and protect people and human activities in mountainous areas. Despite the intensive use of defence structures as protection against avalanches, their hydraulic and structural effects are not well known. Many structures were designed empirically, using expert knowledge or knowledge developed in other domains such as hydraulics. Defence-structure effects in terms of energy dissipation, deviation and snow retention are difficult to study in situ. The cost and difficulty of experiments, the danger, and the small annual number of avalanches at a given site are the reasons why scientists have turned to numerical models or laboratory physical models. This paper presents and discusses the possibility of using depth-averaged equations to study dense avalanche flows around defence structures. The numerical resolution method is based on an upwind scheme. Equations are integrated on each cell of the mesh, and the numerical fluxes are calculated by a simplified Riemann solver in which the retained solution is obtained as a combination of shock and rarefaction functions. This allows topography variations and the presence of jets and surges to be taken into account. These two capabilities are needed because both experimental and in-situ observations have shown significant topography modifications and the formation of jets and surges during the interaction between avalanche flows and structures. The case of vertical surfaces, such as concrete walls intended to deviate flows, is treated by appropriate boundary-condition functions. The best way to integrate defence structures into such a model is presented and discussed. The model was first tested against analytical solutions and against results from a laboratory-scale experimental model. These tests have shown the capacity of this model, despite the strong hypotheses, to ...
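The basic ingredient of such schemes — a conservative upwind finite-volume update — can be sketched on the 1D linear advection equation u_t + a u_x = 0 (the full depth-averaged avalanche model with its Riemann solver is far beyond this snippet; the grid and pulse are invented):

```python
import numpy as np

a = 1.0                        # advection speed (a > 0: upwind = left cell)
nx = 200
dx = 1.0 / nx
dt = 0.4 * dx / a              # CFL-stable timestep
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse

for _ in range(100):
    flux = a * u                                     # upwind flux
    u = u - dt / dx * (flux - np.roll(flux, 1))      # periodic boundaries

# The conservative form preserves total mass exactly, and the monotone
# upwind flux keeps the solution free of new extrema near the jumps.
total_mass = u.sum() * dx
```

Higher-order schemes replace the piecewise-constant `flux` with a reconstructed polynomial and a (possibly approximate) Riemann solver at each interface, but the flux-difference update and the CFL restriction carry over unchanged.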
Prioritizing Policies for Pro-Poor Growth : Applying Bayesian Model Averaging to Vietnam
Klump, R.; Prüfer, P.
2006-01-01
Pro-Poor Growth (PPG) is the vision of combining high growth rates with poverty reduction. Due to the myriad of possible determinants of growth and poverty, however, a unique theoretical model for guiding empirical work on PPG is absent. Bayesian Model Averaging is a statistically robust framework for t
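The Bayesian Model Averaging idea can be sketched in miniature: approximate posterior model weights (here via the common BIC approximation) and form a weighted average of model-specific estimates, rather than committing to a single specification. The BIC values and estimates below are hypothetical, and this is not the paper's growth-regression setup.

```python
import numpy as np

bic = np.array([100.0, 102.0, 110.0])      # hypothetical BICs of 3 rival models
estimates = np.array([0.8, 0.5, -0.2])     # each model's estimate of one effect

# Posterior model weights from the BIC approximation:
# w_i proportional to exp(-BIC_i / 2).
delta = bic - bic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

bma_estimate = float(weights @ estimates)  # model-averaged effect estimate
```

The poorly supported third model (ΔBIC = 10) receives almost no weight, so its contrarian estimate barely moves the average — exactly the robustness to model choice that motivates BMA in growth empirics.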
1990-11-01
The findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or ... "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate ...
A unified framework for benchmark dose estimation applied to mixed models and model averaging
DEFF Research Database (Denmark)
Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.
2013-01-01
This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...
Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks
Directory of Open Access Journals (Sweden)
Shen-Chun Wu
2003-01-01
This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaging technique, the vortical structures of the flow could be reconstructed and their dynamic behavior observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary-layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.
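Phase averaging itself is simple to sketch: samples of a periodic flow are binned by phase (e.g. disk rotation angle) and averaged within each bin, so the phase-locked coherent structure survives while uncorrelated turbulence averages out. The synthetic "signal" below has three lobes standing in for three vortical structures per revolution; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
phase = rng.uniform(0.0, 2.0 * np.pi, n)            # measured phase per sample
# Coherent 3-lobe pattern plus strong uncorrelated noise.
signal = np.sin(3.0 * phase) + 0.5 * rng.standard_normal(n)

nbins = 60
bins = (phase / (2.0 * np.pi) * nbins).astype(int)
counts = np.bincount(bins, minlength=nbins)
phase_avg = np.bincount(bins, weights=signal, minlength=nbins) / counts
```

With a few hundred samples per bin the noise shrinks by roughly 1/√N, and the recovered phase-averaged profile closely tracks the underlying three-lobe structure.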
Mulac, Richard A.; Schneider, Jon C.; Adamczyk, John J.
1989-01-01
Counter-rotating propfan (CRP) propulsion technologies are currently being evaluated as cruise missile propulsion systems. The aerodynamic integration concerns associated with this application are being addressed through the computational modeling of the missile body-propfan flowfield interactions. The work described in this paper consists of a detailed analysis of the aerodynamic interactions between the control surfaces and the propfan blades through the solution of the average-passage equation system. Two baseline configurations were studied, the control fins mounted forward of the counter-rotating propeller and the control fins mounted aft of the counter-rotating propeller. In both cases, control fin-propfan separation distance and control fin deflection angle were varied.
Averaged Stokes polarimetry applied to evaluate retardance and flicker in PA-LCoS devices.
Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Ortuño, Manuel; Francés, Jorge; Beléndez, Augusto; Pascual, Inmaculada
2014-06-16
Recently we proposed a novel polarimetric method, based on Stokes polarimetry, enabling the characterization of the linear retardance and its flicker amplitude in electro-optic devices behaving as variable linear retarders. In this work we apply the technique extensively to parallel-aligned liquid crystal on silicon (PA-LCoS) devices under the most typical working conditions. As a preliminary step we provide an experimental analysis delimiting the robustness of the technique in terms of repeatability and reproducibility. We then analyze the dependence of retardance and flicker on different digital sequence formats and on a wide variety of working geometries.
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet CPs are a reality of new plant builds and an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management, eliminating the need to update hardcopy procedures. The overall intent of this paper is to characterize human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
A bidirectional coupling procedure applied to multiscale respiratory modeling
Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton's method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural
A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling☆
Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.
2012-01-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple
A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.
Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R
2013-07-01
In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets
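The "pressure-drop residual" coupling can be caricatured with 0D stand-ins (all parameters hypothetical, and nothing here reproduces the paper's Krylov-accelerated Newton scheme): an "upper airway" model gives flow as a function of outlet pressure, a "distal lung" model gives the flow it absorbs at that pressure, and the coupled state is the interface pressure where the two flows match.

```python
from scipy.optimize import brentq

# Hypothetical lumped parameters (arbitrary consistent units).
P_mouth, R_upper = 0.0, 2.0          # mouth pressure, upper-airway resistance
P_pleural, C_distal = -5.0, 0.8      # pleural pressure, distal conductance

def flow_upper(p_out):
    # Resistive upper-airway flow driven by the mouth-to-outlet drop.
    return (P_mouth - p_out) / R_upper

def flow_distal(p_out):
    # Compliant distal region: flow proportional to distending pressure.
    return C_distal * (p_out - P_pleural)

# Coupling residual: mismatch between the flows the two models predict.
residual = lambda p: flow_upper(p) - flow_distal(p)

p_star = brentq(residual, -10.0, 10.0)   # coupled interface pressure
q_star = flow_upper(p_star)              # coupled flow
```

In the real framework each "model" evaluation is an expensive 3D CFD solve or an ODE integration, which is why driving this residual to zero with as few evaluations as possible (via the accelerated Newton iteration) matters.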
Energy Technology Data Exchange (ETDEWEB)
Yang, Hongwei [APEC Climate Center, Busan (Korea, Republic of); Wang, Bin [University of Hawaii at Manoa, Department of Meteorology, Honolulu, HI (United States); University of Hawaii at Manoa, International Pacific Research Center, Honolulu, HI (United States); Wang, Bin [Chinese Academy of Sciences, LASG, Institute of Atmospheric Physics, Beijing (China)
2012-11-15
Reduction of uncertainty in large-scale lateral-boundary forcing in regional climate modeling is a critical issue for improving the performance of regional climate downscaling. Numerical simulations of 1998 East Asian summer monsoon were conducted using the Weather Research and Forecast model forced by four different reanalysis datasets, their equal-weight ensemble, and Bayesian model averaging (BMA) ensemble means. Large discrepancies were found among experiments forced by the four individual reanalysis datasets mainly due to the uncertainties in the moisture field of large-scale forcing over ocean. We used satellite water-vapor-path data as observed truth-and-training data to determine the posterior probability (weight) for each forcing dataset using the BMA method. The experiment forced by the equal-weight ensemble reduced the circulation biases significantly but reduced the precipitation biases only moderately. However, the experiment forced by the BMA ensemble outperformed not only the experiments forced by individual reanalysis datasets but also the equal-weight ensemble experiment in simulating the seasonal mean circulation and precipitation. These results suggest that the BMA ensemble method is an effective method for reducing the uncertainties in lateral-boundary forcing and improving model performance in regional climate downscaling. (orig.)
Scholz, Roland W; Hansmann, Ralf
2007-02-01
Expert panels and averaging procedures are common means for coping with the uncertainty of effects of technology application in complex environments. We investigate the connection between confidence and the validity of expert judgment. Moreover, a formative consensus building procedure (FCB) is introduced that generates probability statements on the performance of technologies, and we compare different algorithms for the statistical aggregation of individual judgments. The case study refers to an expert panel of 10 environmental scientists assessing the performance of a soil cleanup technology that uses the capability of certain plants to accumulate heavy metals from the soil in the plant body (phytoremediation). The panel members first provided individual statements on the effectiveness of phytoremediation. Such statements can support policymakers in answering questions concerning the expected performance of the new technology in contaminated areas. The present study reviews (1) the steps of the FCB, (2) the constraints of technology application (contaminants, soil structure, etc.), (3) the measurement of expert knowledge, (4) the statistical averaging and the discursive agreement procedures, and (5) the boundaries of application of the FCB method. The quantitative, statement-oriented part of FCB generates statements such as: "The probability that the concentration of soil contamination will be reduced by at least 50% is 0.8." The data suggest that taking the median of the individual expert estimates provides the most accurate aggregated estimate. The discursive agreement procedure of FCB appears suitable for deriving politically relevant singular statements rather than for obtaining comprehensive information about uncertainties as represented by probability distributions.
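The finding that the median of individual expert estimates gives the most accurate aggregate is easy to illustrate: with one outlying expert, the median stays near the consensus while the mean is pulled away. The probabilities below are invented for demonstration.

```python
import numpy as np

# Hypothetical expert probability estimates; the last expert is an outlier.
expert_probs = np.array([0.75, 0.80, 0.85, 0.78, 0.82, 0.10])

mean_estimate = expert_probs.mean()        # dragged down by the outlier
median_estimate = float(np.median(expert_probs))  # robust to the outlier
```

Here the median (0.79) sits inside the consensus cluster, while the mean (about 0.68) is pulled well below every non-outlying expert's estimate — the robustness that favors the median as an aggregation rule.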
49 CFR 40.383 - What procedures apply if you contest the issuance of a PIE?
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What procedures apply if you contest the issuance of a PIE? 40.383 Section 40.383 Transportation Office of the Secretary of Transportation PROCEDURES... What procedures apply if you contest the issuance of a PIE? (a) DOT conducts PIE proceedings in a...
34 CFR 370.43 - What requirement applies to the use of mediation procedures?
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false What requirement applies to the use of mediation... applies to the use of mediation procedures? (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures...
Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process
Motley, Albert E., III
2000-01-01
One of the key elements of successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many of the Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.
Collignan, Bernard; Powaga, Emilie
2014-11-01
Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because the presence of radon indoors can vary greatly over time. This measurement protocol is fairly reliable but can be limiting in radon risk management, particularly during a real-estate transaction, owing to the duration of the measurement and the restriction on the measurement period. A previous field study defined a rapid methodology to characterize radon entry into dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess the relevance of a daily test; and second, to use a ventilation model to assess numerically the air renewal of a building, the indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values. These results are encouraging for allowing a procedure with a short measurement time to be used to characterize the long-term radon potential of dwellings.
21 CFR 1315.22 - Procedure for applying for individual manufacturing quotas.
2010-04-01
... manufacturing quotas. 1315.22 Section 1315.22 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF... Individual Manufacturing Quotas § 1315.22 Procedure for applying for individual manufacturing quotas. Any... desires to manufacture a quantity of the chemical must apply on DEA Form 189 for a manufacturing quota...
Espinar, B.; Blanc, P.; Wald, L.; Hoyer-Klick, C.; Schroedter-Homscheidt, M.; Wanderer, T.
2012-04-01
Meteorological data measured by ground stations are often a key element in the development and validation of methods exploiting satellite images. These data are considered a reference against which satellite-derived estimates are compared. Long-term radiation and meteorological measurements are available from a large number of measuring stations. However, close examination of the data often reveals a lack of quality, frequently for extended periods of time. This lack of quality has, in many cases, been the reason for rejecting large amounts of available data. Data quality must be checked before use in order to guarantee the inputs for the methods used in modelling, monitoring, forecasting, etc. To control their quality, data should be subjected to several conditions or tests. After this checking, data not flagged by any of the tests are released as plausible data. In this work, a bibliographical survey of quality control tests was performed for the common meteorological variables (ambient temperature, relative humidity and wind speed) and for the usual solar radiometric variables (the horizontal global and diffuse components of solar radiation and the beam normal component). The different tests have been grouped according to the variable and the averaging period (sub-hourly, hourly, daily and monthly averages). The quality tests may be classified as follows: • Range checks: tests that verify values are within a specific range. There are two types of range checks, those based on extrema and those based on rare observations. • Step checks: tests aimed at detecting unrealistic jumps or stagnation in the time series. • Consistency checks: tests that verify the relationship between two or more time series. The gathered quality tests are applicable at all latitudes, as they have not been optimized regionally or seasonally, with the aim of being generic. They have been applied to ground measurements in several geographic locations.
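The three families of tests listed above can be sketched as simple vectorized flags; the thresholds and variable names below are illustrative assumptions, not values from the survey:

```python
import numpy as np

def range_check(x, lo, hi):
    """Flag values outside physically plausible extrema."""
    return (x < lo) | (x > hi)

def step_check(x, max_step):
    """Flag unrealistic jumps between consecutive samples."""
    d = np.abs(np.diff(x, prepend=x[0]))
    return d > max_step

def consistency_check(global_h, diffuse_h):
    """Flag samples where diffuse irradiance exceeds global irradiance,
    which is physically impossible."""
    return diffuse_h > global_h

# Ambient temperature series with one out-of-range value (degC).
temp = np.array([21.0, 21.5, 60.0, 22.0])
print(range_check(temp, -40.0, 50.0))
```

A sample passing all applicable checks would then be released as plausible, mirroring the flagging logic described in the abstract.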
18 CFR 284.502 - Procedures for applying for market-based rates.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Procedures for applying for market-based rates. 284.502 Section 284.502 Conservation of Power and Water Resources FEDERAL... POLICY ACT OF 1978 AND RELATED AUTHORITIES Applications for Market-Based Rates for Storage §...
21 CFR 1303.22 - Procedure for applying for individual manufacturing quotas.
2010-04-01
... manufacturing quotas. 1303.22 Section 1303.22 Food and Drugs DRUG ENFORCEMENT ADMINISTRATION, DEPARTMENT OF JUSTICE QUOTAS Individual Manufacturing Quotas § 1303.22 Procedure for applying for individual manufacturing quotas. Any person who is registered to manufacture any basic class of controlled substance...
34 CFR 222.158 - What procedures apply to the Secretary's review of an initial decision?
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false What procedures apply to the Secretary's review of an initial decision? 222.158 Section 222.158 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION IMPACT AID PROGRAMS...
14 CFR 382.127 - What procedures apply to stowage of battery-powered mobility aids?
2010-01-01
...-powered mobility aids? 382.127 Section 382.127 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT... DISABILITY IN AIR TRAVEL Stowage of Wheelchairs, Other Mobility Aids, and Other Assistive Devices § 382.127 What procedures apply to stowage of battery-powered mobility aids? (a) Whenever baggage...
13 CFR 124.1010 - What procedures apply to disadvantaged status protests?
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What procedures apply to disadvantaged status protests? 124.1010 Section 124.1010 Business Credit and Assistance SMALL BUSINESS..., Certification, and Protests Relating to Federal Small Disadvantaged Business Programs § 124.1010 What...
Computational Comminution and Its Key Technologies Applied to Materials Processing Procedure
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
A new concept named computational comminution is proposed in this paper. Unlike traditional studies of the materials processing procedure, which are based on theoretical or experimental models, computational comminution is based on information models. Key technologies applicable to the materials processing procedure, such as artificial neural networks, fuzzy sets, genetic algorithms and visualization technology, are also presented, and a methodology for fusing these technologies is studied. Application to the cement grinding process of a Horomill shows that the results in this paper are efficient.
Energy Technology Data Exchange (ETDEWEB)
Maghraby, Ahmed M., E-mail: maghrabism@yahoo.com [National Institute of Standards (NIS), Radiation Dosimetry Department, Ministry of Scientific Research, Tersa Street, P.O. Box 136, Giza, Haram 12211 (Egypt); Physics Department, Faculty of Science and Humanities, Salman Bin AbdulAziz University, Alkharj (Saudi Arabia)
2014-02-11
Alanine/EPR is the most common dosimetry system for high radiation doses because of its high stability and wide linear response; however, the use of alanine in most medical applications still requires special, sophisticated methodologies and techniques in order to extend the alanine detection limit to low radiation doses. One of these techniques is the digital processing of acquired alanine spectra, enhancing the useful components of the spectra while suppressing useless features. The impact of a simple moving average (MA) filter on alanine EPR spectra has been studied in terms of peak-to-peak height, peak-to-peak line width, and associated uncertainty. Three variants of the filter were investigated: upward MA, central MA, and downward MA; the effect of each on the peak position was studied for different filter widths. It was found that the MA filter always leads to a reduction in signal intensity and an increase in the line width of the central peak of the alanine spectrum. The peak position also changes for the upward and downward MA filters, while no significant changes were observed for the central MA. Uncertainties associated with the averaging process were evaluated and plotted versus the filter width, yielding a linear relationship. The filter width should be carefully selected in order to avoid distortion of the processed spectra while obtaining less noisy spectra with smaller associated uncertainties.
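The qualitative behaviour described here (amplitude loss and broadening for any MA filter, peak shift only for the one-sided variants) can be reproduced on a synthetic line; the Gaussian stand-in below is an assumption, not an actual alanine spectrum:

```python
import numpy as np

def central_ma(y, w):
    """Centered moving average of odd width w (zero-phase: no peak shift)."""
    k = np.ones(w) / w
    return np.convolve(y, k, mode="same")

def trailing_ma(y, w):
    """Downward (trailing) moving average: averages the current point
    and the w-1 preceding points, which delays the peak."""
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = y[max(0, i - w + 1):i + 1].mean()
    return out

x = np.linspace(-5, 5, 1001)
peak = np.exp(-x**2)  # synthetic stand-in for one spectral line
for w in (5, 21, 51):
    sm = central_ma(peak, w)
    # wider filter -> lower amplitude; trailing filter -> peak moves right
    print(w, round(sm.max(), 3), round(x[np.argmax(trailing_ma(peak, w))], 2))
```

Increasing `w` lowers the maximum (intensity reduction) and widens the line, while only the trailing variant displaces the peak position, matching the abstract's findings.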
Applying procedural justice theory to law enforcement's response to persons with mental illness.
Watson, Amy C; Angell, Beth
2007-06-01
Procedural justice provides a framework for considering how persons with mental illness experience interactions with the police and how officer behaviors may shape cooperation or resistance. The procedural justice perspective holds that the fairness with which people are treated in an encounter with authority figures (such as the police) influences whether they cooperate or resist authority. Key components of a procedural justice framework include participation (having a voice), which involves having the opportunity to present one's own side of the dispute and be heard by the decision maker; dignity, which includes being treated with respect and politeness and having one's rights acknowledged; and trust that the authority is concerned with one's welfare. Procedural justice has its greatest impact early in the encounter, suggesting that how officers initially approach someone is extremely important. Persons with mental illness may be particularly attentive to how they are treated by police. According to this framework, people who are uncertain about their status (such as members of stigmatized groups) will respond most strongly to the fairness by which police exercise their authority. This article reviews the literature on police response to persons with mental illness. Procedural justice theory as it has been applied to mental health and justice system contexts is examined. Its application to encounters between police and persons with mental illness is discussed. Implications and cautions for efforts to improve police response to persons with mental illness and future research also are examined.
Energy Technology Data Exchange (ETDEWEB)
Tari, H., E-mail: tari.1@osu.edu; Scheidler, J.J., E-mail: scheidler.8@osu.edu; Dapino, M.J., E-mail: dapino.1@osu.edu
2015-06-15
A reformulation of the Discrete Energy-Averaged model for the calculation of 3D hysteretic magnetization and magnetostriction of iron-gallium (Galfenol) alloys is presented in this paper. An analytical solution procedure based on an eigenvalue decomposition is developed. This procedure avoids the singularities present in the existing approximate solution by offering multiple local minimum energy directions for each easy crystallographic direction. This improved robustness is crucial for use in finite element codes. Analytical simplifications of the 3D model to 2D and 1D applications are also presented. In particular, the 1D model requires calculation for only one easy direction, while all six easy directions must be considered for general applications. Compared to the approximate solution procedure, it is shown that the resulting robustness comes at no expense for 1D applications, but requires almost twice the computational effort for 3D applications. To find model parameters, we employ the average of the hysteretic data, rather than anhysteretic curves, which would require additional measurements. An efficient optimization routine is developed that retains the dimensionality of the prior art. The routine decouples the parameters into exclusive sets, some of which are found directly through a fast preprocessing step to improve accuracy and computational efficiency. The effectiveness of the model is verified by comparison with existing measurement data. - Highlights: • The discrete energy-averaged model for Galfenol is reformulated. • An analytical solution for 3D magnetostriction and magnetization is developed from eigenvalue decomposition. • Improved robustness is achieved. • An efficient optimization routine is developed to identify parameters from averaged hysteresis curves. • The effectiveness of the model is demonstrated against experimental data.
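The paper's energy-averaged model is not reproduced here, but the core idea, that minimizing a quadratic energy over unit vectors is solved exactly by an eigendecomposition, can be illustrated in a few lines (the matrix `K` is an arbitrary example, not Galfenol data):

```python
import numpy as np

# Minimizing a quadratic energy E(m) = m^T K m over unit vectors m
# is solved exactly by the eigenvector of K with the smallest eigenvalue.
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 3.0]])
w, V = np.linalg.eigh(K)   # eigenvalues ascending, orthonormal eigenvectors
m_min = V[:, 0]            # minimum-energy direction (unit norm)
print(w[0], m_min)
```

In the model itself one such minimization would be associated with each easy crystallographic direction, which is why the 1D simplification (one easy direction) is so much cheaper than the general 3D case (six easy directions).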
Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas
Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.
In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties; possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum DEM resolution required for flood simulation and the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. In particular, the socio-economic information and monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore, this study helps to identify the weak points of the flood risk and damage risk assessment procedure.
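A minimal sketch of the Monte Carlo step, perturbing the DEM with an assumed vertical error and counting flooded outcomes per cell, might look as follows (elevations, water level, and error magnitude are invented, and FLOODMAP's river-connectivity logic is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

dem = np.array([[2.0, 1.5, 1.0],
                [2.5, 1.2, 0.8],
                [3.0, 2.0, 1.1]])   # hypothetical cell elevations (m)
water_level = 1.3                   # planar water surface (m)
sigma = 0.3                         # assumed DEM vertical error, std dev (m)

n = 2000
flooded_count = np.zeros_like(dem)
for _ in range(n):
    # Each realization perturbs every cell independently.
    perturbed = dem + rng.normal(0.0, sigma, dem.shape)
    flooded_count += (perturbed < water_level)

flood_probability = flooded_count / n   # per-cell flooding probability
print(flood_probability)
```

Cells well below the water plain approach probability 1 and cells well above it approach 0; cells within one or two sigma of the water level are where DEM quality dominates the result.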
A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics
DEFF Research Database (Denmark)
Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist;
2015-01-01
The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration exercise, thereby bypassing the challenging task of model structure determination and identification. Parameter identification problems can thus lead to ill-calibrated models with low predictive power and large model uncertainty. Every calibration exercise should therefore be preceded by a proper model ... and Pronzato (1997), and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring ...
Coates, P J; d'Ardenne, A J; Khan, G; Kangro, H O; Slavin, G
1991-02-01
The polymerase chain reaction was applied to the analysis of DNA contained in archival paraffin wax embedded material. DNA suitable for the reaction was obtained from these tissues by simple extraction methods, without previous dewaxing of tissue sections. When compared with unfixed material, the reaction efficiency was compromised, so that an increased number of amplification cycles was required to produce equivalent amounts of amplified product. This in turn led to an increase in amplification artefacts, which can be minimised by a simple modification of the standard reaction. Amplification of relatively large DNA fragments was not always successful, and it seems prudent to bear this in mind when designing oligonucleotide primers which are to be used for the amplification of archival material. The efficiency of the procedure can be improved by dividing the amplification cycles into two parts: this reduces the amount of reagent needed, is relatively simple and inexpensive, and can be performed in one working day.
23 CFR 636.210 - What requirements apply to projects which use the modified design-build procedure?
2010-04-01
... modified design-build procedure? 636.210 Section 636.210 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS DESIGN-BUILD CONTRACTING Selection Procedures, Award Criteria § 636.210 What requirements apply to projects which use the modified...
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
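The article's MINLP formulation with Kalman-filter likelihoods is not reproduced here; as a simplified illustration of order selection by information criterion, the sketch below fits pure AR models by least squares and picks the order minimizing AIC (all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t.
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + rng.normal()

def fit_ar(y, p):
    """Least-squares AR(p) fit; returns coefficients and residual variance."""
    Y = y[p:]
    X = np.column_stack([y[p-k:len(y)-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return coef, resid.var()

def aic(y, p):
    """AIC for a Gaussian AR(p) model (constant terms dropped)."""
    _, s2 = fit_ar(y, p)
    return len(y) * np.log(s2) + 2 * (p + 1)

# Brute-force enumeration over candidate orders, as the article uses
# for comparison against its MINLP solvers.
best_p = min(range(1, 7), key=lambda p: aic(y, p))
print(best_p)
```

The MINLP approach replaces this exhaustive enumeration with a direct search over the integer orders jointly with the real-valued coefficients.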
A diagnostic procedure for applying the social-ecological systems framework in diverse cases
Directory of Open Access Journals (Sweden)
Jochen Hinkel
2015-03-01
The framework for analyzing the sustainability of social-ecological systems (the SES framework) of Elinor Ostrom is a multitier collection of concepts and variables that have proven to be relevant for understanding outcomes in diverse SES. The first tier of this framework includes the concepts resource system (RS) and resource units (RU), which are then further characterized through lower-tier variables such as clarity of system boundaries and mobility. The long-term goal of framework development is to derive conclusions about which combinations of variables explain outcomes across diverse types of SES. This will only be possible if the concepts and variables of the framework can be made operational unambiguously for the different types of SES, which, however, remains a challenge. Reasons for this are that case studies examine other types of RS than those for which the framework was developed, or consider RS for which different actors obtain different kinds of RU. We explore these difficulties and relate them to antecedent work on common-pool resources and public goods. We propose a diagnostic procedure which resolves some of these difficulties by establishing a sequence of questions that facilitate the step-wise and unambiguous application of the SES framework to a given case. The questions relate to the actors benefiting from the SES, the collective goods involved in the generation of those benefits, and the action situations in which the collective goods are provided and appropriated. We illustrate the diagnostic procedure for four case studies in the context of irrigated agriculture in New Mexico, common property meadows in the Swiss Alps, recreational fishery in Germany, and energy regions in Austria. We conclude that the current SES framework has limitations when applied to complex, multiuse SES, because it does not sufficiently capture the actor interdependencies introduced through RS and RU characteristics and dynamics.
Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.
2013-01-01
Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
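The GLSOP's exact thresholds are not reproduced here; as a generic illustration of the noise-subtraction step only, volume backscattering strength must be converted to the linear domain before background noise can be subtracted, because Sv is logarithmic (all values below are invented):

```python
import numpy as np

def subtract_noise_db(sv_db, noise_db):
    """Subtract a background noise level from volume backscattering
    strength (Sv, dB) by working in the linear domain."""
    sv_lin = 10.0 ** (np.asarray(sv_db) / 10.0)
    noise_lin = 10.0 ** (noise_db / 10.0)
    diff = np.maximum(sv_lin - noise_lin, 1e-30)  # clip to avoid log of <= 0
    return 10.0 * np.log10(diff)

sv = np.array([-60.0, -75.0, -88.0])  # hypothetical Sv samples (dB)
print(subtract_noise_db(sv, -90.0))   # assumed noise floor of -90 dB
```

Samples far above the noise floor are nearly unchanged, while samples near it drop sharply, which is the predictable density-decreasing effect of noise subtraction noted in the abstract.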
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false What procedures apply to the selection of programs and activities under these regulations? 79.6 Section 79.6 Education Office of the Secretary, Department of Education INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF EDUCATION PROGRAMS AND ACTIVITIES § 79.6 What...
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false What procedures apply for issuing or appealing an administrative law judge's decision? 222.157 Section 222.157 Education Regulations of the Offices of the Department of Education OFFICE OF ELEMENTARY AND SECONDARY EDUCATION, DEPARTMENT OF EDUCATION IMPACT...
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Do the same accountability and control procedures described above apply to Federal property? 900.58 Section 900.58 Indians BUREAU OF INDIAN AFFAIRS... Organization Management Systems Property Management System Standards § 900.58 Do the same accountability...
Skinner, Christopher H.; McCleary, Daniel F.; Skolits, Gary L.; Poncy, Brian C.; Cates, Gary L.
2013-01-01
The success of Response-to-Intervention (RTI) and similar models of service delivery is dependent on educators being able to apply effective and efficient remedial procedures. In the process of implementing problem-solving RTI models, school psychologists have an opportunity to contribute to and enhance the quality of our remedial-procedure…
D'Amico, Moreno; D'Amico, Gabriele; Paniccia, Michele; Roncoletta, Piero; Vallasciani, Massimo
2010-01-01
Spine and posture disorders are of major interest in rehabilitation. Quantitative functional evaluation is the main goal of movement/gait analysis. However, very few studies describe the behaviour of the spine during posture and movement/gait analysis. To overcome these limits, several years ago our group started a project to transfer different segmental biomechanical models presented in the literature into a complete, fully 3D, reliable and detailed representation. As a result, a complete 3D parametric biomechanical skeleton model has been developed for use in quantitative analysis. Posture and movement/gait analysis are performed by 3D opto-electronic stereophotogrammetric measurements of body landmarks labelled by passive markers. Depending on the purpose of the analysis, the model can work at different levels of complexity. Examples of the application of this model in biomechanical and clinical fields have been presented in the literature. Our group is continuously adding new features to the model, which is now able to fully integrate data from force platforms, SEMG, and foot pressure maps. By means of data fusion and optimisation procedures, all these inputs are used in the model to assess lower-limb internal joint forces, torques and muscular efforts. The possibility of computing the average of cyclic or repetitive tasks has been included as well. Recently, we added the ability to assess internal joint forces and torques at each vertebral level of the spine and to correlate these with all the other features of the model. The aim of this study is to present the methodological aspects of these new features and their potential applicability in clinical and biomechanical fields.
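Averaging cyclic tasks of unequal duration, as mentioned above, typically requires time-normalizing each cycle before averaging; a minimal sketch of that step (with synthetic joint-angle cycles, not data from the model described) is:

```python
import numpy as np

def average_cycles(cycles, n_points=101):
    """Time-normalize variable-length cycles to n_points samples
    (0-100% of cycle) and return the mean and standard deviation curves."""
    grid = np.linspace(0.0, 1.0, n_points)
    resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(c)), c)
                 for c in cycles]
    stacked = np.vstack(resampled)
    return stacked.mean(axis=0), stacked.std(axis=0)

# Three hypothetical joint-angle cycles of unequal duration (samples).
cycles = [np.sin(np.linspace(0, 2 * np.pi, n)) for n in (90, 100, 115)]
mean_curve, sd_curve = average_cycles(cycles)
print(mean_curve.shape)
```

The standard-deviation curve gives a per-instant variability band around the average cycle, which is the usual way repeatability of gait tasks is reported.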
Fokkinga, Wietske A; van Uchelen, Judith; Witter, Dick J; Mulder, Jan; Creugers, Nico H J
2016-01-01
This pilot study analyzed impression procedures for conventional metal frame removable partial dentures (RPDs). Heads of RPD departments of three dental laboratories were asked to record features of all incoming impressions for RPDs during a 2-month period. Records included: (1) impression procedure, tray type (stock/custom), impression material (elastomer/alginate), use of border-molding material (yes/no); and (2) RPD type requested (distal-extension/tooth-bounded/combination). Of the 132 total RPD impressions, 111 (84%) involved custom trays, of which 73 (55%) were combined with an elastomer. Impression border-molding material was used in 4% of the cases. Associations between impression procedure and RPD type or dentists' year/university of graduation were not found.
Applying a Systemic Procedure to Locate Career Decision-Making Difficulties
Gati, Itamar; Amir, Tamar
2010-01-01
Locating clients' career decision-making difficulties is one of the first steps in career counseling. The authors demonstrate the feasibility and utility of a systematic 4-stage procedure for locating and interpreting career decision-making difficulties by analyzing responses of 626 college students (collected by Tai, 2007) to the Career…
Calculation of the information content of retrieval procedures applied to mass spectral data bases
Marlen, G. van; Dijkstra, Auke; Klooster, H.A. van 't
1979-01-01
A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity o
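The reference-file statistics enter such an estimate through the per-feature probabilities; assuming independent binary features (an idealization, since correlations between masses reduce the true figure), the average information content is the summed binary entropy:

```python
import numpy as np

def information_content(p):
    """Average information (bits) carried by independent binary features
    with set-probabilities p; features that are always 0 or always 1
    carry no information."""
    p = np.asarray(p, dtype=float)
    h = np.zeros_like(p)
    mask = (p > 0) & (p < 1)
    q = p[mask]
    h[mask] = -q * np.log2(q) - (1 - q) * np.log2(1 - q)
    return h.sum()

# Hypothetical per-mass-channel probabilities that a peak exceeds
# the 1% intensity threshold in the reference file.
p = [0.5, 0.1, 0.9, 1.0, 0.0]
print(information_content(p))
```

A channel set in half the spectra contributes one full bit, rare or near-universal channels contribute little, and constant channels nothing, which is why the statistics of the reference file determine the retrieval system's information content.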
Martino, K G; Marks, B P
2007-12-01
Two different microbial modeling procedures were compared and validated against independent data for Listeria monocytogenes growth. The most generally used method is two consecutive regressions: growth parameters are estimated from a primary regression of microbial counts, and a secondary regression relates the growth parameters to experimental conditions. A global regression is an alternative method in which the primary and secondary models are combined, giving a direct relationship between experimental factors and microbial counts. The Gompertz equation was the primary model, and a response surface model was the secondary model. Independent data from meat and poultry products were used to validate the modeling procedures. The global regression yielded the lower standard errors of calibration, 0.95 log CFU/ml for aerobic and 1.21 log CFU/ml for anaerobic conditions. The two-step procedure yielded errors of 1.35 log CFU/ml for aerobic and 1.62 log CFU/ml for anaerobic conditions. For food products, the global regression was more robust than the two-step procedure for 65% of the cases studied. The robustness index for the global regression ranged from 0.27 (performed better than expected) to 2.60. For the two-step method, the robustness index ranged from 0.42 to 3.88. The predictions were overestimated (fail safe) in more than 50% of the cases using the global regression and in more than 70% of the cases using the two-step regression. Overall, the global regression performed better than the two-step procedure for this specific application.
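As a sketch of the primary-model step only (not the global regression itself), a Gompertz growth curve in the Zwietering parameterization can be fitted to synthetic log-count data; the parameter values and noise level below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Modified Gompertz growth curve (Zwietering form): log increase
    vs time t, with asymptote A, max rate mu, and lag time lam."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# Synthetic log(N/N0) observations: true A=6, mu=0.5, lam=5, small noise.
t = np.array([0, 4, 8, 12, 16, 20, 24, 32, 40], dtype=float)
logN = gompertz(t, 6.0, 0.5, 5.0) \
       + 0.05 * np.random.default_rng(2).normal(size=t.size)

popt, _ = curve_fit(gompertz, t, logN, p0=[5.0, 0.3, 3.0])
print(popt)
```

In the two-step procedure, fits like this would be repeated per experimental condition and the resulting (A, mu, lam) regressed on the conditions; the global regression instead substitutes the secondary model into `gompertz` and fits all counts at once.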
Directory of Open Access Journals (Sweden)
Claudia Barroso Krause
2012-06-01
The need to produce more sustainable buildings has been influencing design decisions all over the world. In Brazil, it is therefore imperative to develop strategies and methods to aid decision making during the design process, focused on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design for a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria), and assistance to the members of the jury in the judging phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® certification (Haute Qualité Environnementale) and the method suggested by the ADDENDA® architecture office. The success of this experience points to the possibility of future application in similar cases.
Energy Technology Data Exchange (ETDEWEB)
Yao, Chih-Kai [Department of Materials Science and Engineering, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China); Liao, Jiunn-Der, E-mail: jdliao@mail.ncku.edu.tw [Department of Materials Science and Engineering, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China); Center of Micro/Nano Science and Technology, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China); Chung, Chia-Wei; Sung, Wei-I. [Department of Materials Science and Engineering, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China); Chang, Nai-Jen [Institute of Biomedical Engineering, National Cheng Kung University, No. 1, University Road, Tainan 70101, Taiwan (China)
2012-12-01
Highlights: • Polymeric scaffolds, made from chitosan-based films fixed by a chemical (citrate) or natural (genipin) method, were developed. • Nano-indentation with a constant harmonic frequency was applied to porous scaffolds to explore their surface mechanics. • The relationship between surface mechanical properties and cell-surface interactions of scaffold materials was demonstrated. • Porous scaffolds cross-linked by genipin showed adequate cell affinity, non-toxicity, and suitable mechanical properties. - Abstract: Porous chitosan scaffolds are used for tissue engineering and drug delivery, but chitosan is limited as a scaffold material by its mechanical weakness, which restrains cell adhesion on the surface. In this study, a chemical reagent (citrate) and a natural reagent (genipin) are used as cross-linkers for the formation of chitosan-based films. A nanoindentation technique with a continuous stiffness measurement system is applied to the porous scaffold surface to examine its characteristic modulus and nanohardness. The characteristic modulus of a genipin-cross-linked chitosan surface is ≈2.325 GPa, which is significantly higher than that of an uncross-linked one (≈1.292 GPa). The cell-scaffold surface interaction is assessed. The cell morphology and the results of an MTS assay of 3T3 fibroblast cells on a genipin-cross-linked chitosan surface indicate that the enhancement of mechanical properties induced cell adhesion and proliferation on the modified porous scaffold surface. The pore size and mechanical properties of porous chitosan film can be tuned for specific applications such as tissue regeneration.
Energy Technology Data Exchange (ETDEWEB)
Beedgen, R.
1988-03-01
The computer program PROSA (PROgram for Statistical Analysis of near-real-time accountancy data) was developed as a tool to apply statistical test procedures to a sequence of materials balance results for detecting losses of material. First applications of PROSA to model facility data and real plant data showed that PROSA is also usable as a tool for process or measurement control. To deepen the experience for the application of PROSA to real data of bulk-handling facilities, we applied it to uranium data of the Allied General Nuclear Services miniruns, where accountancy data were collected on a near-real-time basis. Minirun 6 especially was considered, and the pulsed columns were chosen as materials balance area. The structure of the measurement models for flow sheet data and actual operation data are compared, and methods are studied to reduce the error for inventory measurements of the columns.
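PROSA's actual test battery is not reproduced here; a standard sequential test of the same family, applied to a materials-balance sequence to detect a loss, is the one-sided CUSUM (all numbers below are invented):

```python
import numpy as np

def cusum(mb, sigma, k=0.5, h=5.0):
    """One-sided CUSUM on a sequence of materials-balance results.
    mb: balance results (e.g. kg), sigma: measurement std dev,
    k: reference value, h: decision threshold (both in sigma units).
    Returns the statistic sequence and the first alarm index (or None)."""
    z = np.asarray(mb) / sigma           # standardize the balances
    s = np.zeros(len(z))
    for i, zi in enumerate(z):
        s[i] = max(0.0, (s[i-1] if i else 0.0) + zi - k)
    alarm = int(np.argmax(s > h)) if np.any(s > h) else None
    return s, alarm

# Hypothetical balance sequence: a sustained loss begins at index 3.
mb = np.array([0.1, -0.2, 0.0, 1.5, 1.8, 2.0, 1.7, 1.9])
s, alarm = cusum(mb, sigma=0.5)
print(alarm)
```

The statistic accumulates only sustained positive deviations, so isolated measurement noise is absorbed while a persistent loss pattern, like the one starting at index 3, quickly crosses the threshold.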
Energy Technology Data Exchange (ETDEWEB)
Kim, Kwang Hoon; Kim, Kwan Soo; Park, Soon Yeon [Greenpia Technology Inc., Yeojoo (Korea, Republic of); Lee, Young Keun [Korea Atomic Energy Research Institute, Jeongeup (Korea, Republic of); Yook, Hong Sun [Chungnam National University, Daejeon (Korea, Republic of)
2009-06-15
Recently, with new radiation technology being developed and used in advanced industries, business opportunities in radiation processing have been increasing. For the industrial application of developed products, it is necessary to review the scientific and technical aspects of the standard procedures applied to radiation processes. Standard procedures describe requirements for products manufactured under standard processing conditions. In fields related to the operation control of multi-purpose radiation processing facilities, ISO 11137 and Codex STAN 106 are well-known standards adopted as national standards in advanced countries. ISO 11137 supplies criteria for the validation and routine control of the radiation sterilization of medical devices, including the variability and uncertainty of dosimetry systems. Korean national standards on food irradiation differ significantly from Codex STAN 106 in areas such as labelling. Therefore, prior to the implementation of labelling of irradiated foods starting from the year 2010, it is necessary to revise the inconsistent labelling rules to the level of the international standard, in order to promote and reinforce competitiveness in industries using radiation processing technology.
Energy Technology Data Exchange (ETDEWEB)
Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2015-10-15
Full text: Medical radiology offers great benefit to patients. However, although specific high-dose procedures, such as fluoroscopy, interventional radiology, and computed tomography (CT), make up a small percentage of imaging procedures, they contribute significantly to the population dose, and patients may suffer tissue damage. The probability of deterministic effects depends on the type of procedure performed, the exposure time, and the dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduced entrance skin dose. In this work, Bi{sub 2}O{sub 3} and ZrO{sub 2}:8 % Y{sub 2}O{sub 3} composites were obtained by mixing them into a P(VDF-TrFe) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy of RQR5 beam quality (70 kV X-ray beam). Under the same conditions, irradiated XR-QA2 films were stored and scanned in order to obtain a more reliable result. The attenuation factors, evaluated with the XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)
Energy Technology Data Exchange (ETDEWEB)
Sieres, Jaime; Fernandez-Seara, Jose [University of Vigo, Area de Maquinas y Motores Termicos, E.T.S. de Ingenieros Industriales, Vigo (Spain)
2008-08-15
The ammonia purification process is critical in ammonia-water absorption refrigeration systems. In this paper, a detailed and a simplified analytical model are presented to characterize the performance of the ammonia rectification process in packed columns. The detailed model is based on mass and energy balances and simultaneous heat and mass transfer equations. The simplified model is derived and compared with the detailed model. The range of applicability of the simplified model is determined. A calculation procedure based on the simplified model is developed to determine the volumetric mass transfer coefficients in the vapour phase from experimental data. Finally, the proposed model and other simple calculation methods found in the general literature are compared. (orig.)
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Energy Technology Data Exchange (ETDEWEB)
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations, as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to those from a full DNS for a similar flow reported in the literature. The expected symmetry is produced in the final results of the DMA method. The results obtained indicate that DMA holds significant potential for accurately computing turbulent flow without modeling for practical applications.
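The two averaging operations described above can be illustrated with a toy one-dimensional sketch (the field, grid sizes, and the form of the coupling term are assumptions for illustration, not the paper's actual implementation):

```python
import numpy as np

def running_time_average(u_new, u_avg, n):
    """Update a running time average with the latest snapshot (step n, 1-based)."""
    return u_avg + (u_new - u_avg) / n

def volume_average(u, factor):
    """Average a 1D field onto a grid coarser by `factor`."""
    return u.reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(0)
nx, steps, factor = 64, 100, 4

# Short "DNS-like" phase: accumulate running time averages of u and u*u.
u_avg = np.zeros(nx)
uu_avg = np.zeros(nx)
for n in range(1, steps + 1):
    u = np.sin(np.linspace(0, 2 * np.pi, nx)) + 0.1 * rng.standard_normal(nx)
    u_avg = running_time_average(u, u_avg, n)
    uu_avg = running_time_average(u * u, uu_avg, n)

# Volume-average onto the coarser grid; the residual correlation
# <u u> - <u><u> acts as a source term at the next coarser scale.
U = volume_average(u_avg, factor)
UU = volume_average(uu_avg, factor)
correlation_source = UU - U * U   # coupling between the two adjacent scales
```

In this toy the residual `correlation_source` is the quantity that couples adjacent scales; it is computed directly from the fine-scale field rather than from a turbulence model, mirroring the no-modeling claim of the abstract.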
2010-04-01
... information about Job Corps students and program activities? 670.965 Section 670.965 Employees' Benefits... information about Job Corps students and program activities? (a) The Secretary develops procedures to respond to requests for information or records or other necessary disclosures pertaining to students. (b)...
Burgerhof, Johannes G M; Vasluian, Ecaterina; Dijkstra, Pieter U; Bongers, Raoul M; van der Sluis, Corry K
2016-01-01
STUDY DESIGN: Cross-sectional. INTRODUCTION: Southampton Hand Assessment Procedure (SHAP) provides function scores for hand grips (prehensile patterns) and an overall score, the index of function (IOF). The underlying equations of SHAP are not publicly available, which induces opacity. Furthermore,
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics belong to the most revolutionary ones. Our work aims at setting up new techniques for detection, 3D delineation, and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems making tumor resection or treatment easier with the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners so they can share the same 3D reconstructed patient and interact on the same patient, virtually before the intervention and for real during the surgical procedure, thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility to improve the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
Wittenberg, P; Sever, K; Knoth, S; Sahin, N; Bondarenko, J
2013-01-01
Due to substantial progress made in road safety in the last ten years, the European Union (EU) renewed the ambitious agreement of halving the number of persons killed on the roads within the next decade. In this paper we develop a method that aims at finding an optimal target for each nation, in terms of being as achievable as possible, and with the cumulative EU target being reached. Targets, an important component of road safety policy, are given either as a reduction rate or as an absolute number of road traffic deaths. Determination of these quantitative road safety targets (QRST) is done by a top-down approach, formalized in a multi-stage adjustment procedure. Different QRST are derived under consideration of recent research. The paper presents a method to break the national target further down to regional targets in the case of the German Federal States. Generalized linear models are fitted to data in the period 1991-2010. Our model selection procedure chooses various models for the EU and solely log-linear models for the German Federal States. If the proposed targets for the EU Member States are attained, the sum of fatalities should not exceed the total value of 15,465 per year by 2020. Both the mean level and the range of mortality rates within the EU could be lowered from 28-113 per million inhabitants in 2010 to 17-41 per million inhabitants in 2020. This study provides an alternative to the determination of safety targets by political commitments only, taking the history of road fatality trends and population into consideration.
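A log-linear trend of the kind selected for the German Federal States can be sketched minimally as follows (the fatality series below is synthetic and roughly halves over the period; it is not the EU data used in the paper):

```python
import numpy as np

# Illustrative annual road-fatality counts, 1991-2010 (synthetic).
years = np.arange(1991, 2011)
deaths = np.round(60000 * np.exp(-0.035 * (years - 1991))).astype(int)

# Log-linear model: log(deaths) = a + b * year, fitted by least squares.
b, a = np.polyfit(years, np.log(deaths), 1)

def project(year):
    """Extrapolate the fitted log-linear trend to a target year."""
    return float(np.exp(a + b * year))

projected_2020 = project(2020)
annual_decline = 1.0 - np.exp(b)   # implied constant yearly reduction rate
```

Extrapolating such a fitted trend to 2020 is one simple way a quantitative target could be anchored in the historical series rather than in political commitment alone.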
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Energy Technology Data Exchange (ETDEWEB)
Poirier, M.R.
2001-06-04
This report addresses fundamentals of flocculation processes, shedding light on why WSRC researchers have not been able to report the discovery of a successful flocculant and acceptable filtration rates. It also underscores the importance of applying an optimized flocculation-testing regime, which has not been adopted by these researchers. The final part of the report proposes a research scheme that should lead to a successful choice of flocculants, filtration aids (surfactants), and a filtration regime, as well as recommendations for work that should be carried out to make up for the deficiencies of the limited WSRC work, where a better performance should be the outcome.
Directory of Open Access Journals (Sweden)
Hahn Sabine
2010-11-01
Background: Standardised translation and cross-cultural adaptation (TCCA) procedures are vital to describe language translation and cultural adaptation, and to evaluate quality factors of transformed outcome measures. No TCCA procedure for objectively-assessed outcome (OAO) measures exists. Furthermore, no official German version of the Canadian Chedoke Arm and Hand Activity Inventory (CAHAI) is available. Methods: An eight-step TCCA procedure for OAO measures (TCCA-OAO) was developed, based on the existing TCCA procedure for patient-reported outcomes. The TCCA-OAO procedure was applied to develop a German version of the CAHAI (CAHAI-G). Inter-rater reliability of the CAHAI-G was determined through video rating. Validity of the CAHAI-G was assessed using the Chedoke-McMaster Stroke Assessment (CMSA). All ratings were performed by trained, independent raters. In a cross-sectional study, patients were tested within 31 hours after the initial CAHAI-G scoring for their motor function level, using the arm and hand subscales of the CMSA. Inpatients and outpatients of the occupational therapy department who had experienced a cerebrovascular accident or an intracerebral haemorrhage were included. Results: Performance of 23 patients (mean age 69.4, SD 12.9; six females; mean time since stroke onset 1.5 years, SD 2.5 years) was assessed. High inter-rater reliability was found, with ICCs for the 4 CAHAI-G versions (13, 9, 8, 7 items) ranging between r = 0.96 and r = 0.99 (p …). Conclusions: The TCCA-OAO procedure was validated regarding its feasibility and applicability for objectively-assessed outcome measures. The resulting German CAHAI can be used as a valid and reliable assessment of bilateral upper limb performance in ADL in patients after stroke.
Energy Technology Data Exchange (ETDEWEB)
Baioco, Juliana Souza; Seckler, Carolina dos Santos; Silva, Karinna Freitas da; Jacob, Breno Pinheiro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Metodos Computacionais e Sistemas Offshore; Silvestre, Jose Roberto; Soares, Antonio Claudio; Freitas, Sergio Murilo Santos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas
2008-07-01
The perforation process is an important step in well construction. It provides contact between the reservoir rock and the well, allowing oil production. The procedure consists in using explosive charges to bore a hole through the casing and into the rock, so that the reservoir fluid can flow to the well. Therefore, the right choice of both the gun and the charge type is extremely important, since many factors influence the process and affect productivity, such as shot density, penetration depth, hole diameter, etc. The objective of this paper is to present the results of a parametric study to evaluate the influence of some parameters related to the explosive charges on well productivity, since there are many types of charges with different properties, which provide specific characteristics to the perforated area. For that purpose, a commercial program is used, which allows the simulation of the flow problem, along with a finite element mesh generator that uses a pre-processor and a program that enables the construction of reservoir, well, and perforation models. It can be observed that the penetration depth has a greater influence than the hole diameter, making it an important factor when choosing the charge to be used in the project. (author)
Energy Technology Data Exchange (ETDEWEB)
Crapse, K.; Cozzi, A.; Crawford, C.; Jurgensen, A.
2006-09-30
In order to assess the effect of extended curing times at elevated temperatures on saltstone containing Tank 48H waste, saltstone samples prepared as a part of a separate study were analyzed for benzene using a modification of the United States Environmental Protection Agency (USEPA) method 1311 Toxicity Characteristic Leaching Procedure (TCLP). To carry out TCLP for volatile organic analytes (VOA), such as benzene, in the Savannah River National Laboratory (SRNL) shielded cells (SC), a modified TCLP Zero-Headspace Extractor (ZHE) was developed. The modified method was demonstrated to be acceptable in a side by side comparison with an EPA recommended ZHE using nonradioactive saltstone containing tetraphenylborate (TPB). TCLP results for all saltstone samples tested containing TPB (both simulant and actual Tank 48H waste) were below the regulatory limit for benzene (0.5 mg/L). In general, higher curing temperatures corresponded to higher concentrations of benzene in TCLP extract. The TCLP performed on the simulant samples cured under the most extreme conditions (3000 mg/L TPB in salt and cured at 95 C for at least 144 days) resulted in benzene values that were greater than half the regulatory limit. Taking into account that benzene in TCLP extract was measured on the same order of magnitude as the regulatory limit, that these experimental conditions may not be representative of actual curing profiles found in the saltstone vault and that there is significant uncertainty associated with the precision of the method, it is recommended that to increase confidence in TCLP results for benzene, the maximum curing temperature of saltstone be less than 95 C. At this time, no further benzene TCLP testing is warranted. Additional verification would be recommended, however, should future processing strategies result in significant changes to salt waste composition in saltstone as factors beyond the scope of this limited study may influence the decomposition of TPB in saltstone.
Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S
2017-03-01
Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways for parents of children with autism spectrum disorder to improve their children's social skills. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum disorder children; (2) to describe a low-cost parental training intervention; and (3) to assess participants' compliance. This is a descriptive study of a clinical trial for autism spectrum disorder children. The parental training intervention was delivered over 22 weeks based on video modeling. Parents with at least 8 years of schooling and an autism spectrum disorder child between 3 and 6 years old with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 as the intervention and 33 as controls. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good 32.4%, reasonable 38.2%, low 5.9%, and 23.5% with no compliance. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.
A new interpretive procedure for whole rock U-Pb Systems applied to the Vredefort crustal profile
Welke, H.; Nicolaysen, L. O.
1981-11-01
Granulite grade Precambrian gneisses have usually undergone at least one period of strong U depletion. Whole rock U-Pb isotope studies can determine the time(s) of the severe depletion, and this work attempts to place such studies on a more rigorous footing. Two-stage U-Pb systems can be described in terms of one major, episodic differentiation into rocks with varying U/Pb ratios, while three-stage systems can be described by two such distinct episodes. Most of the Precambrian granulites that have been isotopically analyzed have histories too complex to be described as two-stage systems. However, it is demonstrated here that U-Pb data on whole rock suites can yield the complete U-Pb chemical history of a three-stage system (in terms of U/Pb ratios). For a suite of granulites, present-day 207Pb/204Pb and 206Pb/204Pb ratios and element concentration data allow these ratios to be calculated at a number of specific past times and plotted as an array. The degree of scatter in each of these `past arrays' is graphed as a function of time. The point of least scatter denotes the age of the end of stage 2 in the history of the system. The array slope and the dating of the end of stage 2 also permit the beginning of stage 2 to be calculated. All other parameters in the system (U and Pb concentrations, Pb isotopic ratios) can now be determined for each individual rock throughout its history. The new interpretive method also distinguishes sensitively among various kinds of uranium fractionation which may have operated during the differentiation episodes. It is applied here to uranium-depleted granulites in the deeper part of the Vredefort crustal profile. The times of the two fractionating episodes are calculated at ˜3860 and ˜2760 m.y., respectively. The Vredefort 3070 m.y. event, when geochemical systems in the upper half of the crystalline basement became permanently closed, evidently had little significance for the lower half of the crystalline basement. Some fundamental
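The "past array" construction described in this abstract can be sketched numerically. The decay constants and the 238U/235U ratio below are standard values, but the sample suite is synthetic, invented purely to illustrate the scatter-minimization idea (it is not the Vredefort data):

```python
import numpy as np

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year
LAMBDA_235 = 9.8485e-10   # decay constant of 235U, per year
U238_U235 = 137.88        # present-day 238U/235U ratio

def past_ratios(pb206_204, pb207_204, mu, t):
    """Back-calculate 206Pb/204Pb and 207Pb/204Pb at time t (years ago)
    from present-day ratios and mu = present-day 238U/204Pb."""
    r206 = pb206_204 - mu * (np.exp(LAMBDA_238 * t) - 1)
    r207 = pb207_204 - (mu / U238_U235) * (np.exp(LAMBDA_235 * t) - 1)
    return r206, r207

def array_scatter(samples, t):
    """RMS scatter of the 'past array' about its best-fit line at time t;
    the minimum over t marks the end of stage 2."""
    x, y = zip(*(past_ratios(p6, p7, mu, t) for p6, p7, mu in samples))
    slope, intercept = np.polyfit(x, y, 1)
    resid = np.array(y) - (slope * np.array(x) + intercept)
    return float(np.sqrt(np.mean(resid ** 2)))

# Synthetic suite: rocks colinear at t_e = 2.76 Gyr, then evolving with
# distinct mu values (illustrative numbers only).
t_e = 2.76e9
xs = np.array([13.0, 14.5, 16.0, 18.0])
ys = 14.0 + 0.9 * (xs - 13.0)            # a line at the end of stage 2
mus = np.array([2.0, 9.0, 4.0, 12.0])    # stage-3 238U/204Pb, uncorrelated with xs
samples = [(x + m * (np.exp(LAMBDA_238 * t_e) - 1),
            y + (m / U238_U235) * (np.exp(LAMBDA_235 * t_e) - 1),
            m) for x, y, m in zip(xs, ys, mus)]
```

For this synthetic suite the scatter of the past array vanishes at the true event time and grows away from it, which is the signature the interpretive method exploits.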
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
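The barycenter-of-quaternions approach the article examines can be sketched as follows (a minimal illustration, not the author's code; the sign-alignment step and the test rotations are assumptions of the sketch):

```python
import numpy as np

def quaternion_barycenter(quats):
    """Approximate mean rotation: sign-align unit quaternions to the first,
    take their arithmetic mean (the barycenter), and renormalize."""
    q = np.asarray(quats, dtype=float)
    # q and -q encode the same rotation; flip signs for a consistent hemisphere.
    signs = np.where(q @ q[0] < 0, -1.0, 1.0)
    mean = (q * signs[:, None]).mean(axis=0)
    return mean / np.linalg.norm(mean)

# Small rotations about the z-axis: angle t -> [cos(t/2), 0, 0, sin(t/2)].
angles = np.deg2rad([8.0, 10.0, 12.0])
quats = [[np.cos(t / 2), 0.0, 0.0, np.sin(t / 2)] for t in angles]

q_mean = quaternion_barycenter(quats)
mean_angle = np.rad2deg(2 * np.arctan2(q_mean[3], q_mean[0]))
```

For tightly clustered rotations, as here, the normalized barycenter lands very close to the true (Riemannian) mean of 10 degrees, which is consistent with the article's point that the barycenter is a good approximation whose corrections are absorbed by least squares estimation.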
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.
Artistico, Daniele; Pinto, Angela Marinilli; Douek, Jill; Black, Justin; Pezzuti, Lina
2013-01-01
The objective of the study was to develop a novel procedure to increase self-efficacy for exercise. Gains in one's ability to resolve day-to-day obstacles for entering an exercise routine were expected to cause an increase in self-efficacy for exercise. Fifty-five sedentary participants (did not exercise regularly for at least 4 months prior to the study) who expressed an intention to exercise in the near future were selected for the study. Participants were randomly assigned to one of three conditions: (1) an Experimental Group in which they received a problem-solving training session to learn new strategies for solving day-to-day obstacles that interfere with exercise, (2) a Control Group with Problem-Solving Training which received a problem-solving training session focused on a typical day-to-day problem unrelated to exercise, or (3) a Control Group which did not receive any problem-solving training. Assessment of obstacles to exercise and perceived self-efficacy for exercise were conducted at baseline; perceived self-efficacy for exercise was reassessed post-intervention (1 week later). No differences in perceived challenges posed by obstacles to exercise or self-efficacy for exercise were observed across groups at baseline. The Experimental Group reported greater improvement in self-efficacy for exercise compared to the Control Group with Training and the Control Group. Results of this study suggest that a novel procedure that focuses on removing obstacles to intended planned fitness activities is effective in increasing self-efficacy to engage in exercise among sedentary adults. Implications of these findings for use in applied settings and treatment studies are discussed.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Institute of Scientific and Technical Information of China (English)
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to the underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
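A minimal frequentist model-averaging sketch using smoothed-AIC weights (one common weighting scheme, shown on synthetic data; it is an illustration of the general idea, not a specific estimator surveyed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(50)   # truth is linear

def fit_poly(deg):
    """Least-squares polynomial fit; return (predictions, AIC)."""
    coef = np.polyfit(x, y, deg)
    pred = np.polyval(coef, x)
    rss = np.sum((y - pred) ** 2)
    k = deg + 1
    aic = len(x) * np.log(rss / len(x)) + 2 * k
    return pred, aic

# Candidate models: polynomial degrees 1, 2, 3.
preds, aics = zip(*(fit_poly(d) for d in (1, 2, 3)))
aics = np.array(aics)

# Smooth-AIC weights: instead of committing to the single best model,
# average predictions with weights proportional to exp(-AIC/2).
w = np.exp(-(aics - aics.min()) / 2)
w /= w.sum()
y_avg = sum(wi * p for wi, p in zip(w, preds))
```

Selecting only the minimum-AIC model and discarding the rest is exactly the "selection then estimation" procedure the abstract criticizes; the weighted combination keeps the selection uncertainty in the final estimate.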
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...
Turner, Gren; Rawlins, Barry; Wragg, Joanna; Lark, Murray
2014-05-01
Aggregate stability is an important physical indicator of soil quality and influences the potential for erosive losses from the landscape, so methods are required to measure it rapidly and cost-effectively. Previously we demonstrated a novel method for quantifying the stability of soil aggregates using a laser granulometer (Rawlins et al., 2012). We have developed our method further to mimic field conditions more closely by incorporating a procedure for pre-wetting aggregates (for 30 minutes on a filter paper) prior to applying the test. The first measurement of particle-size distribution is made on the water stable aggregates after these have been added to circulating water (aggregate size range 1000 to 2000 µm). The second measurement is made on the disaggregated material after the circulating aggregates have been disrupted with ultrasound (sonication). We then compute the difference between the mean weight diameters (MWD) of these two size distributions; we refer to this value as the disaggregation reduction (DR; µm). Soils with more stable aggregates, which are resistant to both slaking and mechanical breakdown by the hydrodynamic forces during circulation, have larger values of DR. We made repeated analyses of DR using an aggregate reference material (RM; a paleosol with well-characterised disaggregation properties) and used this throughout our analyses to demonstrate our approach was reproducible. We applied our modified technique - and also the previous technique in which dry aggregates were used - to a set of 60 topsoil samples (depth 0-15 cm) from cultivated land across a large region (10 000 km2) of eastern England. We wished to investigate: (i) any differences in aggregate stability (DR measurements) using dry or pre-wet aggregates, and (ii) the dominant controls on the stability of aggregates in water using wet aggregates, including variations in mineralogy and soil organic carbon (SOC) content, and any interaction between them. The sixty soil
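The disaggregation reduction statistic can be sketched directly from its definition as a difference of mean weight diameters (the size classes and mass fractions below are invented for illustration, not measured values from the study):

```python
import numpy as np

def mean_weight_diameter(diam_um, mass_frac):
    """Mean weight diameter (MWD) of a particle-size distribution:
    class mid-diameters weighted by normalized mass fraction."""
    mass_frac = np.asarray(mass_frac, dtype=float)
    return float(np.sum(np.asarray(diam_um) * mass_frac / mass_frac.sum()))

# Illustrative size-class mid-diameters (micrometres) and mass fractions
# as a laser granulometer might report them.
mids = [1500, 750, 300, 100, 30]
stable = [0.55, 0.25, 0.12, 0.06, 0.02]     # circulating water-stable aggregates
sonicated = [0.05, 0.15, 0.30, 0.30, 0.20]  # after ultrasonic disruption

# Disaggregation reduction DR (micrometres): MWD before minus after sonication.
dr = mean_weight_diameter(mids, stable) - mean_weight_diameter(mids, sonicated)
```

A larger DR means more of the material was held in aggregates that survived slaking and hydrodynamic circulation and only broke down under ultrasound, i.e. a more stable soil.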
Energy Technology Data Exchange (ETDEWEB)
Fouz, M. C.; Puerta Pelayo, J.
2004-07-01
In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the most common problems and possible solutions is given. This document can be considered a chamber test handbook for beginners. (Author) 3 refs.
Negative Average Preference Utilitarianism
Directory of Open Access Journals (Sweden)
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Some applications of stochastic averaging method for quasi Hamiltonian systems in physics
Institute of Scientific and Technical Information of China (English)
DENG MaoLin; ZHU WeiQiu
2009-01-01
Many physical systems can be modeled as quasi-Hamiltonian systems, and the stochastic averaging method for quasi-Hamiltonian systems can be applied to yield reasonable approximate response statistics. In the present paper, the basic idea and procedure of the stochastic averaging method for quasi-Hamiltonian systems are briefly introduced. The applications of the stochastic averaging method in studying the dynamics of active Brownian particles, the reaction rate theory, the dynamics of breathing and denaturation of DNA, and the Fermi resonance and its effect on the mean transition time are reviewed.
Quantum Averaging of Squeezed States of Light
DEFF Research Database (Denmark)
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Mena Velasco, Sonia
2016-01-01
This dissertation deals with the analysis of the translation of two books originally written by J. K. Rowling. For this purpose, we have studied translation procedures and translation methods, as well as redundancy and cohesion. We have taken into account experts in translation such as Newmark, Amparo Hurtado Albir, and Lucía Molina. The aim of this paper is to take a closer look at translation and, consequently, to understand the process involved in this subject.
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-12-01
On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.
Discrete Averaging Relations for Micro to Macro Transition
Liu, Chenchen; Reina, Celia
2016-05-01
The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
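As a minimal sketch of the discrete averaging relation (with made-up data, not the paper's FE² implementation), the volume-weighted average of piecewise-constant element stresses can be computed as:

```python
import numpy as np

# Hypothetical mini-mesh: element volumes and one stress tensor per element.
# All values are illustrative, not from the paper.
rng = np.random.default_rng(0)
n_elem = 10
elem_vol = rng.uniform(0.5, 1.5, n_elem)          # element volumes |Omega_e|
sigma = rng.normal(size=(n_elem, 3, 3))           # constant stress per element
sigma = 0.5 * (sigma + sigma.transpose(0, 2, 1))  # symmetrize each tensor

# Discrete Hill average: <sigma> = (1/|Omega|) * sum_e |Omega_e| * sigma_e,
# well defined even when the stress field is discontinuous across elements.
total_vol = elem_vol.sum()
sigma_bar = np.einsum('e,eij->ij', elem_vol, sigma) / total_vol
print(sigma_bar)
```

The same weighting carries over to Gauss-point stresses by using quadrature weights times Jacobian determinants in place of the element volumes.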
The averaging of nonlocal Hamiltonian structures in Whitham's method
Directory of Open Access Journals (Sweden)
Andrei Ya. Maltsev
2002-01-01
We consider the m-phase Whitham averaging method and propose a procedure for averaging nonlocal Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and gives a Poisson bracket of Ferapontov type for Whitham's system. The method can be considered as a generalization of the Dubrovin-Novikov procedure for local field-theoretical brackets.
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2008-01-01
A Bayesian model averaging procedure is presented that makes use of a finite mixture of many model structures within the class of vector autoregressive (VAR) processes. It is applied to two empirical issues. First, stability of the Great Ratios in U.S. macro-economic time series is inves
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory does not provide information about the periodic solution associated to a non-simple zero. Here we provide sufficient conditions so that the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result for studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
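A minimal numerical illustration of first-order averaging (on a toy system of our own choosing, not from the paper): compute the averaged function by quadrature over one period, then locate its simple zero:

```python
import numpy as np

# Toy periodic system x' = eps * F(t, x) with F(t, x) = (sin t)^2 * (1 - x).
# First-order averaging: f(x) = (1/T) * integral_0^T F(t, x) dt with T = 2*pi;
# here f(x) = (1 - x)/2, with a simple zero at x = 1 (illustrative choice).

def F(t, x):
    return np.sin(t)**2 * (1.0 - x)

def averaged(x, n=4096):
    # mean over a uniform grid covering one full period
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(F(t, x))

def bisect(g, a, b, tol=1e-10):
    # plain bisection for a sign-changing function
    fa = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = g(m)
        if abs(fm) < tol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

z = bisect(averaged, 0.0, 2.0)
print(z)  # close to 1: the averaged function predicts a periodic orbit near x = 1
```

A simple zero means the derivative of the averaged function does not vanish there, which is exactly the condition the classical theory needs and the paper relaxes.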
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin
Sampling Based Average Classifier Fusion
Directory of Open Access Journals (Sweden)
Jian Hou
2014-01-01
Although many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little is done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
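A toy sketch of average fusion and a simple sampling variant (the synthetic soft labels and the stand-in selection rule are both our assumptions, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_classes, n_clf = 200, 3, 5
labels = rng.integers(0, n_classes, n_samples)

# Synthetic soft labels: each classifier outputs noisy class posteriors;
# 'noise' controls classifier quality (illustrative values).
noise = np.array([0.5, 0.8, 1.2, 2.0, 3.0])
probs = np.empty((n_clf, n_samples, n_classes))
for k in range(n_clf):
    logits = rng.normal(0, noise[k], (n_samples, n_classes))
    logits[np.arange(n_samples), labels] += 2.0   # push mass toward the true class
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs[k] = e / e.sum(axis=1, keepdims=True)   # softmax posteriors

def accuracy(p):
    return float(np.mean(p.argmax(axis=1) == labels))

# Baseline: plain average fusion over all classifiers.
acc_all = accuracy(probs.mean(axis=0))

# Sampling-based fusion (a simple stand-in for the paper's idea): average
# only the classifiers whose individual accuracy is above the median.
ind = np.array([accuracy(probs[k]) for k in range(n_clf)])
sel = ind >= np.median(ind)
acc_sampled = accuracy(probs[sel].mean(axis=0))
print(acc_all, acc_sampled)
```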
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation called the symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be partly overcome by the introduction of a weighting factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighting factor. In contrast, this problem does not occur if a symmetric Euler orientation representation is used as a matter of principle, while the result of the averaging for both types of orientation representations converges with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
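The pitfall of naive Euler-angle averaging can be shown numerically. For a uniform distribution over SO(3), the orientational average of (R e_z · e_z)² is exactly 1/3; uniform quaternion sampling (Shoemake's method) reproduces this, while sampling Euler angles uniformly does not. This is our own illustration, not the paper's reflectance calculation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000

# Uniform random rotations via Shoemake's quaternion method.
u1, u2, u3 = rng.random(n), rng.random(n), rng.random(n)
qx = np.sqrt(1 - u1) * np.sin(2 * np.pi * u2)
qy = np.sqrt(1 - u1) * np.cos(2 * np.pi * u2)
# The (3,3) entry of the rotation matrix of a unit quaternion (x, y, z, w)
# is 1 - 2*(x^2 + y^2), so only qx and qy are needed here.
r33 = 1 - 2 * (qx**2 + qy**2)
avg_quat = np.mean(r33**2)          # converges to the exact value 1/3

# Naive scheme: Euler angles sampled uniformly. A uniform beta on [0, pi]
# biases orientations toward the poles and yields 1/2 instead of 1/3.
beta = rng.uniform(0, np.pi, n)
avg_euler = np.mean(np.cos(beta)**2)

print(avg_quat, avg_euler)  # ~0.333 vs ~0.5
```

The naive average could be repaired with a sin(beta) weighting factor, which is exactly the kind of correction the paper argues becomes unnecessary with symmetric orientation representations.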
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of co
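A minimal average-consensus iteration illustrating the basic mechanism the paper builds on (the quantization and delay aspects of the paper are omitted in this sketch):

```python
import numpy as np

# Average consensus on a ring of 5 agents: x_{k+1} = W x_k with a doubly
# stochastic weight matrix W, so all states converge to the initial average.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5                 # self weight
    W[i, (i - 1) % n] = 0.25      # left neighbour
    W[i, (i + 1) % n] = 0.25      # right neighbour

x = np.array([1.0, 5.0, 2.0, 8.0, 4.0])
target = x.mean()                 # the invariant average, 4.0
for _ in range(200):
    x = W @ x
print(x)  # every entry close to 4.0
```

Because W is doubly stochastic, the sum (and hence the average) of the states is preserved at every step; this invariance is what quantization and delays threaten in the paper's setting.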
Institute of Scientific and Technical Information of China (English)
张勋; 李凌艳; 刘红云; 孙研
2013-01-01
Matrix sampling is a useful technique widely used in large-scale educational assessments. In an assessment with a matrix sampling design, each examinee takes one of multiple booklets with partial items. A critical problem in detecting differential item functioning (DIF) in this scenario has gained a lot of attention in recent years: it is not appropriate to take the observed total score obtained from an individual booklet as the matching variable in detecting DIF. Therefore, traditional detection methods, such as Mantel-Haenszel (MH), SIBTEST, and Logistic Regression (LR), are not suitable. IRT_Δb might be an alternative due to its ability to provide a valid matching variable. However, the DIF classification criterion for IRT_Δb is not yet well established. Thus, the purposes of this study were: 1) to investigate the efficiency and robustness of using ability parameters obtained from an Item Response Theory (IRT) model as the matching variable, compared with using traditional observed raw total scores; 2) to further identify what factors influence the DIF-detection abilities of the two methods; 3) to propose a DIF classification criterion for IRT_Δb. Simulated and empirical data were both employed in this study to explore the robustness and efficiency of the two prevailing DIF detection methods: the IRT_Δb method and the adapted LR method with the estimation of group-level ability based on an IRT model as the matching variable. In the Monte Carlo study, a matrix sampling test was generated, and various experimental conditions were simulated as follows: 1) different proportions of DIF items; 2) different actual examinee ability distributions; 3) different sample sizes; 4) different sizes of DIF. The two DIF detection methods were then applied and the results were compared. In addition, power functions were established in order to derive a DIF classification rule for IRT_Δb based on current rules for LR. In the empirical study, through
Energy Technology Data Exchange (ETDEWEB)
Malone, Dermot E.; Maceneaney, Peter M
2000-12-01
AIM: To compare and contrast interventional radiology (IR) clinical and research practices with the technology assessment and evidence-based medicine (EBM) paradigms and make suggestions for the phased evaluation of new IR procedures. MATERIALS AND METHODS: Course literature of the Association of University Radiologists' 'Basic Technology Assessment for Radiologists' course and the McMaster University Health Information Research Unit's 'How to Teach Evidence-Based Medicine 1999' course were used to identify major publications in each discipline. A computer search was performed to seek other relevant literature. A model of traditional development of IR procedures was developed. Suggestions for the phased evaluation of IR procedures were derived. RESULTS: As in diagnostic radiology, several levels of progressively stronger IR study design can be described and related to EBM 'levels of evidence'. These range from case reports and case series through case-control and cohort studies to randomized controlled trials (RCTs). The major weakness in the existing IR literature is the predominance of small, uncontrolled, case series. Randomized controlled trials are likely to provide the best possible evidence of effectiveness. They are expensive and randomization is sometimes unethical or impractical. Case-control and cohort studies have been under-utilized. Evidence-based medicine indices of benefit and harm have not yet been applied in IR and may have clinical advantages over traditional statistical methods. A literature search (10 years) using MeSH terms 'radiology, interventional' and 'efficacy' yielded 30 papers. Combining 'radiology, interventional' and 'evidence-based medicine' yielded no papers. Comparative searches substituting the term 'diagnostic imaging' for 'radiology, interventional' yielded 4883 and 62 papers, respectively. CONCLUSION: Principles of technology
Gaussian moving averages and semimartingales
DEFF Research Database (Denmark)
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
Institute of Scientific and Technical Information of China (English)
王立军; 刘秀芬; 贾金霞; 陈娜
2012-01-01
Objective: To discuss the effect of applying standard nursing procedures to prevent falls in hospitalized elderly patients. Methods: 4377 elderly patients aged over 65 years, treated from January to December 2011, received standard nursing procedures (observation group); 4025 patients from January to December 2010 who did not receive standard nursing procedures served as the control group. The fall rates of the two groups were compared. Results: The fall rate in the observation group was significantly lower than in the control group (P<0.05). Conclusion: Standard nursing procedures can effectively reduce the fall rate of hospitalized elderly patients.
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and by the aim of applying similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
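A rough sketch of model-free extremum seeking with a stochastic probe signal (the toy map, gains, and step schedule are all illustrative assumptions, not the book's algorithms):

```python
import numpy as np

rng = np.random.default_rng(3)

def y(theta):
    # unknown map with a maximum at theta* = 2 (toy example)
    return -(theta - 2.0)**2

theta, delta = 0.0, 0.1
for k in range(20000):
    eta = rng.normal()                                   # stochastic probe
    # finite-difference gradient estimate along the random probe direction
    g = (y(theta + delta * eta) - y(theta)) / delta * eta
    a = 0.2 / (k + 10)**0.7                              # decaying step size
    theta += a * g                                       # ascend estimated gradient
print(theta)  # drifts toward the maximizer theta* = 2
```

Since E[g] ≈ y'(theta)·E[eta²] = y'(theta), the recursion performs a noisy gradient ascent without ever knowing the map y itself, which is the core idea of model-free extremum seeking.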
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker. The results of a small-scale Monte Carlo experiment are found to be consistent with the theory and indicate in particular that sensitivity with respect to the bandwidth choice can be ameliorated by using the "robust" variance estimator derived from the "small bandwidth" asymptotic framework.
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners (a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]), with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increasing by averaging, analogous to the well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
Asymmetric multifractal detrending moving average analysis in time series of PM2.5 concentration
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Li, Jingming; Zhou, Longfei
2016-09-01
In this paper, we propose the asymmetric multifractal detrending moving average analysis (A-MFDMA) method to explore asymmetric correlations in non-stationary time series. The proposed method is applied to explore the asymmetric correlation of PM2.5 daily average concentration with uptrends or downtrends in China. In addition, shuffling and phase randomization procedures are applied to detect the sources of multifractality. The results show the existence of asymmetric correlations, and these asymmetric correlations are multifractal. Further, the multifractal scaling behavior of the Chinese PM2.5 series is caused not only by long-range correlation but also by the fat-tailed distribution, with the fat-tailed distribution being the major source of multifractality.
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
The Optimal Selection for Restricted Linear Models with Average Estimator
Directory of Open Access Journals (Sweden)
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
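A minimal sketch of weight selection for model averaging, using a Mallows-type criterion as a simple stand-in for the paper's k-GIC (the data-generating process, candidate models, and criterion details are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.1 * x**2 + rng.normal(0, 1.0, n)  # toy DGP

# Two candidate (restricted) linear models: M1 uses [1, x]; M2 uses [1, x, x^2].
X1 = np.column_stack([np.ones(n), x])
X2 = np.column_stack([np.ones(n), x, x**2])

def fit(X):
    # OLS fit; the column count serves as the model's degrees of freedom
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, X.shape[1]

f1, k1 = fit(X1)
f2, k2 = fit(X2)
s2 = np.sum((y - f2)**2) / (n - k2)   # error variance from the largest model

# Mallows-type weight choice: minimize residual SS plus a complexity penalty
# over a grid of weights w on the simplex {w, 1-w}.
grid = np.linspace(0, 1, 101)
crit = [np.sum((y - w * f1 - (1 - w) * f2)**2) + 2 * s2 * (w * k1 + (1 - w) * k2)
        for w in grid]
w_opt = grid[int(np.argmin(crit))]
print(w_opt)  # weight placed on the smaller model M1
```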
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues when considering their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques and give some directions for solving them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
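The iterative-averaging class of algorithms can be sketched with pairwise gossip, whose correctness hinges on the global sum ("mass") staying invariant, exactly the property that message loss or node churn can break:

```python
import numpy as np

# Pairwise gossip averaging: at each step two random nodes replace their
# values by the pair average. The global sum is invariant, so all values
# converge to the initial average (failure modes from the paper aside).
rng = np.random.default_rng(5)
x = np.array([10.0, 0.0, 4.0, 6.0, 0.0])
target = x.mean()                     # 4.0, preserved by every gossip step
for _ in range(2000):
    i, j = rng.choice(len(x), size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])
print(x)  # all entries close to 4.0
```

If a gossip message is lost after one side has updated, the sum invariant is violated and every node converges to a wrong value, which is the kind of dependability issue the paper analyzes.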
High average power supercontinuum sources
Indian Academy of Sciences (India)
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems, with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages obtained by Monte-Carlo sampling.
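For contrast with the analytical approach, a plain Monte-Carlo bootstrap average of a generalization error looks like this (the toy data and the ridge predictor are our assumptions, not the paper's Gaussian-process setting):

```python
import numpy as np

# Monte-Carlo bootstrap average of a test error for ridge regression on toy
# data; the paper computes such averages analytically instead of by sampling.
rng = np.random.default_rng(6)
n, lam = 50, 1e-2
x = rng.uniform(-1, 1, n)
y_true = np.sin(np.pi * x)
y = y_true + rng.normal(0, 0.2, n)
X = np.column_stack([np.ones(n), x, x**2, x**3])   # cubic feature map

def fit_predict(Xtr, ytr, Xte):
    # ridge regression: solve (X'X + lam I) beta = X'y, then predict
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    return Xte @ np.linalg.solve(A, Xtr.T @ ytr)

B, errs = 200, []
for _ in range(B):
    idx = rng.integers(0, n, n)                    # bootstrap resample
    errs.append(np.mean((fit_predict(X[idx], y[idx], X) - y_true)**2))
boot_avg = np.mean(errs)
print(boot_avg)  # bootstrap-averaged generalization error
```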
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
Time-average dynamic speckle interferometry
Vladimirov, A. P.
2014-05-01
For the study of microscopic processes occurring at the structural level in solids and thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which allows these shortcomings to be eliminated. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. The results of an experiment on the high-cycle fatigue of steel and of an experiment to estimate the biological activity of a monolayer of cells cultivated on a transparent substrate are given. It is shown that the method allows one to visualize in real time the accumulation of fatigue damage and to reliably estimate the activity of cells with and without viruses.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
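The trajectory-averaging idea can be sketched on a plain Robbins-Monro recursion (a toy scalar problem of our own choosing, not the SAMCMC setting):

```python
import numpy as np

# Robbins-Monro root finding for h(theta) = E[theta - Z] with Z ~ N(2, 1):
# the root is theta* = 2. The running (Polyak-Ruppert) average of the
# trajectory is typically a more efficient estimator than the last iterate.
rng = np.random.default_rng(7)
theta, avg = 0.0, 0.0
n = 20000
for k in range(1, n + 1):
    z = rng.normal(2.0, 1.0)
    theta -= (1.0 / k**0.7) * (theta - z)   # noisy step toward the root
    avg += (theta - avg) / k                # running average of the trajectory
print(theta, avg)  # both near 2; 'avg' is the trajectory-averaging estimator
```

The slowly decaying step size k^(-0.7) leaves noticeable noise in the last iterate; averaging the whole trajectory smooths that noise out, which is the efficiency gain the paper establishes for SAMCMC.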
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which considers the particles as hard cores with attractive forces, is proposed to explain this correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.
Directory of Open Access Journals (Sweden)
Jan Vagedes
OBJECTIVE: The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. METHODS: Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2-4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥ 1, ≥ 5, ≥ 10, ≥ 15, ≥ 20, ≥ 25, ≥ 30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group as a training set and the other as a test set. RESULTS: Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is D2 = D1 (T2/T1)^c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. CONCLUSION: This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations.
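The conversion formula is straightforward to apply in code; here is a hedged sketch (the exponent value used below is made up for illustration and must be replaced by the paper's fitted values for the chosen threshold and minimal duration):

```python
def convert_desaturation_rate(d1, t1, t2, c):
    """Convert a desaturation rate d1 obtained with averaging time t1 (s)
    to the rate expected for averaging time t2 (s), via D2 = D1 * (T2/T1)**c.
    The exponent c depends on the SpO2 threshold and the minimal desaturation
    duration and must be taken from the paper's fitted values."""
    return d1 * (t2 / t1) ** c

# Illustrative use with a made-up exponent c = -0.8: a rate of 65 events/h
# at a 3 s averaging time corresponds to fewer events at 16 s averaging,
# because longer averaging smooths brief desaturations away.
print(convert_desaturation_rate(65.0, 3.0, 16.0, -0.8))
```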
Usuelli, F G; Montrasio, U Alfieri
2012-06-01
Flexible flatfoot is one of the most common deformities. Arthroereisis procedures are designed to correct this deformity. Among them, the calcaneo-stop is a procedure with both biomechanical and proprioceptive properties. It is designed for pediatric treatment. Results similar to those of the endorthesis procedure are reported. Theoretically, the procedure can be applied to adults if combined with other procedures to obtain a stable plantigrade foot, but medium-term follow-up studies are missing.
Fast Moving Average Recursive Least Mean Square Fit
2016-06-07
Contents fragment: Method; 4.2 Numerical Simulation; 4.3 Speed Comparisons; 4.4 Discussion of Results; Summary and Conclusions; References. Abstract fragment: ...the method of implementation, numerical accuracy, computer simulation procedure, and the result of computing timings between the batch and the... due to reduced computation could make the moving average LMSF procedure competitive for many real-time processing applications.
Spatial averaging infiltration model for layered soil
Institute of Scientific and Technical Information of China (English)
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field
Energy Technology Data Exchange (ETDEWEB)
Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)
1992-01-01
Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45[sup o]N, for fields with radii up to 50 tower height. In addition, the results are expressed in mathematical correlations to facilitate using them in computer applications. A procedure is outlined to use the present results to preliminary layout the heliostat field, and to predict the rated MW[sub th] reflected by the heliostat field during a period of a month, several months, or a year. (author)
Institute of Scientific and Technical Information of China (English)
张争鸣; 曹春兰; 孙承欢; 沈海文
2012-01-01
In order to integrate nursing ethics knowledge with practice as early as possible, we discussed and studied the application of nursing ethics in basic nursing practice, and planned to incorporate nursing ethics knowledge into basic nursing procedures when teaching basic nursing practice at school, so that nursing students can learn to apply nursing ethics knowledge reasonably and flexibly in nursing procedures before entering clinical work, thereby improving their ethical cultivation and the quality of nursing care.
Chen, Chun-Hung; Wu, Ho-Ting; Ke, Kai-Wei
Simulations are often deployed to evaluate proposed mechanisms or algorithms in Mobile Ad Hoc Networks (MANET). In MANET, the impacts of some simulation parameters are noticeable, such as transmission range, data rate, etc. However, the effect of the mobility model has become clear only recently. Random Waypoint (RWP) is one of the most widely applied nodal mobility models in simulations due to its clear procedures and easy employment. However, it exhibits two major problems: decaying average speed and border effect. Both problems lead to overestimating the performance of the employed protocols and applications. Although many recently proposed mobility models are able to reduce or eliminate the above-mentioned problems, the concept of Diverse Average Speed (DAS) has not been introduced. DAS aims to provide different average speeds within the same speed range. In most mobility models, the average speed is decided once the minimum and maximum speeds are set. In this paper, we propose a novel mobility model, named General Ripple Mobility Model (GRMM). GRMM targets to provide a uniform nodal spatial distribution and DAS without decaying average speed. Simulation and analytic results demonstrate the merits of the GRMM model.
Simple Moving Average: A Method of Reporting Evolving Complication Rates.
Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J
2016-09-01
Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). Simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.].
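The simple moving average over a fixed lag of recent procedures can be sketched as follows (a generic illustration, not the authors' code; the lag of 75 comes from the abstract):

```python
def moving_complication_rate(outcomes, lag):
    """Simple moving average of a 0/1 complication indicator over the
    `lag` most recent consecutive procedures. Returns one rate per
    fully covered window, in chronological order."""
    return [sum(outcomes[i - lag:i]) / lag
            for i in range(lag, len(outcomes) + 1)]
```

With lag = 75, the final element tracks the surgeon's current complication rate rather than the career-long average.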
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions, which limits the practical applicability. Here, the derivation of a closed expression of the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
Model uncertainty and Bayesian model averaging in vector autoregressive processes
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2006-01-01
textabstractEconomic forecasts and policy decisions are often informed by empirical analysis based on econometric models. However, inference based upon a single model, when several viable models exist, limits its usefulness. Taking account of model uncertainty, a Bayesian model averaging procedure i
Discontinuous transformations and averaging for vibro-impact analysis
DEFF Research Database (Denmark)
Thomsen, Jon Juel; Fidlin, A.
2004-01-01
Certain vibro-impact problems can be conveniently solved by discontinuous transformations combined with averaging. We briefly outline the background for this, and then focus on illustrating the procedure for specific examples: A self-excited friction oscillator with one- or two-sided stops, and a...
Averaged Null Energy Condition from Causality
Hartman, Thomas; Tajdini, Amirhossein
2016-01-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\int du\, T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\int du\, X_{uuu\cdots u} \geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.
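The building block here is a weighted harmonic average, which is dominated by its smallest entries, so a single strong connection matters more than many weak ones; a minimal sketch (a generic helper, not the paper's full recursive GEN definition):

```python
def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v). Small values
    dominate the result, which is the property exploited when scoring
    'closeness' between nodes in a weighted graph. Toy stand-in for
    the harmonic averaging underlying the Generalized Erdos Number."""
    if len(values) != len(weights):
        raise ValueError("values and weights must align")
    return sum(weights) / sum(w / v for v, w in zip(values, weights))
```

For example, the weighted harmonic mean of 1 and 2 with equal weights is 4/3, closer to the smaller value than the arithmetic mean 1.5.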
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll, J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
2010-01-01
7 CFR § 1209.12 — On average (Agricultural Marketing Service, Marketing Agreements). On average means a rolling average of production or imports during the last two...
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
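The evaporative-demand step can be sketched with the standard Hargreaves formulation the abstract refers to (function names are illustrative; the paper's calibration may differ):

```python
def hargreaves_et0(t_max, t_min, ra):
    """Hargreaves estimate of atmospheric evaporative demand (mm/day)
    from maximum/minimum temperature (deg C) and extraterrestrial solar
    radiation ra (expressed in mm/day of evaporation equivalent):
    ET0 = 0.0023 * ra * (Tmean + 17.8) * sqrt(Tmax - Tmin).
    Coefficients follow the standard Hargreaves formulation."""
    t_mean = 0.5 * (t_max + t_min)
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

def monthly_water_balance(precip_mm, et0_mm):
    """Monthly climatic water balance: precipitation minus atmospheric
    evaporative demand, as computed per 1 km cell in the abstract."""
    return precip_mm - et0_mm
```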
The averaging of non-local Hamiltonian structures in Whitham's method
Maltsev, A Y
1999-01-01
We consider the m-phase Whitham averaging method and propose a procedure for "averaging" non-local Hamiltonian structures. The procedure is based on the existence of a sufficient number of local commuting integrals of the system and yields a Poisson bracket of Ferapontov type for the Whitham system. The method can be considered as a generalization of the Dubrovin-Novikov procedure for local field-theoretical brackets.
Separability criteria with angular and Hilbert space averages
Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia
2016-05-01
The practically useful criteria of separable states ρ = ∑_k w_k ρ_k in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨P(a)P(b)⟩ − ⟨P(a)⟩⟨P(b)⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. When the criterion is applied to the Werner state in two-photon systems, it is shown that the Hilbert space average can judge its inseparability but not the geometrical angular average.
Estimating PIGLOG Demands Using Representative versus Average Expenditure
Hahn, William F.; Taha, Fawzi A.; Davis, Christopher G.
2013-01-01
Economists often use aggregate time series data to estimate consumer demand functions. Some of the popular applied demand systems have a PIGLOG form. In the most general PIGLOG cases the “average” demand for a good is a function of the representative consumer expenditure not the average consumer expenditure. We would need detailed information on each period’s expenditure distribution to calculate the representative expenditure. This information is generally unavailable, so average expenditure...
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2007-01-01
textabstractA Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and e¤ects of permanent s
25 CFR 700.173 - Average net earnings of business or farm.
2010-04-01
25 CFR § 700.173 — Average net earnings of business or farm (Procedures: Moving and Related Expenses, Temporary Emergency Moves). (a) Computing net earnings. For purposes of this subpart, the average annual net earnings...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
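The constrained-simulation route mentioned at the end, average force computed at discrete locations and integrated into a free-energy profile, can be sketched as follows (a toy trapezoidal integration, not the authors' derivation; dF/dξ = −⟨F_ξ⟩):

```python
def free_energy_profile(xi, mean_force):
    """Integrate the negative average force along a generalized
    coordinate to obtain a free-energy profile relative to xi[0]:
    F(xi) = -integral of <F> d(xi'). Trapezoidal rule; `xi` must be
    sorted and aligned with `mean_force` (average force at each point)."""
    f = [0.0]
    for k in range(1, len(xi)):
        dxi = xi[k] - xi[k - 1]
        f.append(f[-1] - 0.5 * (mean_force[k] + mean_force[k - 1]) * dxi)
    return f
```

A constant average force of 2 over xi = [0, 1, 2] gives the linear profile [0, -2, -4], as expected for F = -∫⟨F⟩ dξ.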
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in the MOS differential type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Klein, Jessie W.; Patev, Paul
1998-01-01
Presents three experiments to introduce students to different kinds of chromatography: (1) paper chromatography; (2) gel filtration chromatography; and (3) reverse-phase liquid chromatography. Written in the form of a laboratory manual, explanations of each of the techniques, materials needed, procedures, and a glossary are included. (PVD)
Ramponi, Denise R
2016-01-01
Dental problems are a common complaint in emergency departments in the United States. A wide variety of dental issues are addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment, tooth, and mouth anatomy will help prepare the practitioner to perform these dental procedures.
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-tim...
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), which expresses dimensionality reduction as an average of the subspaces spanned by the data. Because averages ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements ...
The background effective average action approach to quantum gravity
DEFF Research Database (Denmark)
D’Odorico, G.; Codello, A.; Pagani, C.
2016-01-01
of a UV attractive non-Gaussian fixed point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016.
A Favré averaged transition prediction model for hypersonic flows
Institute of Scientific and Technical Information of China (English)
LEE; ChunHian
2010-01-01
Transition prediction is crucial for aerothermodynamic and thermal protection system design of hypersonic vehicles. The compressible form of the laminar kinetic energy equation is derived based on the Favré-average formalism in the present paper. A closure of the equation is deduced and simplified under certain hypotheses and scaling analysis. A laminar-to-turbulent transition prediction procedure is proposed for high Mach number flows based on the modeled Favré-averaged laminar kinetic energy equation, in conjunction with the Favré-averaged Navier-Stokes equations. The proposed model, with and without associated explicit compressibility terms, is then applied to simulate flows over flared cones with a free-stream Mach number of 5.91, and the onset locations of the boundary layer transition under different wall conditions are estimated. The computed onset locations are compared with those obtained by the model based on a compressibility correction deduced from the reference-temperature concept, together with experimental data. It is revealed that the present model gives a more favorable transition prediction for hypersonic flows.
Newhouse, Vernon L
1975-01-01
Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
Institute of Scientific and Technical Information of China (English)
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-widths, and average linear σ-widths of Sobolev classes of the multivariate quantities.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Stochastic averaging of quasi-Hamiltonian systems
Institute of Scientific and Technical Information of China (English)
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.
2010-10-01
Department of Defense, Socioeconomic Programs, Foreign Acquisition — Balance of Payments Program, § 225.7502 Procedures. If the Balance of Payments Program applies to the acquisition, follow the procedures at PGI 225.7502.
Logan, J David
2013-01-01
Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat
Average sampling theorems for shift invariant subspaces
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests are evaluated in a Monte Carlo study and compared
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
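The distinction the abstract draws, averaging the transmission probability itself rather than its logarithm, can be illustrated with a Monte Carlo sketch (the sampling distribution here is arbitrary; the paper instead obtains the average analytically via a recurrence relation):

```python
import math
import random

def average_transmission(sample_t, n=10000, seed=0):
    """Contrast <T>, the arithmetic average of the transmission
    probability, with exp(<ln T>), the 'typical' transmission obtained
    from log-averaging. `sample_t(rng)` draws one random transmission
    probability in (0, 1]; returns (mean_t, typical_t)."""
    rng = random.Random(seed)
    ts = [sample_t(rng) for _ in range(n)]
    mean_t = sum(ts) / n
    typical_t = math.exp(sum(math.log(t) for t in ts) / n)
    return mean_t, typical_t

# For any non-degenerate distribution, <T> exceeds exp(<ln T>)
# (arithmetic mean >= geometric mean), so the two averaging
# conventions genuinely report different quantities:
mean_t, typical_t = average_transmission(lambda rng: rng.uniform(0.1, 1.0))
```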
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for Iair and IAl have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing fu
Manoharan, Asha; Dreisbach, Joseph H.
1988-01-01
Describes some examples of chemical and industrial applications of enzymes. Includes a background, a discussion of structure and reactivity, enzymes as therapeutic agents, enzyme replacement, enzymes used in diagnosis, industrial applications of enzymes, and immobilizing enzymes. Concludes that applied enzymology is an important factor in…
Average quantum dynamics of closed systems over stochastic Hamiltonians
Yu, Li
2011-01-01
We develop a master equation formalism to describe the evolution of the average density matrix of a closed quantum system driven by a stochastic Hamiltonian. The average over random processes generally results in decoherence effects in closed system dynamics, in addition to the usual unitary evolution. We then show that, for an important class of problems in which the Hamiltonian is proportional to a Gaussian random process, the 2nd-order master equation yields exact dynamics. The general formalism is applied to study the examples of a two-level system, two atoms in a stochastic magnetic field and the heating of a trapped ion.
Averaging in Parametrically Excited Systems – A State Space Formulation
Directory of Open Access Journals (Sweden)
Pfau Bastian
2016-01-01
Parametric excitation can lead to instabilities as well as to an improved stability behavior, depending on whether a parametric resonance or anti-resonance is induced. In order to calculate the stability domains and boundaries, the method of averaging is applied. The problem is reformulated in state space representation, which allows a general handling of the averaging method, especially for systems with non-symmetric system matrices. It is highlighted that this approach can enhance the first order approximation significantly. Two example systems are investigated: a generic mechanical system and a flexible rotor in journal bearings with adjustable geometry.
Schiehlen, Werner
2014-01-01
Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model, but it does not adequately describe our "real" Universe.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass fraction averaging. PMID:27446752
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model describes the time averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
Changing mortality and average cohort life expectancy
DEFF Research Database (Denmark)
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We verify the model outcome with examples and simulation results obtained using the NS2 simulator.
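A toy sketch of the weight-proportional sharing that underlies WFQ, plus a simple iterative redistribution when some flows demand less than their fair share. The function names and numbers are illustrative only; the paper's model additionally uses arrival rates and packet lengths.

```python
# Toy sketch of WFQ sharing (illustrative, not the paper's full model).
def wfq_shares(link_speed, weights):
    """Bandwidth of each backlogged flow, proportional to its weight."""
    total = sum(weights)
    return [link_speed * w / total for w in weights]

def wfq_allocation(link, weights, demands):
    """Iteratively redistribute capacity unused by light flows."""
    alloc = [0.0] * len(weights)
    active = set(range(len(weights)))
    capacity = link
    while active:
        total_w = sum(weights[i] for i in active)
        satisfied = {i for i in active
                     if demands[i] <= capacity * weights[i] / total_w}
        if not satisfied:
            # Remaining flows are all backlogged: split by weight.
            for i in active:
                alloc[i] = capacity * weights[i] / total_w
            break
        for i in satisfied:
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= satisfied
    return alloc

# Hypothetical 100 Mbit/s link, three flows with weights 1:2:5:
print(wfq_shares(100.0, [1, 2, 5]))  # [12.5, 25.0, 62.5]
print(wfq_allocation(100.0, [1, 2, 5], [5.0, 50.0, 100.0]))
# flow 0 takes only 5.0; its leftover is split 2:5 between the others
```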
Yearly-averaged daily usefulness efficiency of heliostat surfaces
Energy Technology Data Exchange (ETDEWEB)
Elsayed, M.M.; Habeebuallah, M.B.; Al-Rabghi, O.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia))
1992-08-01
An analytical expression for estimating the instantaneous usefulness efficiency of a heliostat surface is obtained. A systematic procedure is then introduced to calculate the usefulness efficiency even when blocking and shadowing overlap on a heliostat surface. For possible estimation of the reflected energy from a given field, the local yearly-averaged daily usefulness efficiency is calculated. This efficiency is found to depend on site latitude angle, radial distance from the tower measured in tower heights, heliostat position azimuth angle, and the radial spacing between heliostats. Charts for the local yearly-averaged daily usefulness efficiency are presented for φ = 0°, 15°, 30°, and 45° N. These charts can be used in calculating the reflected radiation from a given cell. Utilization of these charts is demonstrated.
Model averaging for semiparametric additive partial linear models
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
To improve the prediction accuracy of semiparametric additive partial linear models (APLM) and the coverage probability of confidence intervals of the parameters of interest, we explore a focused information criterion for model selection among APLM after we estimate the nonparametric functions by polynomial spline smoothing, and introduce a general model average estimator. The major advantage of the proposed procedures is that iterative backfitting implementation is avoided, which thus results in gains in computational simplicity. The resulting estimators are shown to be asymptotically normal. A simulation study and a real data analysis are presented for illustration.
40 CFR 600.510-08 - Calculation of average fuel economy.
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...
40 CFR 600.510-86 - Calculation of average fuel economy.
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...
40 CFR 600.510-93 - Calculation of average fuel economy.
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy §...
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
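The contrast between smooth model-averaging weights and the 0-1 weights of model selection can be sketched with Akaike weights; all AIC values and per-model predictions below are hypothetical.

```python
# Sketch: Akaike weights vs. 0-1 selection weights (hypothetical values).
import math

def akaike_weights(aics):
    """exp(-delta_i / 2) weights, normalized to sum to one."""
    best = min(aics)
    raw = [math.exp(-0.5 * (a - best)) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

aics = [100.0, 101.5, 104.0]   # hypothetical AIC of three candidate models
preds = [2.0, 2.4, 1.8]        # hypothetical point predictions

w = akaike_weights(aics)
averaged = sum(wi * pi for wi, pi in zip(w, preds))  # model averaging
selected = preds[aics.index(min(aics))]              # 0-1 weights: pick one
print(w)
print(averaged, selected)
```

Selection commits entirely to one model, while averaging hedges across them; the paper's point is that neither strategy uniformly dominates in risk.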
On the Combination Procedure of Correlated Errors
Erler, Jens
2015-01-01
When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors.
On the combination procedure of correlated errors
Energy Technology Data Exchange (ETDEWEB)
Erler, Jens [Universidad Nacional Autonoma de Mexico, Instituto de Fisica, Mexico D.F. (Mexico)
2015-09-15
When averages of different experimental determinations of the same quantity are computed, each with statistical and systematic error components, then frequently the statistical and systematic components of the combined error are quoted explicitly. These are important pieces of information since statistical errors scale differently and often more favorably with the sample size than most systematical or theoretical errors. In this communication we describe a transparent procedure by which the statistical and systematic error components of the combination uncertainty can be obtained. We develop a general method and derive a general formula for the case of Gaussian errors with or without correlations. The method can easily be applied to other error distributions, as well. For the case of two measurements, we also define disparity and misalignment angles, and discuss their relation to the combination weight factors. (orig.)
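A sketch of the standard uncorrelated Gaussian case: an inverse-variance weighted average with the statistical and systematic components of the combined error propagated separately. The measurements are hypothetical, and correlations, which the paper's general formula covers, are omitted here.

```python
# Sketch under uncorrelated Gaussian assumptions (hypothetical data):
# inverse-variance weighted average, propagating the statistical and
# systematic error components of the combination separately.
import math

def combine(values, stats, systs):
    weights = [1.0 / (st**2 + sy**2) for st, sy in zip(stats, systs)]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    stat = math.sqrt(sum((w / wsum)**2 * st**2
                         for w, st in zip(weights, stats)))
    syst = math.sqrt(sum((w / wsum)**2 * sy**2
                         for w, sy in zip(weights, systs)))
    return mean, stat, syst

# Two hypothetical determinations of the same quantity:
mean, stat, syst = combine([10.0, 10.6], stats=[0.2, 0.3], systs=[0.1, 0.1])
print(mean, stat, syst)  # the quadrature sum of stat and syst is the
                         # usual combined inverse-variance uncertainty
```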
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
Nunes-Alves, Ariane; Arantes, Guilherme Menegon
2014-08-25
Accurate calculations of free energies involved in small-molecule binding to a receptor are challenging. Interactions between ligand, receptor, and solvent molecules have to be described precisely, and a large number of conformational microstates has to be sampled, particularly for ligand binding to a flexible protein. Linear interaction energy models are computationally efficient methods that have found considerable success in the prediction of binding free energies. Here, we parametrize a linear interaction model for implicit solvation with coefficients adapted by ligand and binding site relative polarities in order to predict ligand binding free energies. Results obtained for a diverse series of ligands suggest that the model has good predictive power and transferability. We also apply implicit ligand theory and propose approximations to average contributions of multiple ligand-receptor poses built from a protein conformational ensemble and find that exponential averages require proper energy discrimination between plausible binding poses and false-positives (i.e., decoys). The linear interaction model and the averaging procedures presented can be applied independently of each other and of the method used to obtain the receptor structural representation.
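The sensitivity of exponential averaging to decoys can be sketched with a Boltzmann-weighted average over pose energies; kT and all energies below are illustrative, not values from the study.

```python
# Sketch of an exponential (Boltzmann-weighted) average over binding
# energies of several poses: a single decoy with a spuriously low
# energy dominates the average, which is why pose discrimination
# matters. All values are illustrative.
import math

def exponential_average(energies, kT=0.593):  # ~kcal/mol at 298 K
    return -kT * math.log(
        sum(math.exp(-e / kT) for e in energies) / len(energies))

plausible = [-8.0, -7.5, -7.8]       # hypothetical plausible poses
with_decoy = plausible + [-15.0]     # hypothetical false-positive pose
print(exponential_average(plausible))
print(exponential_average(with_decoy))  # dragged toward the decoy
```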
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
High Average Power Yb:YAG Laser
Energy Technology Data Exchange (ETDEWEB)
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
Books average previous decade of economic misery.
Directory of Open Access Journals (Sweden)
R Alexander Bentley
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
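The lagged moving-average construction can be sketched as follows; all series values are invented for illustration, as the study's actual indices come from digitized books and U.S. economic data.

```python
# Sketch of the lag construction: correlate a "literary" series with the
# trailing decade average of economic misery (inflation + unemployment).
# All numbers are invented for illustration.
def trailing_mean(series, window):
    """Mean of the `window` values up to and including each point."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Made-up yearly economic misery values (inflation + unemployment, %):
economic = [4, 5, 7, 9, 12, 11, 9, 8, 6, 5, 6, 7, 9, 10, 8]
decade_avg = trailing_mean(economic, 10)
# Toy literary misery series: the lagged decade average plus small noise.
literary = [m + d for m, d in
            zip(decade_avg, [0.2, -0.1, 0.3, -0.2, 0.1, 0.0])]
print(pearson(literary, decade_avg))  # strong positive correlation
```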
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach ...
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear decisive difference between singular and non-singular cosmologies.
Model averaging and muddled multimodel inferences
Cade, Brian S.
2015-01-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the
Energy Technology Data Exchange (ETDEWEB)
NONE
1993-12-31
From the title, the reader is led to expect a broad practical treatise on combustion and combustion devices. Remarkably, for a book of modest dimension, the author is able to deliver. The text is organized into 12 Chapters, broadly treating three major areas: combustion fundamentals -- introduction (Ch. 1), thermodynamics (Ch. 2), fluid mechanics (Ch. 7), and kinetics (Ch. 8); fuels -- coal, municipal solid waste, and other solid fuels (Ch. 4), liquid (Ch. 5) and gaseous (Ch. 6) fuels; and combustion devices -- fuel cells (Ch. 3), boilers (Ch. 4), Otto (Ch. 10), diesel (Ch. 11), and Wankel (Ch. 10) engines and gas turbines (Ch. 12). Although each topic could warrant a complete text on its own, the author addresses each of these major themes with reasonable thoroughness. Also, the book is well documented with a bibliography, references, a good index, and many helpful tables and appendices. In short, Applied Combustion does admirably fulfill the author's goal for a wide engineering science introduction to the general subject of combustion.
Márquez Ruiz, Andrés; Martínez Guardiola, Francisco Javier; Gallego Rico, Sergi; Ortuño Sánchez, Manuel; Beléndez Vázquez, Augusto; Pascual Villalobos, Inmaculada
2014-01-01
Parallel-aligned liquid crystal on silicon (PA-LCoS) displays have become the most attractive spatial light modulator device for a wide range of applications, due to their superior resolution and light efficiency, added to their phase-only capability. Proper characterization of their linear retardance and phase flicker instabilities is a must to obtain an enhanced application of PA-LCoS. We present a novel polarimetric method, based on Stokes polarimetry, we have recently proposed for the mea...
Enhancing Trust in the Smart Grid by Applying a Modified Exponentially Weighted Averages Algorithm
2012-06-01
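The title names a modified exponentially weighted averages algorithm. The standard (unmodified) EWMA update that trust schemes of this kind build on can be sketched as follows; the `scores` input and `alpha` value are illustrative assumptions, and the paper's specific modification is not reproduced here:

```python
def ewma(values, alpha=0.3):
    """Standard exponentially weighted moving average:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    s = values[0]
    out = [s]
    for x in values[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

scores = [1.0, 1.0, 0.0, 1.0]   # e.g. trust observations for a grid node
print(ewma(scores))
```

A higher `alpha` weights recent observations more heavily, so a single bad observation lowers the trust score quickly but is also forgiven quickly.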
Institute of Scientific and Technical Information of China (English)
肖红
2012-01-01
Applying summary procedures to criminal cases in which the defendant pleads guilty is the inevitable choice for resolving the contradiction of recent years between rising caseloads and limited staff in the procuratorial organs, and it is also an inevitable requirement of maintaining justice and protecting the legitimate rights and interests of the parties. This article analyzes the current situation of prosecutors appearing in court in criminal cases handled under the summary procedure, interprets the impact and challenges that the amendment of the Code of Criminal Procedure brings to the work of grass-roots procuratorates, and explores new initiatives for implementing prosecutors' court appearances in support of the prosecution in criminal cases handled under the summary procedure.
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
High average-power induction linacs
Energy Technology Data Exchange (ETDEWEB)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering
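A discrete-time sketch of quantized consensus dynamics of this kind, an Euler discretization with a uniform quantizer; the ring topology, step sizes, and initial states are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def q(x, step=0.1):
    """Uniform quantizer."""
    return step * np.round(x / step)

# Euler discretization of x_i' = sum_j a_ij (q(x_j) - q(x_i)) on a ring of 4 agents
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([0.0, 1.0, 2.0, 3.0])
avg = x.mean()          # the dynamics preserve the average (A is symmetric)
dt = 0.05
for _ in range(2000):
    qx = q(x)
    x = x + dt * (A @ qx - A.sum(axis=1) * qx)

# "practical consensus": states end up within a quantization step of the average
print(x, avg)
```

Because agents only exchange quantized states, exact consensus is generally unreachable; the simulation shows states settling into a neighborhood of the initial average, which is what "practical consensus" refers to.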
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections and contain important material which
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation...
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen
Full averaging of fuzzy impulsive differential inclusions
Directory of Open Access Journals (Sweden)
Natalia V. Skripnik
2010-09-01
In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
Materials for high average power lasers
Energy Technology Data Exchange (ETDEWEB)
Marion, J.E.; Pertica, A.J.
1989-01-01
Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...
A dynamic analysis of moving average rules
C. Chiarella; X.Z. He; C.H. Hommes
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type use
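A minimal sketch of one such moving average rule, the short/long MA crossover; the window lengths and price series are illustrative, and this is a generic textbook rule rather than necessarily one of the rules studied in the paper:

```python
def moving_average(prices, n):
    """Simple n-period moving average (valid windows only)."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def ma_crossover_signal(prices, short=3, long=5):
    """+1 (hold long) when the short MA is above the long MA, else -1."""
    s = moving_average(prices, short)[long - short:]   # align with the long MA
    l = moving_average(prices, long)
    return [1 if a > b else -1 for a, b in zip(s, l)]

prices = [10, 11, 12, 13, 12, 11, 10, 9]
print(ma_crossover_signal(prices))   # -> [1, 1, -1, -1]
```

The rule goes long while the short-run average leads the long-run average and reverses when the trend turns, which is exactly the kind of trend-chasing behavior whose market-level consequences the paper analyzes.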
Cortical evoked potentials recorded from the guinea pig without averaging.
Walloch, R A
1975-01-01
Potentials evoked by tonal pulses and recorded with a monopolar electrode on the pial surface over the auditory cortex of the guinea pig are presented. These potentials are compared with average potentials recorded in previous studies with an electrode on the dura. The potentials recorded by these two techniques have similar waveforms, peak latencies and thresholds. They appear to be generated within the same region of the cerebral cortex. As can be expected, the amplitude of the evoked potentials recorded from the pial surface is larger than that recorded from the dura. Consequently, averaging is not needed to extract the evoked potential once the dura is removed. The thresholds for the evoked cortical potential are similar to behavioral thresholds for the guinea pig at high frequencies; however, evoked potential thresholds are elevated over behavioral thresholds at low frequencies. The removal of the dura and the direct recording of the evoked potential appears most appropriate for acute experiments. The recording of an evoked potential with dura electrodes employing averaging procedures appears most appropriate for chronic studies.
Fully variational average atom model with ion-ion correlations.
Starrett, C E; Saumon, D
2012-02-01
An average atom model for dense ionized fluids that includes ion correlations is presented. The model assumes spherical symmetry and is based on density functional theory, the integral equations for uniform fluids, and a variational principle applied to the grand potential. Starting from density functional theory for a mixture of classical ions and quantum mechanical electrons, an approximate grand potential is developed, with an external field being created by a central nucleus fixed at the origin. Minimization of this grand potential with respect to electron and ion densities is carried out, resulting in equations for effective interaction potentials. A third condition resulting from minimizing the grand potential with respect to the average ion charge determines the noninteracting electron chemical potential. This system is coupled to a system of point ions and electrons with an ion fixed at the origin, and a closed set of equations is obtained. Solution of these equations results in a self-consistent electronic and ionic structure for the plasma as well as the average ionization, which is continuous as a function of temperature and density. Other average atom models are recovered by application of simplifying assumptions.
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.
Averaged Extended Tree Augmented Naive Classifier
Directory of Open Access Journals (Sweden)
Aaron Meehan
2015-07-01
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
The Ghirlanda-Guerra identities without averaging
Chatterjee, Sourav
2009-01-01
The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
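The three steps described (map spike trains to functions, average the functions, greedily map the average back to a spike train) can be sketched as follows. This is a simplified illustration using Gaussian kernel smoothing and a grid search; the kernel, its width, and the data are assumptions, not the authors' exact algorithm:

```python
import numpy as np

def to_function(train, t, sigma=0.01):
    """Map a spike train (list of spike times) to a smooth function
    by summing Gaussian kernels on a time grid t."""
    return sum(np.exp(-(t - s) ** 2 / (2 * sigma ** 2)) for s in train)

def central_train(trains, t, n_spikes, sigma=0.01):
    """Greedy sketch: add spikes one at a time, each chosen to best
    match the trial-averaged function."""
    target = np.mean([to_function(tr, t, sigma) for tr in trains], axis=0)
    spikes, f = [], np.zeros_like(t)
    for _ in range(n_spikes):
        # try each grid point as the next spike time, keep the best
        errs = [np.sum((target - (f + to_function([s], t, sigma))) ** 2) for s in t]
        best = t[int(np.argmin(errs))]
        spikes.append(best)
        f = f + to_function([best], t, sigma)
    return sorted(spikes)

t = np.linspace(0, 0.2, 201)
trials = [[0.05, 0.15], [0.052, 0.148], [0.048, 0.155]]
print(central_train(trials, t, n_spikes=2))
```

With three jittered trials, the greedy search places the central spikes near the two underlying event times, summarizing the trials with a single representative train.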
Geomagnetic effects on the average surface temperature
Ballatore, P.
Several results have previously shown that solar activity can be related to cloudiness and the surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and the solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database) that represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Velusamy, Thangasamy
2004-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk averaged synthetic spectra, light-cur...
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
STRONG APPROXIMATION FOR MOVING AVERAGE PROCESSES UNDER DEPENDENCE ASSUMPTIONS
Institute of Scientific and Technical Information of China (English)
NONE
2008-01-01
Let {X_t, t ≥ 1} be a moving average process defined by X_t = Σ_{k=0}^{∞} a_k ξ_{t-k}, where {a_k, k ≥ 0} is a sequence of real numbers and {ξ_t, -∞ < t < ∞} is a doubly infinite sequence of strictly stationary dependent random variables. Under conditions on {a_k, k ≥ 0} which entail that {X_t, t ≥ 1} is either a long memory process or a linear process, the strong approximation of {X_t, t ≥ 1} by a Gaussian process is studied. Finally, the results are applied to obtain the strong approximation of a long memory process by a fractional Brownian motion and the laws of the iterated logarithm for moving average processes.
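A finite-order truncation of such a moving average process is straightforward to simulate. The following sketch (coefficient sequence and sample size are illustrative) checks the sample variance against the theoretical value Σ a_k² for independent standard normal innovations:

```python
import numpy as np

rng = np.random.default_rng(1)

def moving_average_process(a, T):
    """Simulate X_t = sum_{k=0}^{K} a_k * xi_{t-k} with i.i.d. standard
    normal innovations (a finite truncation of the infinite-order process)."""
    K = len(a) - 1
    xi = rng.standard_normal(T + K)
    return np.array([np.dot(a, xi[t + K - np.arange(K + 1)]) for t in range(T)])

# short-memory example: geometrically decaying coefficients a_k = 0.5^k
a = 0.5 ** np.arange(20)
X = moving_average_process(a, T=1000)
# theoretical variance: sum(a_k^2) = (1 - 0.25^20) / (1 - 0.25) ~ 4/3
print(X.mean(), X.var())
```

Long memory corresponds to coefficients that decay so slowly that Σ a_k diverges; the geometric decay used here gives a short-memory linear process, the simpler of the two cases treated in the paper.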
De Luca, G.; Magnus, J.R.
2011-01-01
This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squa
Edgeworth expansion for the pre-averaging estimator
DEFF Research Database (Denmark)
Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro
In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.
Analytical network-averaging of the tube model:. Rubber elasticity
Khiêm, Vu Ngoc; Itskov, Mikhail
2016-10-01
In this paper, a micromechanical model for rubber elasticity is proposed on the basis of analytical network-averaging of the tube model and by applying a closed-form of the Rayleigh exact distribution function for non-Gaussian chains. This closed-form is derived by considering the polymer chain as a coarse-grained model on the basis of the quantum mechanical solution for finitely extensible dumbbells (Ilg et al., 2000). The proposed model includes very few physically motivated material constants and demonstrates good agreement with experimental data on biaxial tension as well as simple shear tests.
Condition monitoring of gearboxes using synchronously averaged electric motor signals
Ottewill, J. R.; Orkisz, M.
2013-07-01
Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
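The synchronous averaging step itself (resample the signal onto a fixed angular grid, then average over shaft revolutions) can be sketched as follows; the toy signal, noise level, and bin count are illustrative assumptions:

```python
import numpy as np

def synchronous_average(signal, angle, n_bins=360):
    """Synchronous signal average: bin samples by angular position within
    the revolution and average across revolutions.
    `angle` is the (monotonic) shaft angle in revolutions for each sample."""
    frac = angle % 1.0                          # position within the revolution
    bins = (frac * n_bins).astype(int) % n_bins
    sums = np.bincount(bins, weights=signal, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return sums / np.maximum(counts, 1)

# toy example: a once-per-revolution component buried in broadband noise
rng = np.random.default_rng(2)
revs = np.linspace(0, 50, 50_000)               # 50 shaft revolutions
sig = np.sin(2 * np.pi * revs) + rng.standard_normal(revs.size)
avg = synchronous_average(sig, revs)
# averaging suppresses the asynchronous noise, leaving the shaft-periodic component
print(avg[:3])
```

Components synchronous with the shaft (such as gear-mesh and tooth-defect signatures) reinforce across revolutions, while asynchronous noise averages toward zero at a rate of roughly one over the square root of the number of revolutions.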
Averaged Behaviour of Nonconservative Coupled Oscillators
Bakri, T.
2007-01-01
In this Thesis we study the dynamics of systems of two and three coupled oscillators by efficiently applying Normal Form theory. The subject of Coupled oscillators plays an important part in dynamical systems. It has a wide range of applications in various fields like physics, biology, economics and
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Fluctuations of wavefunctions about their classical average
Bénet, L; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Sparsity averaging for radio-interferometric imaging
Carrillo, Rafael E; Wiaux, Yves
2014-01-01
We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
A sixth order averaged vector field method
Li, Haochen; Wang, Yushun; Qin, Mengzhao
2014-01-01
In this paper, based on the theory of rooted trees and B-series, we propose concrete formulas of the substitution law for trees of order ≤ 5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
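A sketch of the basic (second-order) AVF step that the paper extends, z_{n+1} = z_n + h ∫₀¹ f((1−s)z_n + s z_{n+1}) ds, solved here by fixed-point iteration with Gauss–Legendre quadrature for the integral; the sixth-order B-series integrator itself is not reproduced:

```python
import numpy as np

def avf_step(f, z, h, quad_nodes=5, iters=50):
    """One step of the (second-order) averaged vector field method:
    z1 = z + h * int_0^1 f((1-s) z + s z1) ds, solved by fixed-point
    iteration; the integral uses Gauss-Legendre quadrature on [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(quad_nodes)
    s, w = 0.5 * (x + 1), 0.5 * w            # map nodes from [-1, 1] to [0, 1]
    z1 = z.copy()
    for _ in range(iters):
        integral = sum(wi * f((1 - si) * z + si * z1) for si, wi in zip(s, w))
        z1 = z + h * integral
    return z1

# harmonic oscillator: H(q, p) = (q^2 + p^2)/2, vector field f = (p, -q)
f = lambda z: np.array([z[1], -z[0]])
z = np.array([1.0, 0.0])
H0 = 0.5 * (z ** 2).sum()
for _ in range(1000):
    z = avf_step(f, z, h=0.1)
print(0.5 * (z ** 2).sum() - H0)   # energy error stays at round-off level
```

For this linear vector field the AVF integral reduces to the implicit midpoint rule, so the energy is conserved exactly up to the fixed-point tolerance; for general Hamiltonian systems AVF conserves energy by construction, which is the property the sixth-order extension preserves.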
Fluctuations of wavefunctions about their classical average
Energy Technology Data Exchange (ETDEWEB)
Benet, L [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Flores, J [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Hernandez-Saldana, H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Izrailev, F M [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Leyvraz, F [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico); Seligman, T H [Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)
2003-02-07
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
Grassmann Averages for Scalable Robust PCA
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
Quantum gravity unification via transfinite arithmetic and geometrical averaging
Energy Technology Data Exchange (ETDEWEB)
El Naschie, M.S. [Department of Physics, University of Alexandria (Egypt); Donghua University, Shanghai (China); Department of Astrophysics, University of Cairo (Egypt)], E-mail: Chaossf@aol.com
2008-01-15
In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε^(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical value of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε^(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Energy Technology Data Exchange (ETDEWEB)
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths are programmed into gate arrays, the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
Local average height distribution of fluctuating interfaces
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1+1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1+1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2+1 dimensions.
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities like temperature or density at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ${\sim}10^4$ outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is $(7.8\pm0.6)\times10^{-4}$ (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about $1.1\pm0.1$ K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the most promising theoretical frameworks for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis no longer holds in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
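The distinction between the phasic and mass-weighted averages can be made concrete with a two-realization toy example (all numbers below are invented for illustration; with equal particle counts in every realization the two averages would coincide):

```python
import numpy as np

# Two hypothetical ensemble realizations of the same control volume,
# with different particle counts n (concentration) and grain velocities u.
# Phasic (ensemble) average:      <u> = mean over realizations of u
# Mass-weighted (Favre) average:  {u} = sum(n_i * u_i) / sum(n_i)
n = np.array([4.0, 8.0])   # particles per realization (varies between realizations)
u = np.array([1.0, 2.0])   # mean grain velocity per realization

phasic = u.mean()                        # = 1.5
mass_weighted = (n * u).sum() / n.sum()  # = (4 + 16) / 12 = 1.666...
```

Because n differs between the two realizations, the mass-weighted average is pulled toward the velocity of the more heavily populated realization, which is exactly the discrepancy the abstract describes.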
UV transmittance during the crosslinking procedure: tunable treatment
Lincoln, Victor A. C.; Mello, Marcio M.; Ventura, Liliane
2014-02-01
The transmittance of UVA light through the in vitro human cornea over a thickness of 400 μm during the corneal collagen cross-linking procedure has been measured using an optical fiber (600 μm core diameter) fixed just before the cornea and attached to a spectrophotometer. The 10 corneas (on average 6 days post-mortem) were washed with saline and cross-linked with the currently used protocol. To enhance absorption of UV radiation, Riboflavin solution (0.1% and 400 mOsm) was applied prior to and during exposure. The UVA beam (365 nm +/- 5 nm at 3 mW/cm2 +/- 0.003 mW/cm2) was focused directly onto the corneal stroma. The measured average transmittance of the cornea without Riboflavin was 64.1%. Preceding the irradiation, but after 6 applications of Riboflavin at 5 min intervals (a total of 30 min), transmittance decreased to 21.1%. The 30 min of irradiation were then accompanied by an additional 6 applications of Riboflavin at 5 min intervals (for a total treatment time of 1 h), resulting in a further decrease in transmittance to 12.2%, which is in agreement with the current literature. The average transmittance in terms of energy during the 30 minute irradiation procedure fluctuated from 0.63 to 0.37 mW/cm2. These results indicate different levels of UV transmittance during treatment, suggesting a new personalized treatment with tunable UV irradiation power.
Average resonance parameters evaluation for actinides
Energy Technology Data Exchange (ETDEWEB)
Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)
1997-03-01
New evaluated $\langle\Gamma_n^0\rangle$ and
Communication: Green-Kubo approach to the average swim speed in active Brownian systems
Sharma, A.; Brader, J. M.
2016-10-01
We develop an exact Green-Kubo formula relating nonequilibrium averages in systems of interacting active Brownian particles to equilibrium time-correlation functions. The method is applied to calculate the density-dependent average swim speed, which is a key quantity entering coarse grained theories of active matter. The average swim speed is determined by integrating the equilibrium autocorrelation function of the interaction force acting on a tagged particle. Analytical results are validated using Brownian dynamics simulations.
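The core numerical recipe (estimate an equilibrium autocorrelation function from a time series, then integrate it over time) can be sketched on synthetic data. This is a generic Green-Kubo-style illustration, not the paper's specific swim-speed formula; the synthetic signal and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stationary signal with exponentially decaying memory
# (a discretized Ornstein-Uhlenbeck process); it stands in for the
# equilibrium interaction-force time series of a tagged particle.
dt, tau, n = 0.01, 0.5, 100_000
f = np.zeros(n)
for i in range(1, n):
    f[i] = f[i - 1] * (1.0 - dt / tau) + rng.normal(0.0, np.sqrt(dt))

# Estimate the autocorrelation C(t) = <f(t0 + t) f(t0)> up to a cutoff lag
max_lag = 300
c = np.array([np.mean(f[: n - k] * f[k:]) for k in range(max_lag)])

# Green-Kubo-style coefficient: the time integral of C(t)
kubo_integral = c.sum() * dt   # simple rectangle rule
# Analytically, C(t) ~ (tau/2) * exp(-t/tau), so the integral is ~ tau**2 / 2
```

The cutoff lag plays the role of the upper integration limit; in practice it must be chosen long enough that C(t) has decayed but short enough that statistical noise does not dominate the tail.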
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Directory of Open Access Journals (Sweden)
Luis C González
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros G; Wainwright, Martin J
2007-01-01
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log ...
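For orientation, the standard randomized pairwise gossip that the paper improves upon can be sketched in a few lines (ring topology, invented parameters; the geographic variant additionally routes values greedily toward random target locations):

```python
import random

# Standard randomized pairwise gossip on a ring of n nodes: at each step a
# random node averages its value with a random ring neighbor. The total sum
# (hence the average) is preserved exactly while the spread shrinks.
random.seed(1)
n = 20
x = [float(i) for i in range(n)]
true_avg = sum(x) / n   # 9.5

for _ in range(20_000):
    i = random.randrange(n)
    j = (i + random.choice((-1, 1))) % n   # pick a ring neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])

spread = max(x) - min(x)   # shrinks toward 0 as all nodes approach true_avg
```

The slow shrinkage of `spread` on the ring is precisely the slow-mixing inefficiency the abstract refers to: information diffuses only between neighbors, so many redundant exchanges are needed.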
Bivariate phase-rectified signal averaging
Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg
2008-01-01
Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times where the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.
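One plausible reading of the bivariate construction is: choose anchor points where a *trigger* signal increases (the phase rectification), then align and average windows of the *target* signal around those anchors. This is a sketch of that interpretation on synthetic data, not the authors' exact definition; the signals and window length are invented:

```python
import numpy as np

def bivariate_prsa(trigger, target, half_window):
    """Average windows of `target` around increase points of `trigger`."""
    anchors = [i for i in range(1, len(trigger))
               if trigger[i] > trigger[i - 1]]              # phase rectification
    anchors = [i for i in anchors
               if half_window <= i < len(target) - half_window]
    windows = np.array([target[i - half_window:i + half_window]
                        for i in anchors])
    return windows.mean(axis=0)

t = np.arange(2000)
a = np.sin(2 * np.pi * t / 50)                 # trigger with 50-sample period
b = (np.sin(2 * np.pi * t / 50 + 0.5)
     + 0.1 * np.random.default_rng(0).normal(size=t.size))  # noisy target
prsa = bivariate_prsa(a, b, half_window=25)
```

Because the anchors are phase-locked to the trigger, the quasi-periodic component of the target survives the averaging while the noise cancels, which is the essential PRSA mechanism.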
Hedge algorithm and Dual Averaging schemes
Baes, Michel
2011-01-01
We show that the Hedge algorithm, a method that is widely used in Machine Learning, can be interpreted as a particular instance of Dual Averaging schemes, which have recently been introduced by Nesterov for regret minimization. Based on this interpretation, we establish three alternative variants of the Hedge algorithm: one in the form of the original method but with optimal parameters, one that requires less a priori information, and one that is better adapted to the context of the Hedge algorithm. All our modified methods have convergence results that are better than or at least as good as the performance guarantees of the vanilla method. In numerical experiments, our methods significantly outperform the original scheme.
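The classical Hedge update that the paper reinterprets is the standard multiplicative-weights rule: each expert's weight is proportional to the exponential of its negated cumulative loss. A minimal sketch (the loss sequence and learning rate eta are invented for illustration):

```python
import math

def hedge(losses, eta):
    """losses: list of per-round loss vectors, one entry per expert."""
    k = len(losses[0])
    w = [1.0 / k] * k
    cum = [0.0] * k
    for loss in losses:
        cum = [c + l for c, l in zip(cum, loss)]     # cumulative losses
        z = sum(math.exp(-eta * c) for c in cum)
        w = [math.exp(-eta * c) / z for c in cum]    # exponential weighting
    return w

# Expert 0 always incurs the smallest loss, so its weight comes to dominate.
rounds = [[0.0, 1.0, 1.0]] * 30
w = hedge(rounds, eta=0.3)
```

The Dual Averaging view reads `cum` as the accumulated (sub)gradient and the exponential normalization as the prox-mapping induced by the entropy function, which is what allows Nesterov-style analysis to be carried over.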
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
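The AGDI construction as described (accumulate the absolute silhouette differences between adjacent frames) reduces to a few lines; the toy "walking" sequence below, a one-pixel-wide silhouette sweeping across a small grid, is invented for illustration:

```python
import numpy as np

def agdi(frames):
    """Average gait differential image from a sequence of binary silhouettes."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # |frame[t+1] - frame[t]|
    return diffs.mean(axis=0)                 # accumulate and average

# Toy sequence: a diagonal silhouette shifting one column per frame on a 4x4 grid
frames = [np.roll(np.eye(4), shift=s, axis=1) for s in range(4)]
img = agdi(frames)
```

Pixels that change often between frames get large AGDI values (kinetic information), while pixels covered in every frame contribute nothing to the differences, so static regions are encoded by their absence of change.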
Ju, Dianshu; Dan, Kazuo; Fujiwara, Hiroyuki; Morikawa, Nobuyuki
2016-04-01
We proposed a procedure for evaluating fault parameters of asperity models for predicting strong ground motions from inland earthquakes caused by long strike-slip faults. In order to obtain averaged dynamic stress drops, we adopted the formula obtained by dynamic fault rupturing simulations for surface faults of lengths from 15 to 100 km, because the formula for the averaged static stress drops of circular cracks, commonly adopted in existing procedures, cannot be applied to surface faults or long faults. The averaged dynamic stress drops were estimated to be 3.4 MPa over the entire fault and 12.2 MPa on the asperities, from the data of 10 earthquakes in Japan and 13 earthquakes in other countries. A significant feature of the procedure is that the average slip on seismic faults longer than about 80 km is constant, about 300 cm. In order to validate our proposed procedure, we made a model for a 141 km long strike-slip fault by our proposed procedure for strike-slip faults, predicted ground motions, and showed that the resultant motions agreed well with the records of the 1999 Kocaeli, Turkey, earthquake (Mw 7.6) and with the peak ground accelerations and peak ground velocities given by the GMPE of Si and Midorikawa (1999).
Explicit expressions and recurrence formulas of radial average value for N-dimensional hydrogen atom
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
In this paper, two recurrence formulas for the radial average values of the N-dimensional hydrogen atom are derived. The explicit results can be applied to discuss the average value of the centrifugal potential energy and other physical quantities. The corresponding results for the usual hydrogen atom are contained in the more general conclusions of this paper as special cases.
Complex inner product averaging method for calculating normal form of ODE
Institute of Scientific and Technical Information of China (English)
陈予恕; 孙洪军
2001-01-01
This paper puts forward a complex inner product averaging method for calculating the normal form of ODEs. Compared with the conventional averaging method, the theoretical analysis process has such a simple form that it can easily be implemented as a computer program. The results can be applied to both autonomous and non-autonomous systems. Finally, an example is solved to verify the method.
Wang, Ying; Laborda, Eduardo; Salter, Chris; Crossley, Alison; Compton, Richard G
2012-10-21
A fast and cheap in situ approach is presented for the characterization of gold nanoparticles from electrochemical experiments. The average size and number of nanoparticles deposited on a glassy carbon electrode are determined from the values of the total surface area and amount of gold obtained by lead underpotential deposition and by stripping of gold in hydrochloric acid solution, respectively. The morphology of the nanoparticle surface can also be analyzed from the "fingerprint" in lead deposition/stripping experiments. The method is tested through the study of gold nanoparticles deposited on a glassy carbon substrate by the seed-mediated growth method, which enables easy control of the nanoparticle size. The procedure is also applied to the characterization of supplied gold nanoparticles. The results are in satisfactory agreement with those obtained via scanning electron microscopy.
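The sizing logic can be illustrated with a back-of-envelope sphere model (the numbers below are invented, not the paper's data): lead underpotential deposition yields the total gold surface area A, gold stripping yields the total amount and hence volume V, and for N equal spheres of radius r one has A = 4πr²N and V = (4/3)πr³N, so r = 3V/A.

```python
import math

# Hypothetical measured totals for an ensemble of gold nanoparticles
A = 2.4e-6   # total surface area, m^2   (from Pb underpotential deposition)
V = 1.6e-14  # total gold volume, m^3    (from anodic stripping of gold)

# Equal-sphere model: r = 3V/A, then back out the particle count
r = 3.0 * V / A                  # average radius, m  (here 20 nm)
N = A / (4.0 * math.pi * r**2)   # number of particles
```

Real ensembles are polydisperse, so r is an area-weighted effective radius; the paper's "fingerprint" analysis addresses surface morphology that this idealized model ignores.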
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Bariatric Surgery Procedures. Bariatric surgical procedures cause weight loss by ... Bariatric procedures also often cause hormonal changes. Most weight loss surgeries today are performed using minimally invasive techniques (laparoscopic) ...
On the averaging of cardiac diffusion tensor MRI data: the effect of distance function selection
Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.
2016-11-01
Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) metrics were judged by quantitative—rather than qualitative—criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the ‘swelling effect’ occurrence following Euclidean averaging was found to be too unimportant to be worth consideration.
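Two of the three tensor means being compared, Euclidean and log-Euclidean, can be sketched directly (the affine-invariant Riemannian mean requires an iterative solver and is omitted here). The example tensors are invented, and they also exhibit the "swelling effect" mentioned at the end: the Euclidean mean inflates the determinant while the log-Euclidean mean does not.

```python
import numpy as np

# Matrix log/exp for symmetric positive-definite tensors via eigendecomposition
def _logm_spd(a):
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def _expm_sym(a):
    w, v = np.linalg.eigh(a)
    return (v * np.exp(w)) @ v.T

def euclidean_mean(tensors):
    return np.mean(tensors, axis=0)

def log_euclidean_mean(tensors):
    return _expm_sym(np.mean([_logm_spd(t) for t in tensors], axis=0))

# Two diffusion tensors with reciprocal leading eigenvalues (det = 2 and 0.5)
d1 = np.diag([2.0, 1.0, 1.0])
d2 = np.diag([0.5, 1.0, 1.0])
m_euc = euclidean_mean([d1, d2])      # leading eigenvalue 1.25: "swelling"
m_log = log_euclidean_mean([d1, d2])  # identity: geometric interpolation
```

The log-Euclidean mean is the identity here because the matrix logarithms of `d1` and `d2` cancel exactly, illustrating why it preserves determinants where the Euclidean mean does not.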
Vagena, E.; Stoulos, S.
2017-01-01
A bremsstrahlung photon beam delivered by a linear electron accelerator has been used to experimentally determine near-threshold photonuclear cross section data of nuclides. For the first time, (γ, n) cross section data were obtained for the astrophysically important nucleus 162Er. Moreover, theoretical calculations have been performed using the TALYS 1.6 code. The effect of the gamma-ray strength function on the cross section calculations has been studied. A satisfactory reproduction of the available experimental photonuclear cross section data in the energy region below 20 MeV could be achieved. The photon flux was monitored by measuring the photon yield from seven well-known (γ, n) reactions, from the threshold energy of each reaction up to the end-point energy of the photon beam used. An integrated cross section of 87 ± 14 mb is calculated for the photonuclear reaction 162Er(γ, n) at the energy 9.2-14 MeV. The effective cross section estimated using the TALYS code ranges between 89 and 96 mb depending on the γ-strength function used. To validate the method for the estimation of the average cross-section data of the 162Er(γ, n) reaction, the same procedure was performed to calculate the average cross-section data of the 197Au(γ, n) and 55Mn(γ, n) reactions. In this case, the photon yield from the remaining well-known (γ, n) reactions was used to monitor the photon flux. The results for 162Er(γ, n), 197Au(γ, n) and 55Mn(γ, n) are found to be in good agreement with the theoretical values obtained by TALYS 1.6. The present indirect process could thus be a valuable tool to estimate the effective cross section of the (γ, n) reaction for various isotopes using bremsstrahlung beams.
A new approach for Bayesian model averaging
Institute of Scientific and Technical Information of China (English)
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one, and then use a limited-memory quasi-Newtonian algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
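For orientation, the EM baseline that BMA-BFGS replaces can be sketched on a toy two-model ensemble (synthetic data and fixed per-model variances for brevity; the paper additionally estimates the variances and uses real soil moisture simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BMA setup: observations y, two "model" forecasts, fixed variances s2.
# The BMA mixture is p(y) = w1*N(y; f1, s2_1) + w2*N(y; f2, s2_2).
T = 2000
y = rng.normal(0.0, 1.0, T)
forecasts = np.stack([y + rng.normal(0.0, 0.5, T),   # informative model
                      rng.normal(0.0, 1.0, T)])      # uninformative model
s2 = np.array([0.5 ** 2, 1.0 ** 2])

# Per-observation Gaussian densities for each model (constant across EM steps)
dens = (np.exp(-(y - forecasts) ** 2 / (2 * s2[:, None]))
        / np.sqrt(2 * np.pi * s2[:, None]))

w = np.array([0.5, 0.5])
for _ in range(100):
    r = w[:, None] * dens       # E-step: responsibilities
    r /= r.sum(axis=0)
    w = r.mean(axis=1)          # M-step: weights = average responsibilities
```

EM enforces the sum-to-one constraint by construction; the paper's modification drops that constraint from the likelihood so that an unconstrained quasi-Newton (L-BFGS-type) solver can be applied instead of this fixed-point iteration.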
Regulations and Procedures Manual
Energy Technology Data Exchange (ETDEWEB)
Young, Lydia J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2011-07-25
The purpose of the Regulations and Procedures Manual (RPM) is to provide LBNL personnel with a reference to University and Lawrence Berkeley National Laboratory (LBNL or Laboratory) policies and regulations by outlining normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory organizations. Much of the information in this manual has been condensed from detail provided in LBNL procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. RPM sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the LBNL organization responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which organization is responsible for a policy, please contact Requirements Manager Lydia Young or the RPM Editor.
Regulations and Procedures Manual
Energy Technology Data Exchange (ETDEWEB)
Young, Lydia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2010-09-30
The purpose of the Regulations and Procedures Manual (RPM) is to provide Laboratory personnel with a reference to University and Lawrence Berkeley National Laboratory policies and regulations by outlining the normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory departments. Much of the information in this manual has been condensed from detail provided in Laboratory procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. The sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the department responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which department should be called, please contact the Associate Laboratory Director of Operations.
Studies into the averaging problem: Macroscopic gravity and precision cosmology
Wijenayake, Tharake S.
2016-08-01
With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and take into consideration nonlinear and relativistic corrections which may affect it at percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks to explore nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but the small scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. Thus, for the same matter content, the growth rate of large scale structure in the MG model
Low Average Sidelobe Slot Array Antennas for Radiometer Applications
Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.
2012-01-01
In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow-band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed, starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full-wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by a genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E
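The Monte Carlo tolerance step can be illustrated for a generic uniform linear array with random amplitude and phase errors (a simplified stand-in for the paper's planar slot array; the taper, error levels, and beam mask below are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 32, 0.5                        # elements, spacing in wavelengths
a0 = np.hamming(n)                    # nominal low-sidelobe amplitude taper
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
u = 2 * np.pi * d * np.sin(theta)
steer = np.exp(1j * np.outer(u, np.arange(n)))   # array-factor phase terms

def peak_sidelobe_db(a):
    p = np.abs(steer @ a) ** 2
    p /= p.max()
    mask = np.abs(theta) > np.deg2rad(10)        # crude main-beam exclusion
    return 10 * np.log10(p[mask].max())

nominal = peak_sidelobe_db(a0)
trials = []
for _ in range(200):
    # 5% amplitude and 0.05 rad phase errors per element per realization
    err = (1 + 0.05 * rng.normal(size=n)) * np.exp(1j * 0.05 * rng.normal(size=n))
    trials.append(peak_sidelobe_db(a0 * err))
avg_degraded = float(np.mean(trials))   # sidelobes rise above the nominal level
```

The gap between `avg_degraded` and `nominal` is the kind of statistical design margin the procedure checks before accepting an architecture.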
Dynamic Testing and Test Anxiety amongst Gifted and Average-Ability Children
Vogelaar, Bart; Bakker, Merel; Elliott, Julian G.; Resing, Wilma C. M.
2017-01-01
Background: Dynamic testing has been proposed as a testing approach that is less disadvantageous for children who may be potentially subject to bias when undertaking conventional assessments. For example, those who encounter high levels of test anxiety, or who are unfamiliar with standardized test procedures, may fail to demonstrate their true…
Safety analysis procedures for PHWR
Energy Technology Data Exchange (ETDEWEB)
Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong
2004-03-01
The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements such as the public dose, the integrity of fuel and fuel channel, the integrity of containment and reactor structures, etc. However, there are no comprehensive and systematic procedures for safety analyses of CANDU reactors in Korea. In this regard, the development of safety analysis procedures for CANDU reactors is being conducted not only to establish the safety analysis system, but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures of the deterministic safety analyses are developed. The general safety procedures cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures of the safety analyses are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, 4.
Monegaglia, Federico; Henshaw, Alex; Zolezzi, Guido; Tubino, Marco
2016-04-01
Planform development of evolving meander bends is a beautiful and complex dynamic phenomenon, controlled by the interplay among hydrodynamics, sediments and floodplain characteristics. In the past decades, morphodynamic models of river meandering have provided a thorough understanding of the unit physical processes interacting at the reach scale during meander planform evolution. On the other hand, recent years have seen advances in satellite geosciences able to provide data with increasing resolution and earth coverage, which are becoming an important tool for studying and managing river systems. Analyses of the planform development of meandering rivers through Landsat satellite imagery have been provided in very recent works. Methodologies for the objective and automatic extraction of key river development metrics from multi-temporal satellite images have been proposed, though often limited to the extraction of channel centerlines and not always able to yield quantitative data on channel width, migration rates and bed morphology. Overcoming this gap would be a major step forward in integrating morphodynamic theories, models and real-world data for an increased understanding of meandering river dynamics. To fill this gap, a novel automatic procedure for extracting and analyzing the topography and planform dynamics of meandering rivers through time from satellite images is implemented. A robust algorithm able to compute the channel centerline in complex contexts, such as the presence of channel bifurcations and anabranching structures, is used. As a case study, the procedure is applied to the Landsat database for a reach of the well-known case of Rio Beni, a large, suspended-load dominated, tropical meandering river flowing through the Bolivian Amazon Basin. The reach-averaged evolution of single bends along Rio Beni over a 30-year period is analyzed, in terms of bend amplification rates computed according to the local centerline migration rate. A
Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente
2011-01-01
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
Near-elastic vibro-impact analysis by discontinuous transformations and averaging
DEFF Research Database (Denmark)
Thomsen, Jon Juel; Fidlin, Alexander
2008-01-01
We show how near-elastic vibro-impact problems, linear or nonlinear in-between impacts, can be conveniently analyzed by a discontinuity-reducing transformation of variables combined with an extended averaging procedure. A general technique for this is presented, and illustrated by calculating tra...
Interpreting Sky-Averaged 21-cm Measurements
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, and (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
40 CFR 98.335 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b)...
NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design
Borcherdt, Roger D.
2015-01-01
Site coefficients corresponding to those in tables 11.4–1 and 11.4–2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response MCER maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.
Risk-sensitive reinforcement learning algorithms with generalized average criterion
Institute of Scientific and Technical Information of China (English)
YIN Chang-ming; WANG Han-xing; ZHAO Fei
2007-01-01
A new algorithm is proposed, which sacrifices some optimality of the control policies in order to obtain robustness of the solutions. Robustness of solutions may become a very important property for a learning system when there is a mismatch between the theoretical model and the practical physical system, when the practical system is not static, or when the availability of a control action changes over time. The main contribution is that a set of approximation algorithms and their convergence results are given. A generalized average operator, instead of the usual optimal operator max (or min), is applied to study a class of important learning algorithms and dynamic programming algorithms, and their convergence is discussed from a theoretical point of view. The purpose of this research is to improve the robustness of reinforcement learning algorithms theoretically.
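A common concrete instance of replacing the optimal operator max with a generalized average is the log-sum-exp (risk-sensitive) operator; the sketch below illustrates that idea and is not necessarily the authors' exact operator.

```python
import numpy as np

def soft_bellman_backup(q_values, beta):
    """Generalized average of action values: log-sum-exp with inverse
    temperature beta. Recovers max(q) as beta -> infinity and the plain
    average as beta -> 0. An illustrative stand-in for the paper's
    generalized average operator, which may differ."""
    q = np.asarray(q_values, dtype=float)
    m = q.max()                      # stabilize the exponentials
    return m + np.log(np.mean(np.exp(beta * (q - m)))) / beta

q = [1.0, 2.0, 4.0]
print(soft_bellman_backup(q, 0.01))   # near the plain average
print(soft_bellman_backup(q, 100.0))  # near the max
```

Interpolating between the average and the max in this way is one standard route to trading optimality for robustness in dynamic programming backups.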
Analytic continuation average spectrum method for transport in quantum liquids
Energy Technology Data Exchange (ETDEWEB)
Kletenik-Edelman, Orly [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Rabani, Eran, E-mail: rabani@tau.ac.il [School of Chemistry, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978 (Israel); Reichman, David R. [Department of Chemistry, Columbia University, 3000 Broadway, New York, NY 10027 (United States)
2010-05-12
Recently, we have applied the analytic continuation averaged spectrum method (ASM) to calculate collective density fluctuations in quantum liquids. Unlike the maximum entropy (MaxEnt) method, the ASM approach is capable of revealing resolved modes in the dynamic structure factor in agreement with experiments. In this work we further develop the ASM to study single-particle dynamics in quantum liquids with dynamical susceptibilities that are characterized by a smooth spectrum. Surprisingly, we find that for the power spectrum of the velocity autocorrelation function there are pronounced differences in comparison with the MaxEnt approach, even for this simple case of smooth unimodal dynamic response. We show that for liquid para-hydrogen the ASM is closer to the centroid molecular dynamics (CMD) result, while for normal liquid helium it agrees better with the quantum mode coupling theory (QMCT) and with the MaxEnt approach.
Spatial Games Based on Pursuing the Highest Average Payoff
Institute of Scientific and Technical Information of China (English)
YANG Han-Xin; WANG Bing-Hong; WANG Wen-Xu; RONG Zhi-Hai
2008-01-01
We propose a strategy updating mechanism based on pursuing the highest average payoff to investigate the prisoner's dilemma game and the snowdrift game. We apply the new rule to investigate cooperative behaviour on regular, small-world, and scale-free networks, and find that spatial structure can maintain cooperation for the prisoner's dilemma game. In the snowdrift game, spatial structure can inhibit or promote cooperative behaviour, depending on the payoff parameter. We further study cooperative behaviour on scale-free networks in detail. Interestingly, non-monotonic behaviour is observed on scale-free networks, where middle-degree individuals have the lowest cooperation level. We also find that large-degree individuals change their strategies more frequently for both games.
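The update rule can be illustrated on a small graph: each player adopts the strategy of the neighbor (or keeps its own) whose average payoff, i.e. total payoff divided by degree, is highest. The weak-PD payoff parameterization and ring topology below are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def pd_payoff(s1, s2, b=1.5):
    """Weak prisoner's dilemma: 1 = cooperate, 0 = defect;
    b is the temptation to defect (assumed value)."""
    if s1 == 1:
        return 1.0 if s2 == 1 else 0.0
    return b if s2 == 1 else 0.0

def update(strategies, neighbors, b=1.5):
    """One synchronous step of the highest-average-payoff rule."""
    payoff = np.zeros(len(strategies))
    for i, nbrs in enumerate(neighbors):
        payoff[i] = sum(pd_payoff(strategies[i], strategies[j], b) for j in nbrs)
    avg = payoff / np.array([len(n) for n in neighbors])  # payoff per game
    new = strategies.copy()
    for i, nbrs in enumerate(neighbors):
        candidates = [i] + list(nbrs)
        best = max(candidates, key=lambda k: avg[k])      # imitate the best
        new[i] = strategies[best]
    return new

# ring of 8 players, one defector invading a ring of cooperators
neighbors = [((i - 1) % 8, (i + 1) % 8) for i in range(8)]
strategies = np.ones(8, dtype=int)
strategies[0] = 0
print(update(strategies, neighbors, b=1.5))
```

On this ring the lone defector out-earns its neighbors per game, so its two neighbors imitate it in the first step, which is the kind of local dynamics the abstract studies on larger networks.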
The average rate of change for continuous time models.
Kelley, Ken
2009-05-01
The average rate of change (ARC) is a concept that has been misunderstood in the applied longitudinal data analysis literature, where the slope from the straight-line change model is often thought of as though it were the ARC. The present article clarifies the concept of ARC and shows unequivocally the mathematical definition and meaning of ARC when measurement is continuous across time. It is shown that the slope from the straight-line change model generally is not equal to the ARC. General equations are presented for two measures of discrepancy when the slope from the straight-line change model is used to estimate the ARC in the case of continuous time for any model linear in its parameters, and for three useful models nonlinear in their parameters.
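The distinction the article draws is easy to demonstrate numerically: for a nonlinear trajectory, the OLS straight-line slope and the ARC (the change in the function divided by the change in time) differ. The exponential example below is illustrative and not taken from the article.

```python
import numpy as np

# ARC vs straight-line (OLS) slope for f(t) = exp(t) on [0, 1]
# (dense sampling approximates the continuous-time case).
t = np.linspace(0.0, 1.0, 1001)
f = np.exp(t)

arc = (f[-1] - f[0]) / (t[-1] - t[0])   # ARC = e - 1 ~ 1.718
slope = np.polyfit(t, f, 1)[0]          # OLS slope ~ 1.690

print(arc, slope)   # the two quantities differ
```

For a model linear in time the two coincide; the discrepancy here is exactly the kind of quantity for which the article derives general expressions.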
HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring; David I. Gertman; Katya Le Blanc
2011-09-01
This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.
Predictive RANS simulations via Bayesian Model-Scenario Averaging
Energy Technology Data Exchange (ETDEWEB)
Edeling, W.N., E-mail: W.N.Edeling@tudelft.nl [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands); Cinnella, P., E-mail: P.Cinnella@ensam.eu [Arts et Métiers ParisTech, DynFluid laboratory, 151 Boulevard de l' Hospital, 75013 Paris (France); Dwight, R.P., E-mail: R.P.Dwight@tudelft.nl [Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 2, Delft (Netherlands)
2014-10-15
The turbulence closure model is the dominant source of error in most Reynolds-Averaged Navier–Stokes simulations, yet no reliable estimators for this error component currently exist. Here we develop a stochastic, a posteriori error estimate, calibrated to specific classes of flow. It is based on variability in model closure coefficients across multiple flow scenarios, for multiple closure models. The variability is estimated using Bayesian calibration against experimental data for each scenario, and Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors, to obtain a stochastic estimate of a Quantity of Interest (QoI) in an unmeasured (prediction) scenario. The scenario probabilities in BMSA are chosen using a sensor which automatically weights those scenarios in the calibration set which are similar to the prediction scenario. The methodology is applied to the class of turbulent boundary-layers subject to various pressure gradients. For all considered prediction scenarios the standard-deviation of the stochastic estimate is consistent with the measurement ground truth. Furthermore, the mean of the estimate is more consistently accurate than the individual model predictions.
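The collation step in BMSA has the flavour of Bayesian model averaging; the minimal sketch below uses hypothetical numbers (not from the paper) to combine per-model/scenario posterior predictive means and variances, weighted by scenario probabilities, via the law of total variance.

```python
import numpy as np

# Hypothetical QoI estimates from three calibrated model/scenario pairs
means = np.array([0.9, 1.1, 1.4])
variances = np.array([0.04, 0.02, 0.09])
weights = np.array([0.5, 0.3, 0.2])    # scenario probabilities (assumed)
weights = weights / weights.sum()

mixture_mean = weights @ means
# law of total variance: within-model spread + between-model spread
mixture_var = weights @ variances + weights @ (means - mixture_mean) ** 2
print(mixture_mean, mixture_var ** 0.5)
```

The between-model term is what lets the averaged estimate carry a standard deviation large enough to cover the measurement ground truth, as reported in the abstract.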
Capacity Achieving Modulation for Fixed Constellations with Average Power Constraint
Bocherer, Georg; Mathar, Rudolf
2010-01-01
The capacity achieving probability mass function (PMF) of a finite signal constellation with an average power constraint is in most cases non-uniform. A common approach to generate non-uniform input PMFs is Huffman shaping, which consists of first approximating the capacity achieving PMF by a sampled Gaussian density and then to calculate the Huffman code of the sampled Gaussian density. The Huffman code is then used as a prefix-free modulation code. This approach showed good results in practice, can however lead to a significant gap to capacity. In this work, a method is proposed that efficiently constructs optimal prefix-free modulation codes for any finite signal constellation with average power constraint in additive noise. The proposed codes operate as close to capacity as desired. The major part of this work elaborates an analytical proof of this property. The proposed method is applied to 64-QAM in AWGN and numeric results are given, which show that, opposed to Huffman shaping, by using the proposed me...
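The Huffman-shaping baseline described above can be sketched as follows (the paper's own construction is different and provably near-capacity; the constellation and parameters here are illustrative): sample a Gaussian over the constellation points, build a Huffman code for that PMF, and read off the dyadic PMF 2**(-length) that the prefix-free modulation code actually induces.

```python
import heapq
import math

def huffman_lengths(pmf):
    """Codeword length per symbol via Huffman's algorithm."""
    heap = [(p, i, (i,)) for i, p in enumerate(pmf)]
    heapq.heapify(heap)
    lengths = [0] * len(pmf)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, i2, s2 = heapq.heappop(heap)
        for s in s1 + s2:              # every merged symbol gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, i2, s1 + s2))
    return lengths

points = [-3, -1, 1, 3]                # 4-ASK amplitudes (illustrative)
gauss = [math.exp(-x * x / 4.0) for x in points]
pmf = [g / sum(gauss) for g in gauss]  # sampled-Gaussian target PMF
lengths = huffman_lengths(pmf)
dyadic = [2.0 ** -l for l in lengths]  # PMF induced by the prefix-free code
print(pmf, lengths, dyadic)
```

The gap to capacity criticized in the abstract comes precisely from the mismatch between the target PMF and the dyadic PMF the Huffman code induces.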
Yearly average performance of the principal solar collector types
Energy Technology Data Exchange (ETDEWEB)
Rabl, A.
1981-01-01
The results of hour-by-hour simulations for 26 meteorological stations are used to derive universal correlations for the yearly total energy that can be delivered by the principal solar collector types: flat plate, evacuated tubes, CPC, single- and dual-axis tracking collectors, and central receiver. The correlations are first- and second-order polynomials in yearly average insolation, latitude, and threshold (= heat loss/optical efficiency). With these correlations, the yearly collectible energy can be found by multiplying the coordinates of a single graph by the collector parameters, which reproduces the results of hour-by-hour simulations with an accuracy (rms error) of 2% for flat plates and 2% to 4% for concentrators. This method can be applied to collectors that operate year-round in such a way that no collected energy is discarded, including photovoltaic systems, solar-augmented industrial process heat systems, and solar thermal power systems. The method is also recommended for rating collectors of different type or manufacturer by yearly average performance, evaluating the effects of collector degradation, the benefits of collector cleaning, and the gains from collector improvements (due to enhanced optical efficiency or decreased heat loss per absorber surface). For most of these applications, the method is accurate enough to replace a system simulation.
Hearing Office Average Processing Time Ranking Report, February 2016
Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...
New procedure for departure formalities
HR & GS Departments
2011-01-01
As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
Directory of Open Access Journals (Sweden)
Abbas Mahmoudabadi
2013-03-01
Full Text Available In the present paper, an effective procedure is proposed to determine the best location(s) for installing Weigh in Motion (WIM) systems. The main objective is to determine locations for best performance, defined as maximizing the number of trucks whose axle loads are checked exactly once while minimizing unnecessary actions. The method consists of two main stages: solving the shortest-path problem and selecting the best location(s) for installing the WIM(s). A mathematical model has also been developed to formulate the objective function. The number of once-checked trucks, the number of unnecessary actions and the average installation costs are defined as criteria measures. The proposed procedure was applied to a road network using experimental data, and the results were compared with the usual methods of locating enforcement facilities. It is concluded that the proposed procedure is more efficient than the traditional methods and local experts' points of view.
DEFF Research Database (Denmark)
Litvan, Héctor; Jensen, Erik W; Galan, Josefina;
2002-01-01
The extraction of the middle latency auditory evoked potentials (MLAEP) is usually done by moving time averaging (MTA) over many sweeps (often 250-1,000), which could produce a delay of more than 1 min. This problem was addressed by applying an autoregressive model with exogenous input (ARX) that...
Directory of Open Access Journals (Sweden)
Wei-Ru Chen
2003-12-01
Full Text Available Through the application of information and communication technology, digital media design can be carried out by virtual teams, integrating geographically dispersed professionals from different fields to jointly complete design tasks. This study conducted case interviews with digital media design teams in industry that currently apply virtual collaboration, examining how design teams carry out design activities virtually, summarizing the communication strategies and collaboration procedures of virtual teams, and analyzing the strengths and weaknesses of virtual collaboration for digital media design teams. The results show that there is indeed a need for virtual teamwork, but its necessity and benefits should be weighed along three main dimensions: (1) the goals of team building and the member structure; (2) the tools and communication information used to connect the team; and (3) the team's design tasks and the virtual collaboration procedure. The development of information communication technology enables a digital media design project to apply a virtual team to its communication strategy and procedure for collaboration. This study discussed the needs for building a virtual team for digital media design, and how it works. The researchers explored 4 cases to examine the problems faced by each team in the design process. The findings of this study showed that the concept of the virtual team applied to digital media design is valid and effective. However, successful virtual teamwork requires the following conditions: 1. a well-defined team target and healthy member structure; 2. proper communication tools and design information; and 3. a well-organized procedure for collaboration.
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online data driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges and revisions are version controlled. The procedures run on a server that is platform independent of the user workstations that the server interfaces with and the user interface supports diverse procedural views.
7 CFR 51.577 - Average midrib length.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...
7 CFR 760.640 - National average market price.
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...
Chen, Zengbao; Chen, Xiaohong; Wang, Yanghua; Li, Jingye
2014-02-01
Reliable Q estimation is desirable for model-based inverse Q filtering to improve seismic resolution. On the one hand, conventional methods estimate Q from the amplitude spectra or frequency variations of individual wavelets at different depth (or time) levels, which is vulnerable to the effects of spectral interference and ambient noise. On the other hand, most inverse Q filtering algorithms are sensitive to noise and must avoid boosting it, sometimes at the expense of a degraded compensation effect. In this paper, average-Q values are obtained from reflection seismic data based on the Gabor transform spectrum of a seismic trace. We transform the 2-D time-variant frequency spectrum into a 1-D spectrum, and then estimate the average-Q values based on the amplitude attenuation and compensation functions, respectively. Driven by the estimated average-Q model, we also develop a modified inverse Q filtering algorithm by incorporating a time-variant bandpass filter (TVBF), whose high cut-off frequency follows a hyperbola along the traveltime from a specified time. Finally, we test this modified inverse Q filtering algorithm on synthetic data and perform the Q estimation procedure on real reflection seismic data, followed by applying the modified inverse Q filtering algorithm. The synthetic data test and the real data example demonstrate that the algorithm driven by the average-Q model may enhance the seismic resolution without degrading the signal-to-noise ratio.
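The average-Q relation underlying such estimation can be illustrated with a synthetic spectral-ratio example, assuming the standard attenuation model A(f, t) = A0(f) exp(-pi f t / Q). This is a sketch of the underlying physics, not the paper's Gabor-spectrum procedure; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.linspace(10.0, 60.0, 51)        # frequency axis, Hz
t1, t2, q_true = 0.5, 1.5, 80.0        # two traveltimes and the true Q
a0 = np.exp(-((f - 35.0) / 20.0) ** 2) # assumed source spectrum

# amplitude spectra at the two times, with 1% noise on the later one
a1 = a0 * np.exp(-np.pi * f * t1 / q_true)
a2 = a0 * np.exp(-np.pi * f * t2 / q_true) \
        * (1 + 0.01 * rng.standard_normal(f.size))

# the log spectral ratio is linear in f with slope -pi*(t2-t1)/Q,
# so a straight-line fit recovers the average Q between t1 and t2
slope = np.polyfit(f, np.log(a2 / a1), 1)[0]
q_est = -np.pi * (t2 - t1) / slope
print(q_est)
```

The source spectrum cancels in the ratio, which is why spectral-ratio-type estimates are attractive; the sensitivity to noise mentioned in the abstract enters through the fitted slope.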
Kinetic energy equations for the average-passage equation system
Johnson, Richard W.; Adamczyk, John J.
1989-01-01
Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.
A. A. Burbelko; J. Początek; M. Królikowski
2013-01-01
The study presents a mathematical model of the crystallisation of nodular graphite cast iron. The proposed model is based on micro- and macromodels, in which heat flow is analysed at the macro level, while the micro level is used for modelling the diffusion of elements. The use of an elementary diffusion field in the shape of an averaged Voronoi polyhedron [AVP] was proposed. To determine the geometry of the averaged Voronoi polyhedron, Kolmogorov's statistical theory of crystallisation was applied....
Conflict among Testing Procedures?
1982-04-01
CONFLICT AMONG TESTING PROCEDURES? Daniel F. Kohler, April 1982. The Rand Paper Series, P-6765. Papers are issued by... 1. Introduction. Savin [1976] and Berndt and Savin [1977]
The average crossing number of equilateral random polygons
Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.
2003-11-01
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number \langle ACN\rangle of all equilateral random walks of length n is of the form \frac{3}{16} n \ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the \langle ACN({\cal K})\rangle for each knot type \cal K can be described by a function of the form \langle ACN({\cal K})\rangle = a(n-n_0)\ln(n-n_0) + b(n-n_0) + c, where a, b and c are constants depending on \cal K and n_0 is the minimal number of segments required to form \cal K. The \langle ACN({\cal K})\rangle profiles diverge from each other, with more complex knots showing higher \langle ACN({\cal K})\rangle than less complex knots. Moreover, the \langle ACN({\cal K})\rangle profiles intersect with the \langle ACN\rangle profile of all closed walks. These points of intersection define the equilibrium length of \cal K, i.e., the chain length n_e({\cal K}) at which a statistical ensemble of configurations with given knot type \cal K (upon cutting, equilibration and reclosure to a new knot type \cal K^\prime) does not show a tendency to increase or decrease \langle ACN({\cal K^\prime})\rangle. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration \langle R_g\rangle.
JanjiĆ, NataŠa J; Kapor, Darko V; Doder, Dragan V; Doder, Radoslava Z; SaviĆ, Biljana V
2014-12-01
Temporal patterns of running velocity are of profound interest to coaches and researchers involved in sprint racing. In this study, we applied a nonhomogeneous differential equation for motion with a resistance force proportional to the velocity to determine the instantaneous velocity and the instantaneous and average acceleration in the 100-m sprint discipline. Results obtained for the instantaneous velocity using the presented model indicate good agreement with directly measured values, which is a good verification of the proposed procedure. To perform a comprehensive analysis of the applicability of the results, the harmonic canon of running for the 100-m sprint discipline was formed. Using data obtained by measuring split times for segments of the 100-m runs of the sprinters C. Lewis (1988), M. Greene (2001), and U. Bolt (2009), the method described yielded results that enable comparative analysis of the kinematical parameters for each sprinter. Further treatment allowed the derivation of the ideal harmonic velocity canon of running, which can be helpful to any coach in evaluating the results achieved at particular distances in this and other disciplines. The method described can be applied to the analysis of any race.
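The model named above, motion with a resistance force proportional to velocity, has the closed-form solution v(t) = v_max * (1 - exp(-t/tau)). The sketch below uses assumed, illustrative parameter values (not fitted to any particular sprinter) to compute the 100-m time, the final instantaneous velocity, and the average acceleration over the race.

```python
import math

v_max, tau = 12.0, 1.25          # assumed terminal velocity (m/s) and time constant (s)

def velocity(t):
    """Instantaneous velocity for resistance proportional to velocity."""
    return v_max * (1.0 - math.exp(-t / tau))

def distance(t):
    """Integral of velocity from 0 to t."""
    return v_max * (t - tau * (1.0 - math.exp(-t / tau)))

def accel(t):
    """Instantaneous acceleration, the derivative of velocity."""
    return (v_max / tau) * math.exp(-t / tau)

# bisection for the 100 m finishing time
lo, hi = 0.0, 20.0
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if distance(mid) < 100.0 else (lo, mid)
t100 = 0.5 * (lo + hi)

# average acceleration over the race is v(t100)/t100
print(t100, velocity(t100), velocity(t100) / t100)
```

Fitting v_max and tau to measured split times, rather than assuming them, is the step that corresponds to the verification against direct measurements described in the abstract.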
Averaging Tesseral Effects: Closed Form Relegation versus Expansions of Elliptic Motion
Directory of Open Access Journals (Sweden)
Martin Lara
2013-01-01
Full Text Available Longitude-dependent terms of the geopotential cause nonnegligible short-period effects in orbit propagation of artificial satellites. Hence, accurate analytical and semianalytical theories must cope with tesseral harmonics. Modern algorithms for dealing analytically with them allow for closed-form relegation. Nevertheless, current procedures for the relegation of tesseral effects from subsynchronous orbits are unavoidably related to orbit eccentricity, a key fact that is not sufficiently emphasized and that constrains application of this technique to small and moderate eccentricities. Comparisons with averaging procedures based on classical expansions of elliptic motion are carried out, and the pros and cons of each approach are discussed.
Readability of Special Education Procedural Safeguards
Mandic, Carmen Gomez; Rudd, Rima; Hehir, Thomas; Acevedo-Garcia, Dolores
2012-01-01
This study focused on literacy-related barriers to understanding the rights of students with disabilities and their parents within the special education system. SMOG readability scores were determined for procedural safeguards documents issued by all state departments of education. The average reading grade level was 16; 6% scored in the high…
Relationships between feeding behavior and average daily gain in cattle
Directory of Open Access Journals (Sweden)
Bruno Fagundes Cunha Lage
2013-12-01
Full Text Available Several studies have reported a relationship between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System ®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit of time at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD), being: high ADG (>mean + 1.0 SD), medium ADG (±1.0 SD from the mean) and low ADG (
Attenuation correction effects on SPECT/CT procedures: phantoms studies.
Oliveira, M L; Seren, M E G; Rocha, F C; Brunetto, S Q; Ramos, C D; Button, V L S N
2013-01-01
Attenuation correction is widely used in SPECT/CT (Single Photon Emission Computed Tomography) procedures, especially for imaging of the thorax region. Different compensation methods have been developed and introduced into clinical practice. Most of them use attenuation maps obtained using transmission scanning systems. However, this gives an extra dose of radiation to the patient. The purpose of this study was to identify when attenuation correction is really important during SPECT/CT procedures. For this purpose, we used a Jaszczak phantom and a phantom with three line sources, filled with technetium ((99m)-Tc), with scattering materials such as air, water and acrylic, in different detector configurations. Analytic and iterative reconstruction algorithms were applied to all acquired images, the latter with or without attenuation correction. We analyzed parameters such as eccentricity, contrast and spatial resolution in the images. The best reconstruction algorithm on average was the iterative one, for images with 128 × 128 and 64 × 64 matrices. The analytical algorithm was effective only in improving eccentricity in the 64 × 64 matrix and contrast in the 128 × 128 matrix with low statistics. Turning to routine clinical examinations, on average, for the 128 × 128 matrix and low-statistics counting, the best algorithm was the iterative one without attenuation correction, improving the three parameters analyzed by 150%; for the same matrix size but with high-statistics counting, the iterative algorithm with attenuation correction was 25% better than without correction. We can conclude that using the iterative algorithm with attenuation correction in water, with the extra dose it entails, is not justified for procedures with low-statistics counting, being relevant only if the intention is to prioritize contrast in acquisitions with high-statistics counting.
Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...
Average annual runoff in the United States, 1951-80
U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980. Surface runoff Average runoff Surface waters United States
Average American 15 Pounds Heavier Than 20 Years Ago
... since the late 1980s and early 1990s, the average American has put on 15 or more additional ...
Analytic continuation by averaging Padé approximants
Schött, Johan; Locht, Inka L. M.; Lundin, Elin; Grânäs, Oscar; Eriksson, Olle; Di Marco, Igor
2016-02-01
The ill-posed analytic continuation problem for Green's functions and self-energies is investigated by revisiting the Padé approximants technique. We propose to remedy the well-known problems of the Padé approximants by performing an average of several continuations, obtained by varying the number of fitted input points and Padé coefficients independently. The suggested approach is then applied to several test cases, including Sm and Pr atomic self-energies, the Green's functions of the Hubbard model for a Bethe lattice and of the Haldane model for a nanoribbon, as well as two special test functions. The sensitivity to numerical noise and the dependence on the precision of the numerical libraries are analyzed in detail. The present approach is compared to a number of other techniques, i.e., the nonnegative least-squares method, the nonnegative Tikhonov method, and the maximum entropy method, and is shown to perform well for the chosen test cases. This conclusion holds even when the noise on the input data is increased to reach values typical for quantum Monte Carlo simulations. The ability of the algorithm to resolve fine structures is finally illustrated for two relevant test functions.
Trait valence and the better-than-average effect.
Gold, Ron S; Brown, Mark G
2011-12-01
People tend to regard themselves as having superior personality traits compared to their average peer. To test whether this "better-than-average effect" varies with trait valence, participants (N = 154 students) rated both themselves and the average student on traits constituting either positive or negative poles of five trait dimensions. In each case, the better-than-average effect was found, but trait valence had no effect. Results were discussed in terms of Kahneman and Tversky's prospect theory.
Investigating Averaging Effect by Using Three Dimension Spectrum
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
The eddy current displacement sensor's averaging effect has been investigated in this paper, and the frequency spectrum property of the averaging effect is also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotation error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, and thus measuring precision is improved.
Average of Distribution and Remarks on Box-Splines
Institute of Scientific and Technical Information of China (English)
LI Yue-sheng
2001-01-01
A class of generalized moving average operators is introduced, and integral representations of an average function are provided. It has been shown that the average of the Dirac δ-distribution is just the well-known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computation of box-splines and their linear combinations follows.
Averaging and Globalising Quotients of Informetric and Scientometric Data.
Egghe, Leo; Rousseau, Ronald
1996-01-01
Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
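The distinction this abstract draws, between an average of quotients and a global average obtained as a quotient of averages, can be made concrete with a small illustration (the journal figures below are invented, not from the paper):

```python
def average_of_quotients(citations, publications):
    """Mean of the per-journal impact quotients."""
    return sum(c / p for c, p in zip(citations, publications)) / len(citations)

def global_average(citations, publications):
    """Quotient of the totals: total citations over total publications."""
    return sum(citations) / sum(publications)

# Two hypothetical journals: a small high-impact one and a large low-impact one
cites = [100, 50]
pubs = [20, 100]
aq = average_of_quotients(cites, pubs)  # (5.0 + 0.5) / 2 = 2.75
ga = global_average(cites, pubs)        # 150 / 120 = 1.25
```

The two quantities differ whenever the denominators are unequal, which is exactly why the choice of averaging procedure matters for subject-category impact factors.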
76 FR 6161 - Annual Determination of Average Cost of Incarceration
2011-02-03
... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...
20 CFR 226.62 - Computing average monthly compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...
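The core computation in § 226.62, averaging the employee's highest 60 months of compensation, can be sketched as below. This is a simplified illustration only; the regulation's full procedure involves additional creditability rules not shown here.

```python
def average_monthly_compensation(monthly_amounts):
    """Simplified sketch: mean of the highest 60 months of compensation."""
    top60 = sorted(monthly_amounts, reverse=True)[:60]
    return sum(top60) / len(top60)

# Hypothetical career of 120 months: 60 months at $4,000, then 60 at $5,000
career = [4000.0] * 60 + [5000.0] * 60
amc = average_monthly_compensation(career)  # 5000.0
```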
Spectral averaging techniques for Jacobi matrices with matrix entries
Sadel, Christian
2009-01-01
A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure, which can potentially be useful for spectral analysis. Furthermore, another variant of spectral averaging over coupling constants for these operators is presented.
27 CFR 19.37 - Average effective tax rate.
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...
7 CFR 1410.44 - Average adjusted gross income.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....
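The per-radial averaging step quoted from § 80.759 (add the readings along each radial and divide by their number) amounts to a plain arithmetic mean. A minimal sketch, with hypothetical elevation readings for three of the eight 45-degree radials:

```python
def radial_average(readings):
    """Average the elevation readings along one radial: add them, divide by the count."""
    return sum(readings) / len(readings)

# Hypothetical readings (metres) keyed by radial azimuth in degrees
readings_by_azimuth = {
    0:  [120.0, 118.0, 121.0, 117.0],
    45: [110.0, 112.0, 111.0],
    90: [98.0, 102.0],
}
averages = {az: radial_average(r) for az, r in readings_by_azimuth.items()}
# averages[0] == 119.0, averages[45] == 111.0, averages[90] == 100.0
```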
34 CFR 668.196 - Average rates appeals.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
34 CFR 668.215 - Average rates appeals.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...
7 CFR 51.2548 - Average moisture content determination.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...
Vygotsky in applied neuropsychology
Directory of Open Access Journals (Sweden)
Glozman J. M.
2016-12-01
Full Text Available The aims of this paper are: (1) to show the role of clinical experience in the theoretical contributions of L.S. Vygotsky, and (2) to analyze the development of these theories in contemporary applied neuropsychology. An analysis of disturbances of mental functioning is impossible without a systemic approach to the evidence observed. Therefore, medical psychology is fundamental for forming a systemic approach to psychology. The assessment of neurological patients at the neurological hospital of Moscow University permitted L.S. Vygotsky to create, in collaboration with A.R. Luria, the theory of systemic dynamic localization of higher mental functions and their relationship to cultural conditions. In his studies of patients with Parkinson's disease, Vygotsky also set out three steps of systemic development: interpsychological, then extrapsychological, then intrapsychological. L.S. Vygotsky and A.R. Luria in the late 1920s created a program to compensate for the motor subcortical disturbances in Parkinson's disease (PD) through a cortical (visual) mediation of movements. We propose to distinguish the objective mediating factors — like teaching techniques and modalities — from subjective mediating factors, like the individual's internal representation of his/her own disease. The cultural-historical approach in contemporary neuropsychology forces neuropsychologists to re-analyze and re-interpret the classic neuropsychological syndromes; to develop new assessment procedures more in accordance with the patient's conditions of life; and to reconsider the concept of the social brain as a social and cultural determinant and regulator of brain functioning. L.S. Vygotsky and A.R. Luria proved that a defect interferes with a child's appropriation of his/her culture, but cultural means can help the child overcome the defect. In this way, the cultural-historical approach became, and still is, a methodological basis for remedial education.
40 CFR 98.385 - Procedures for estimating missing data.
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. You must follow the procedures for estimating missing data in § 98... estimating missing data for petroleum products in § 98.395 also applies to coal-to-liquid products....
40 CFR 401.13 - Test procedures for measurement.
2010-07-01
... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Test procedures for measurement. 401.13... AND STANDARDS GENERAL PROVISIONS § 401.13 Test procedures for measurement. The test procedures for measurement which are prescribed at part 136 of this chapter shall apply to expressions of pollutant...
Applied large eddy simulation.
Tucker, Paul G; Lardeau, Sylvain
2009-07-28
Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail has substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects were examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this, further highly sector-specific model parametrizations will be required, with clear thought given to the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain, it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS, but with much greater fidelity.
22 CFR 216.2 - Applicability of procedures.
2010-04-01
...) Institution building grants to research and educational institutions in the United States such as those... Applicability of procedures. (a) Scope. Except as provided in § 216.2(b), these procedures apply to all new... exclusions. (1) The following criteria have been applied in determining the classes of actions including...
Bathe, Klaus-Jürgen
2015-01-01
Finite element procedures are now an important and frequently indispensable part of engineering analyses and scientific investigations. This book focuses on finite element procedures that are very useful and are widely employed. Formulations for the linear and nonlinear analyses of solids and structures, fluids, and multiphysics problems are presented, appropriate finite elements are discussed, and solution techniques for the governing finite element equations are given. The book presents general, reliable, and effective procedures that are fundamental and can be expected to be in use for a long time. The given procedures also form the foundations of recent developments in the field.
2010-11-15
...; Withdrawal of Determination of Average Manufacturer Price, Multiple Source Drug Definition, and Upper Limits... ``Definitions'' was intended to apply to both AMP and best price calculations. While the Determination of AMP... Price (Sec. 447.505). Therefore, we see no need to withdraw the definition of bona fide service fees....
Compositional dependences of average positron lifetime in binary As-S/Se glasses
Energy Technology Data Exchange (ETDEWEB)
Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)
2012-02-15
The compositional dependence of the average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the evolution of the molar volume. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally intrinsic nanovoids in a glass.
Average optimization of the approximate solution of operator equations and its application
Institute of Scientific and Technical Information of China (English)
WANG; xinghua(王兴华); MA; Wan(马万)
2002-01-01
In this paper, a definition of the optimization of operator equations in the average case setting is given, and a general result (Theorem 1) about the relevant optimization problem is obtained. This result is applied to the optimization of the approximate solution of some classes of integral equations.
Actuator disk model of wind farms based on the rotor average wind speed
DEFF Research Database (Denmark)
Han, Xing Xing; Xu, Chang; Liu, De You;
2016-01-01
Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor-average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...
Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto
2014-02-15
A polarimetric method for the measurement of linear retardance in the presence of phase fluctuations is presented. This can be applied to electro-optic devices behaving as variable linear retarders. The method is based on an extended Mueller matrix model for the linear retarder containing the time-averaged effects of the instabilities. As a result, an averaged Stokes polarimetry technique is proposed to characterize both the retardance and its flicker magnitude. Predictive capability of the approach is experimentally demonstrated, validating the model and the calibration technique. The approach is applied to liquid crystal on silicon displays (LCoS) using a commercial Stokes polarimeter. Both the magnitude of the average retardance and the amplitude of its fluctuation are obtained for each gray level value addressed, thus enabling a complete phase characterization of the LCoS.
How ants use quorum sensing to estimate the average quality of a fluctuating resource.
Franks, Nigel R; Stuttard, Jonathan P; Doran, Carolina; Esposito, Julian C; Master, Maximillian C; Sendova-Franks, Ana B; Masuda, Naoki; Britton, Nicholas F
2015-07-08
We show that one of the advantages of quorum-based decision-making is an ability to estimate the average value of a resource that fluctuates in quality. By using a quorum threshold, namely the number of ants within a new nest site, to determine their choice, the ants are in effect voting with their feet. Our results show that such quorum sensing is compatible with homogenization theory such that the average value of a new nest site is determined by ants accumulating within it when the nest site is of high quality and leaving when it is poor. Hence, the ants can estimate a surprisingly accurate running average quality of a complex resource through the use of extraordinarily simple procedures.
Applied survival analysis using R
Moore, Dirk F
2016-01-01
Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...
Cardiac Procedures and Surgeries
Updated: Sep 16, 2016 ... If you've had ... degree of coronary artery disease (CAD) you have. Angioplasty, also known as percutaneous coronary intervention [PCI], ...
DEFF Research Database (Denmark)
Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher
2009-01-01
It is a persistent finding in psychology and experimental economics that people's behavior is not only shaped by outcomes but also by decision-making procedures. In this paper we develop a general framework capable of modelling these procedural concerns. Within the context of psychological games we...
DEFF Research Database (Denmark)
Werlauff, Erik
The book contains an up-to-date survey of Danish civil procedure after the profound Danish procedural reforms in 2007. It deals with questions concerning competence and function of Danish courts, commencement and preparation of civil cases, questions of evidence and burden of proof, international...
Vibrational resonance: a study with high-order word-series averaging
Murua, Ander
2016-01-01
We study a model problem describing vibrational resonance by means of a high-order averaging technique based on so-called word series. With the technique applied here, the tasks of constructing the averaged system and the associated change of variables are divided into two parts. It is first necessary to build recursively a set of so-called word basis functions and, after that, all the required manipulations involve only scalar coefficients that are computed by means of simple recursions. As distinct from the situation with other approaches, with word series, high-order averaged systems may be derived without having to compute the associated change of variables. In the system considered here, the construction of high-order averaged systems makes it possible to obtain very precise approximations to the true dynamics.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
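The two standard hourly-value types compared in this abstract, instantaneous "spot" measurements and simple 1-h "boxcar" averages built from 1-min data, can be sketched as follows (synthetic data, not observatory records):

```python
def spot_sample(minute_values, hour):
    """Instantaneous 'spot' hourly value: the single 1-min value at the top of the hour."""
    return minute_values[hour * 60]

def boxcar_average(minute_values, hour):
    """Simple 1-h 'boxcar' hourly value: mean of the 60 one-minute values in that hour."""
    window = minute_values[hour * 60 : (hour + 1) * 60]
    return sum(window) / len(window)

# Synthetic 2 h of 1-min data: 100 nT baseline with a 10-min, +10 nT excursion
data = [100.0] * 120
for i in range(30, 40):
    data[i] = 110.0

spot = spot_sample(data, 0)       # 100.0 -- the spot value misses the excursion
boxcar = boxcar_average(data, 0)  # about 101.67 -- the boxcar spreads it out
```

The example shows the trade-off the paper quantifies: spot values preserve the amplitude range but alias short-lived variation, while boxcar averages attenuate it.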
12 CFR 309.5 - Procedures for requesting records.
2010-01-01
... average rate for central processing unit operating costs and the operator's basic rate of pay plus 16... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Procedures for requesting records. 309.5 Section 309.5 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION PROCEDURE AND RULES OF...
Alexander, Jeremy P; Hopkinson, Trent L; Wundersitz, Daniel W T; Serpell, Benjamin G; Mara, Jocelyn K; Ball, Nick B
2016-11-01
Alexander, JP, Hopkinson, TL, Wundersitz, DWT, Serpell, BG, Mara, JK, and Ball, NB. Validity of a wearable accelerometer device to measure average acceleration values during high-speed running. J Strength Cond Res 30(11): 3007-3013, 2016-The aim of this study was to determine the validity of an accelerometer to measure average acceleration values during high-speed running. Thirteen subjects performed three sprint efforts over a 40-m distance (n = 39). Acceleration was measured using a 100-Hz triaxial accelerometer integrated within a wearable tracking device (SPI-HPU; GPSports). To provide a concurrent measure of acceleration, timing gates were positioned at 10-m intervals (0-40 m). Accelerometer data collected during 0-10 m and 10-20 m provided a measure of average acceleration values. Accelerometer data was recorded as the raw output and filtered by applying a 3-point moving average and a 10-point moving average. The accelerometer could not measure average acceleration values during high-speed running. The accelerometer significantly overestimated average acceleration values during both 0-10 m and 10-20 m, regardless of the data filtering technique (p < 0.001). Body mass significantly affected all accelerometer variables (p < 0.10, partial η = 0.091-0.219). Body mass and the absence of a gravity compensation formula affect the accuracy and practicality of accelerometers. Until GPSports-integrated accelerometers incorporate a gravity compensation formula, the usefulness of any accelerometer-derived algorithms is questionable.
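The 3-point and 10-point moving-average filtering applied to the raw accelerometer output can be sketched as below. This is a generic trailing moving average for illustration; the study does not specify its exact filter implementation.

```python
def moving_average(samples, window):
    """Trailing moving average: mean of the current and up to window-1 preceding samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical raw acceleration samples (m/s^2)
accel = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
smoothed3 = moving_average(accel, 3)   # 3-point moving average
smoothed10 = moving_average(accel, 10) # 10-point moving average (window exceeds data here)
```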
Averaging VMAT treatment plans for multi-criteria navigation
Craft, David; Unkelbach, Jan
2013-01-01
The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
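The linearity argument in this note, that averaging fluence maps averages the dose distributions because dose depends linearly on fluence, can be checked numerically with a toy dose-influence matrix (random values, purely illustrative):

```python
import random

# Hypothetical linear dose model: dose_i = sum_j D[i][j] * fluence[j]
random.seed(0)
D = [[random.random() for _ in range(4)] for _ in range(3)]  # 3 voxels, 4 beamlets

def dose(fluence):
    """Dose per voxel for a given fluence vector under the linear model."""
    return [sum(dij * fj for dij, fj in zip(row, fluence)) for row in D]

f1 = [1.0, 2.0, 0.0, 1.0]
f2 = [0.0, 1.0, 3.0, 2.0]
avg_fluence = [(a + b) / 2 for a, b in zip(f1, f2)]
avg_dose = [(a + b) / 2 for a, b in zip(dose(f1), dose(f2))]
# By linearity, dose(avg_fluence) equals avg_dose element-wise
```

For sliding-window VMAT, the note's contribution is precisely that a suitable averaging of leaf positions approximately reproduces this exact fluence-space relationship.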
Averaging and exact perturbations in LTB dust models
Sussman, Roberto A
2012-01-01
We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate-independent dynamical study of the models, providing as well valuable theoretical insight into the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We define also non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...
Measuring temperature rise during orthopaedic surgical procedures.
Manoogian, Sarah; Lee, Adam K; Widmaier, James C
2016-09-01
A reliable means for measuring temperatures generated during surgical procedures is needed to recommend best practices for inserting fixation devices and minimizing the risk of osteonecrosis. Twenty-four screw tests for three surgical procedures were conducted using four thermocouples in the bone and one thermocouple in the screw. The maximum temperature rise recorded from the thermocouple in the screw (92.7±8.9°C, 158.7±20.9°C, 204.4±35.2°C) was consistently higher than the average temperature rise recorded in the bone (31.8±9.3°C, 44.9±12.4°C, 77.3±12.7°C). The same overall trend between the temperatures that resulted from the three screw insertion procedures was recorded in significant statistical analyses using either the thermocouple in the screw or the average of several in-bone thermocouples. Placing a single thermocouple in the bone was determined to have limitations in accurately comparing temperatures from different external fixation screw insertion procedures. Using the preferred measurement techniques, a standard screw with a predrilled hole was found to have the lowest maximum temperatures for the shortest duration compared to the other two insertion procedures. Future studies evaluating bone temperature increase need to use reliable temperature measurements for recommending best practices to surgeons.
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
Final technical report, May 2014 (dates covered: Oct 2011 – Oct 2013); approved for public release, distribution unlimited. …precision parameter from the joint probability of the code matrix. For a fully loaded CDMA signal, the average likelihood depends exclusively on feature…
Distributed Weighted Parameter Averaging for SVM Training on Big Data
Das, Ayan; Bhattacharya, Sourangshu
2015-01-01
Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but loses accuracy as the number of partitions increases, while ADMM in the feature space is accurate but converges slowly. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on the parameters. The problem is shown to be the same as solving...
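The parameter-combination step can be sketched as follows (a toy illustration; the paper learns the weights by optimizing the regularized hinge loss, which is not shown here, and the data are hypothetical):

```python
import numpy as np

def weighted_parameter_average(W, alphas):
    """Combine per-partition linear-SVM weight vectors w_p with weights
    alpha_p (sketch of the WPA idea; plain parameter averaging is the
    special case alpha_p = 1/P)."""
    W = np.asarray(W, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()      # normalize to a convex combination
    return alphas @ W                   # weighted sum of parameter vectors

W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy per-partition parameters
plain = weighted_parameter_average(W, [1, 1, 1])        # ordinary averaging
wpa = weighted_parameter_average(W, [0.5, 0.25, 0.25])  # weighted averaging
```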
On the average crosscap number Ⅱ: Bounds for a graph
Institute of Scientific and Technical Information of China (English)
Yi-chao CHEN; Yan-pei LIU
2007-01-01
The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than ((2^{β(G)-1} − 1)/2^{β(G)-1})·β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.
A Comparison of Error Correction Procedures on Word Reading
Syrek, Andrea L.; Hixson, Micheal D.; Jacob, Susan; Morgan, Sandra
2007-01-01
The effectiveness and efficiency of two error correction procedures on word reading were compared. Three students with below average reading skills and one student with average reading skills were provided with weekly instruction on sets of 20 unknown words. Students' errors during instruction were followed by either word supply error correction…
Practical definition of averages of tensors in general relativity
Boero, Ezequiel F
2016-01-01
We present a definition of tensor fields which are averages of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields; this in turn yields a suitable and practical construction for studying averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation applies to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of the covariant derivative and the Lie derivative of the averaged tensors.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
DEFF Research Database (Denmark)
Hammar, Emil
Through the theories of play by Gadamer (2004) and Henricks (2006), I will show how the relationship between play and game can be understood as dialectic and disruptive, thus challenging understandings of how the procedures of games determine player activity and vice versa. As such, I posit some analytical consequences for understandings of digital games as procedurally fixed (Bogost, 2006; Flanagan, 2009; Brathwaite & Sharp, 2010). That is, if digital games are argued to be procedurally fixed and if play is an appropriative and dialectic activity, then it could be argued that the latter affects...
Variational theory of average-atom and superconfigurations in quantum plasmas.
Blenski, T; Cichocki, B
2007-05-01
Models of screened ions in equilibrium plasmas with all quantum electrons are important in opacity and equation of state calculations. Although such models have to be derived from variational principles, up to now existing models have not been fully variational. In this paper a fully variational theory respecting the virial theorem is proposed: all variables are variational except the parameters defining the equilibrium, i.e., the temperature T, the ion density n_i, and the atomic number Z. The theory is applied to the quasiclassical Thomas-Fermi (TF) atom, the quantum average atom (QAA), and the superconfigurations (SC) in plasmas. Both the self-consistent-field (SCF) equations for the electronic structure and the condition for the mean ionization Z* are found from minimization of a thermodynamic potential. This potential is constructed using the cluster expansion of the plasma free energy, from which the zero- and first-order terms are retained. In the zero order the free energy per ion is that of the quantum homogeneous plasma of an unknown free-electron density n_0 = Z* n_i occupying the volume 1/n_i. In the first order, ions submerged in this plasma are considered and local neutrality is assumed. These ions are considered in the infinite space without imposing the neutrality of the Wigner-Seitz (WS) cell. As in the Inferno model, a central cavity of a radius R is introduced; however, the value of R is unknown a priori. The charge density due to noncentral ions is zero inside the cavity and equals e·n_0 outside. The first-order contribution to the free energy per ion is the difference between the free energy of the system "central ion + infinite plasma" and the free energy of the system "infinite plasma." An important part of the approach is an "ionization model" (IM), which is a relation between the mean ionization charge Z* and the first-order structure variables. Both the IM and the local neutrality are respected in the minimization procedure. The correct IM in the TF case…
Hageman, Louis A
2004-01-01
This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a "red/black" block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and comp
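As a minimal illustration of one topic covered, the successive overrelaxation (SOR) iteration for Ax = b can be sketched in a few lines (a basic fixed-relaxation sketch, not the book's adaptive algorithm; the test system is hypothetical):

```python
import numpy as np

def sor(A, b, omega=1.5, iters=200, x0=None):
    """Successive overrelaxation for Ax = b (sketch; assumes a convergent
    setting, e.g. symmetric positive definite A and 0 < omega < 2)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    for _ in range(iters):
        for i in range(n):
            # sum of off-diagonal terms, using already-updated entries
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])   # 1-D Poisson-like SPD matrix
b = np.array([1.0, 2.0, 3.0])
x = sor(A, b)
```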
A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport
Directory of Open Access Journals (Sweden)
Gilberto Espinosa-Paredes
2012-01-01
In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume averaging equations are commonly applied to nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the averaging volume method, they fail in transitions of the flow patterns and at boundaries between a two-phase flow and a solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to non-local transport effects due to accumulation, convection, diffusion, and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume averaging equations fail.
Essays in Applied Microeconomics
Severnini, Edson Roberto
This dissertation consists of three studies analyzing causes and consequences of location decisions by economic agents in the U.S. In Chapter 1, I address the longstanding question of the extent to which the geographic clustering of economic activity may be attributable to agglomeration spillovers as opposed to natural advantages. I present evidence on this question using data on the long-run effects of large scale hydroelectric dams built in the U.S. over the 20th century, obtained through a unique comparison between counties with or without dams but with similar hydropower potential. Until mid-century, the availability of cheap local power from hydroelectric dams conveyed an important advantage that attracted industry and population. By the 1950s, however, these advantages were attenuated by improvements in the efficiency of thermal power generation and the advent of high tension transmission lines. Using a novel combination of synthetic control methods and event-study techniques, I show that, on average, dams built before 1950 had substantial short run effects on local population and employment growth, whereas those built after 1950 had no such effects. Moreover, the impact of pre-1950 dams persisted and continued to grow after the advantages of cheap local hydroelectricity were attenuated, suggesting the presence of important agglomeration spillovers. Over a 50 year horizon, I estimate that at least one half of the long run effect of pre-1950 dams is due to spillovers. The estimated short and long run effects are highly robust to alternative procedures for selecting synthetic controls, to controls for confounding factors such as proximity to transportation networks, and to alternative sample restrictions, such as dropping dams built by the Tennessee Valley Authority or removing control counties with environmental regulations. I also find small local agglomeration effects from smaller dam projects, and small spillovers to nearby locations from large dams. Lastly
Analytic computation of average energy of neutrons inducing fission
Energy Technology Data Exchange (ETDEWEB)
Clark, Alexander Rich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-12
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
Averaged EMG profiles in jogging and running at different speeds
Gazendam, Marnix G. J.; Hof, At L.
2007-01-01
EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking at 1.25-2.25 m s(-1) and running at 1.25-4.5 m s(-1). The EMGs were rectified, interpolated to 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in...
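The rectify → time-normalize → average pipeline described here is straightforward to sketch (a toy illustration with synthetic data, not the authors' processing chain; the function and parameter names are hypothetical):

```python
import numpy as np

def average_emg_profile(trials, n_points=100):
    """Average EMG profile: rectify each trial, resample it onto a common
    0-100% stride axis, then average across trials/subjects.
    `trials` is a list of 1-D arrays of possibly different lengths."""
    resampled = []
    for emg in trials:
        rectified = np.abs(emg)                    # full-wave rectification
        x_old = np.linspace(0.0, 1.0, len(rectified))
        x_new = np.linspace(0.0, 1.0, n_points)    # 100% of the stride
        resampled.append(np.interp(x_new, x_old, rectified))
    return np.mean(resampled, axis=0)

# two toy "strides" of different durations
profile = average_emg_profile([np.sin(np.linspace(0, np.pi, 90)),
                               np.sin(np.linspace(0, np.pi, 110))])
```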
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
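The regression trick alluded to in this note can be sketched as follows (an assumed reading: each mean arises as the coefficient of an intercept-only OLS fit after a suitable transformation of the data; the data are toy values):

```python
import numpy as np

def ols_constant(y):
    """OLS of y on a constant only: the fitted coefficient is the
    arithmetic mean of y."""
    X = np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta[0]

y = np.array([2.0, 4.0, 8.0])
arithmetic = ols_constant(y)                   # plain mean
geometric = np.exp(ols_constant(np.log(y)))    # regress log(y) on a constant
harmonic = 1.0 / ols_constant(1.0 / y)         # regress 1/y on a constant
```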
A Simple Geometrical Derivation of the Spatial Averaging Theorem.
Whitaker, Stephen
1985-01-01
The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
…in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control…
On the Average-Case Complexity of Shellsort
Vitányi, P.M.B.
2015-01-01
We prove a lower bound expressed in the increment sequence on the average-case complexity (number of inversions, which is proportional to the running time) of Shellsort. This lower bound is sharp in every case where it could be checked. We obtain new results, e.g. determining the average-case complexity…
Charging for computer usage with average cost pricing
Landau, K
1973-01-01
This preliminary report, which is mainly directed at commercial computer centres, gives an introduction to the application of average cost pricing when charging for the use of computer resources. A description of the cost structure of a computer installation shows the advantages and disadvantages of average cost pricing. This is complemented by a discussion of the different charging rates which are possible. (10 refs).
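The core arithmetic of average cost pricing is simple: one uniform rate that exactly recovers total cost. A minimal sketch (all figures hypothetical):

```python
def average_cost_rate(total_cost, total_cpu_hours):
    """Average-cost pricing: a single rate that exactly recovers the
    installation's total cost over total usage."""
    return total_cost / total_cpu_hours

def charge(user_cpu_hours, rate):
    """A user's bill is simply usage times the uniform average-cost rate."""
    return user_cpu_hours * rate

rate = average_cost_rate(120000.0, 8000.0)  # e.g. 120,000 over 8,000 CPU-hours
bill = charge(25.0, rate)                   # one user's 25 CPU-hours
```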
Average widths of anisotropic Besov-Wiener classes
Institute of Scientific and Technical Information of China (English)
蒋艳杰
2000-01-01
This paper concerns the problem of the average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1 ≤ q ≤ p < ∞). The weak asymptotic behavior is established for the corresponding quantities.
7 CFR 701.17 - Average adjusted gross income limitation.
2010-01-01
…9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part… (Title 7, Agriculture; …RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART; § 701.17 Average adjusted gross income limitation.)
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...
(Average-) convexity of common pool and oligopoly TU-games
Driessen, T.S.H.; Meinhardt, H.
2000-01-01
The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele
Remarks on the Lower Bounds for the Average Genus
Institute of Scientific and Technical Information of China (English)
Yi-chao Chen
2011-01-01
Let G be a graph of maximum degree at most four. By using the overlap matrix method introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound on the average genus in terms of girth is derived.
Delineating the Average Rate of Change in Longitudinal Models
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
2016-01-01
The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years and then removed. This procedure significantly improves quality of life and, in most cases, also improves cardiac performance. Previously, the modified Ravitch procedure was used with resection of cartilage and the use of posterior support. This article details the new modified Nuss procedure, which requires the use of shorter bars than specified by the original technique. This technique facilitates the operation as the bar may be guided manually through the chest wall and no additional stabilizing sutures are necessary. PMID:27747185
Hemodialysis access procedures
An access is needed for you to get hemodialysis. The access is where you receive hemodialysis. (MedlinePlus encyclopedia, //medlineplus.gov/ency/article/007641.htm)
Canalith Repositioning Procedure
The canalith repositioning procedure can help relieve benign paroxysmal positional vertigo (BPPV), a condition in which you have brief episodes of dizziness that occur when you move your head. Vertigo usually comes from a problem with the part...
How much does it cost? As a rule, almost all cosmetic surgery is considered "elective" and is not typically covered… premier specialty group representing dermatologists performing all procedures – cosmetic, general, …
Procedures for Sampling Vegetation
US Fish and Wildlife Service, Department of the Interior — This report outlines vegetation sampling procedures used on various refuges in Region 3. The importance of sampling the response of marsh vegetation to management...
Apply the Communicative Approach in Listening Class
Institute of Scientific and Technical Information of China (English)
Wang Changxue; Su Na
2014-01-01
Speaking and listening are two major obstacles in the process of our learning, and they are also among the most important abilities we should possess. The communicative approach aims at developing learners' communicative competence; thus applying the communicative approach in listening class is an effective part of the English teaching procedure.
A Primer on Disseminating Applied Quantitative Research
Bell, Bethany A.; DiStefano, Christine; Morgan, Grant B.
2010-01-01
Transparency and replication are essential features of scientific inquiry, yet scientific communications of applied quantitative research often lack much-needed procedural information. In an effort to promote researchers' dissemination of their quantitative studies in a cohesive, detailed, and informative manner, the authors delineate…
Average cross-responses in correlated financial markets
Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas
2016-09-01
There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Energy Technology Data Exchange (ETDEWEB)
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
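The sampling effect described here can be illustrated with a toy model (a made-up linear forcing kernel and diurnal AOD cycle, not TCAP data): dense temporal sampling recovers the 24-h mean, while a single-time sample can be off by roughly the diurnal amplitude.

```python
import numpy as np

hours = np.arange(24)
# toy diurnal aerosol optical depth with ~20% variation around the mean
aod = 0.15 * (1.0 + 0.2 * np.sin(2 * np.pi * hours / 24))
forcing = -25.0 * aod      # toy linear DARF kernel, W m^-2 per unit AOD

full_mean = forcing.mean()     # dense (hourly) sampling of the 24-h average
sparse_mean = forcing[9]       # a single mid-morning sample instead
relative_error = abs(sparse_mean - full_mean) / abs(full_mean)
```

With these made-up numbers the single-sample estimate is off by about 14%, illustrating why sparse sampling of a strong diurnal cycle can bias the computed daily average forcing.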
12 CFR 268.710 - Compliance procedures.
2010-01-01
... Women's Program Manager, the Hispanic Employment Program Coordinator, or the People with Disabilities... Because of Physical or Mental Disability § 268.710 Compliance procedures. (a) Applicability. Except as..., applies to all allegations of discrimination on the basis of a disability in programs or...
47 CFR 13.209 - Examination procedures.
2010-10-01
Title 47, Telecommunication; FEDERAL COMMUNICATIONS COMMISSION; GENERAL; COMMERCIAL RADIO OPERATORS; Examination System; § 13.209 Examination procedures. (a) Each examination for a commercial radio operator license must be administered at a… eligible to apply for any commercial radio operator license shall, by reason of any physical handicap,…
Revisiting and Refining the Multicultural Assessment Procedure.
Ridley, Charles R.; Hill, Carrie L.; Li, Lisa C.
1998-01-01
Reacts to critiques of the Multicultural Assessment Procedure (MAP). Discusses the definition of culture, the structure of the MAP, cultural versus idiosyncratic data, counselors' knowledge and characteristics, soliciting client feedback and perceptions, and managed care. Encourages colleagues to apply the MAP to their research, practice, and…
Anisotropy of the solar network magnetic field around the average supergranule
Langfellner, J; Birch, A C
2015-01-01
Supergranules in the quiet Sun are outlined by a web-like structure of enhanced magnetic field strength, the so-called magnetic network. We aim to map the magnetic network field around the average supergranule near disk center. We use observations of the line-of-sight component of the magnetic field from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). The average supergranule is constructed by coaligning and averaging over 3000 individual supergranules. We determine the positions of the supergranules with an image segmentation algorithm that we apply to maps of the horizontal flow divergence measured using time-distance helioseismology. In the center of the average supergranule the magnetic (intranetwork) field is weaker by about 2.2 Gauss than the background value (3.5 Gauss), whereas it is enhanced in the surrounding ring of horizontal inflows (by about 0.6 Gauss on average). We find that this network field is significantly stronger west (prograde) of the average sup...
Hänsel, Mike; Klupp, S; Graupner, Anke; Dieter, Peter; Koch, Thea
2010-01-01
Since 2004 German universities have been able to use a selection procedure to admit up to 60 percent of new students. In 2005, the Carl Gustav Carus Faculty of Medicine at Dresden introduced a new admission procedure. In order to take account of cognitive as well as non-cognitive competencies, the Faculty used the following selection criteria based on the legal regulations for university admissions: the grade point average of the school-leaving exam (SSC, Abitur); marks in relevant school subjects; profession and work experience; premedical education; and a structured interview. In order to evaluate the effects of the Faculty admission procedures applied in the years 2005, 2006 and 2007, the results on the First National Medical Examination (FNME) were compared between the candidates selected by the Faculty procedures (CSF group) and the group of candidates admitted by the Central Office for the Allocation of Places in Higher Education (the ZVS group, comprising the subgroups ZVS best, ZVS rest and ZVS total). The rates of participation in the FNME within the required minimum time of 2 years of medical studies were higher in the CSF group than in the ZVS total group. The FNME pass rates were lowest in the ZVS rest group and highest in the ZVS best group. The ZVS best group and the ZVS total group showed the best FNME results, whereas the results of the CSF group were equal to or worse than those of the ZVS rest group. No correlation was found between the interview results and the FNME results. According to studies of the prognostic value of various selection instruments, the school-leaving grade point average seems to be the best predictor of success on the FNME. In order to validate the non-cognitive selection instruments of the Faculty procedure, complementary instruments are needed to measure non-cognitive aspects that are not captured by the FNME results.
Exploring the Best Classification from Average Feature Combination
Directory of Open Access Journals (Sweden)
Jian Hou
2014-01-01
Feature combination is a powerful approach to improve object classification performance. While various combination algorithms have been proposed, average combination is almost always selected as the baseline algorithm to be compared with. In previous work we found that it is better to use only a sample of the most powerful features in average combination than to use all of them. In this paper, we continue this work and further show that the behaviors of features in average combination can be integrated into the k-Nearest-Neighbor (kNN) framework. Based on the kNN framework, we then propose a selection-based average combination algorithm to obtain the best classification performance from average combination. Our experiments on four diverse datasets indicate that this selection-based average combination performs markedly better than ordinary average combination, and thus serves as a better baseline. Comparing with this new and better baseline makes the claimed superiority of newly proposed combination algorithms more convincing. Furthermore, the kNN framework is helpful in understanding the underlying mechanism of feature combination and in motivating novel feature combination algorithms.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
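The entropy lower bound mentioned above is easy to compute (a direct transcription of H(p)/log2(k); the example problem is hypothetical):

```python
import math

def entropy_lower_bound(probabilities, k=2):
    """Entropy-based lower bound on the minimum average depth of a
    decision tree over a k-valued information system: H(p) / log2(k)."""
    h = -sum(p * math.log2(p) for p in probabilities if p > 0)
    return h / math.log2(k)

# uniform problem with 8 equally likely outcomes, binary attributes:
# any decision tree needs average depth at least 3
bound = entropy_lower_bound([1 / 8] * 8)
```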
DEFF Research Database (Denmark)
Najafi, Nadia; Panah, Mohammad Esmail Aryaee; Schmidt Paulsen, Uwe
2015-01-01
In the SSI-cov method, a block Toeplitz matrix is formed which contains the output correlation functions. Ten displacement time series were recorded at a 187 Hz sampling rate, and three time series were chosen to be analyzed. The block Toeplitz matrices of the three time series are averaged, and the procedure how...
Average Case Analysis of an Adjacency Map Searching Technique.
1981-12-01
next section (see Figure 2.2(c)). A pidgin ALGOL program for procedure SEARCHTREE is given in Table 1. Q1, Q2, U, Z are queues; sQ and Qs denote… rectangle to which they belong. In a second scan, a list Pi of sorted points is easily built for… Table 1. A pidgin ALGOL program for procedure
Average-Case Analysis of Algorithms Using Kolmogorov Complexity
Institute of Scientific and Technical Information of China (English)
姜涛; 李明
2000-01-01
Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.
Unbiased Cultural Transmission in Time-Averaged Archaeological Assemblages
Madsen, Mark E
2012-01-01
Unbiased models are foundational in the archaeological study of cultural transmission. Applications have assumed that archaeological data represent synchronic samples, despite the accretional nature of the archaeological record. I document the circumstances under which time-averaging alters the distribution of model predictions. Richness is inflated in long-duration assemblages, and evenness is "flattened" compared to unaveraged samples. Tests of neutrality, employed to differentiate biased and unbiased models, suffer serious problems with Type I error under time-averaging. Finally, the time-scale over which time-averaging alters predictions is determined by the mean trait lifetime, providing a way to evaluate the impact of these effects upon archaeological samples.
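A toy simulation (my own construction, not the paper's model) of why pooling over longer durations inflates richness: a time-averaged assemblage accumulates every trait that was popular at any point during its duration, so distinct-trait counts grow with the averaging window.

```python
import random

def assemblage_richness(duration, n_per_period=50, n_traits=20, seed=42):
    """Richness (distinct trait count) of an assemblage pooled over
    `duration` periods, with trait popularity drifting between periods.
    A crude illustration only, not the neutral model from the paper."""
    rng = random.Random(seed)
    weights = [1.0] * n_traits
    seen = set()
    for _ in range(duration):
        # deposit artifacts for this period, then pool them into the record
        seen.update(rng.choices(range(n_traits), weights=weights, k=n_per_period))
        # crude drift: one trait doubles in popularity each period
        weights[rng.randrange(n_traits)] *= 2.0
    return len(seen)

short = assemblage_richness(1)    # single-period (synchronic) sample
long_ = assemblage_richness(20)   # 20-period time-averaged assemblage
```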
Scalable Robust Principal Component Analysis Using Grassmann Averages
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi
2016-01-01
Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We … Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video…
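The claim that the average one-dimensional subspace matches the leading principal component for Gaussian data can be checked with a small sketch (my own naive fixed-point iteration of sign-aligned averaging, not the paper's TGA algorithm; the synthetic data are hypothetical):

```python
import numpy as np

def grassmann_average(X, iters=50, seed=0):
    """Average one-dimensional subspace of the zero-mean rows of X:
    iteratively average the unit observations after flipping their
    signs to align with the current estimate."""
    rng = np.random.default_rng(seed)
    q = rng.normal(size=X.shape[1])
    q /= np.linalg.norm(q)
    U = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit observations
    for _ in range(iters):
        signs = np.sign(U @ q)
        signs[signs == 0] = 1.0
        q_new = (signs[:, None] * U).mean(axis=0)     # sign-aligned average
        q = q_new / np.linalg.norm(q_new)
    return q

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5)) @ np.diag([5.0, 1.0, 1.0, 1.0, 1.0])
X -= X.mean(axis=0)
q = grassmann_average(X)
pc1 = np.linalg.svd(X, full_matrices=False)[2][0]  # leading PC via SVD
alignment = abs(q @ pc1)   # close to 1 for Gaussian data
```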
Sample Selected Averaging Method for Analyzing the Event Related Potential
Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki
The event-related potential (ERP) is often measured through the oddball task, in which subjects are given a "rare stimulus" and a "frequent stimulus". Measured ERPs are analyzed by the averaging technique; in the results, the amplitude of the ERP P300 component becomes large when the rare stimulus is given. However, some measured ERPs lack the original ERP features, so it is necessary to reject unsuitable records when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs prior to averaging. Moreover, we combine the proposed method with Woody's adaptive filter method.
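A minimal sketch of averaging with trial rejection, assuming a correlation-with-the-running-average criterion; the paper's actual rejection rule and its combination with Woody filtering differ in detail:

```python
import numpy as np

def selective_average(trials, r_min=0.3, passes=2):
    """Average ERP trials, dropping those poorly correlated with the running
    average before re-averaging (illustrative rejection rule)."""
    trials = np.asarray(trials, float)
    avg = trials.mean(axis=0)
    for _ in range(passes):
        r = np.array([np.corrcoef(t, avg)[0, 1] for t in trials])
        kept = trials[r >= r_min]
        if len(kept):
            avg = kept.mean(axis=0)
    return avg
```

On synthetic data with a P300-like bump plus featureless noise trials, the rejected average recovers the template more cleanly than the raw mean.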
Perspectives on Applied Ethics
2007-01-01
Applied ethics is a growing, interdisciplinary field dealing with ethical problems in different areas of society. It includes, for instance, social and political ethics, computer ethics, medical ethics, bioethics, environmental ethics, and business ethics, and it also relates to different forms of professional ethics. From the perspective of ethics, applied ethics is a specialisation in one area of ethics. From the perspective of social practice, applying ethics is to focus on ethical aspects and ...
2014-01-01
Advances in Applied Mechanics draws together recent significant advances in various topics in applied mechanics. Published since 1948, Advances in Applied Mechanics aims to provide authoritative review articles on topics in the mechanical sciences, primarily of interest to scientists and engineers working in the various branches of mechanics, but also of interest to the many who use the results of investigations in mechanics in various application areas, such as aerospace, chemical, civil, en...
Chen, Guan-Yu; Wu, Cheng-Chi; Shao, Hao-Chiang; Chang, Hsiu-Ming; Chiang, Ann-Shyn; Chen, Yung-Chang
2012-12-01
Model averaging is a widely used technique in biomedical applications. Two established model averaging methods, the iterative shape averaging (ISA) method and the virtual insect brain (VIB) method, have been applied to several organisms to generate average representations of their brain surfaces. However, without sufficient samples, some features of the average Drosophila brain surface obtained using the above methods may disappear or become distorted. To overcome this problem, we propose a Bézier-tube-based surface model averaging strategy. The proposed method first compensates for disparities in position, orientation, and dimension of the input surfaces, and then evaluates the average surface by performing shape-based interpolation. Structural features with larger individual disparities are simplified with half-ellipse-shaped Bézier tubes, and are unified according to these tubes to avoid distortion during the averaging process. Experimental results show that the average model yielded by our method preserves fine features and avoids structural distortions even if only a limited number of input samples is used. Finally, we qualitatively compare our results with those obtained by the ISA and VIB methods by measuring the surface-to-surface distances between the input surfaces and the averaged ones. The comparisons show that the proposed method generates a more representative average surface than both the ISA and VIB methods.
Applied Neuroscience Laboratory Complex
Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen individual and team performance across all AF domains....
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
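The Granger-Ramanathan weights are obtained by regressing the observed flows on the member hydrographs. A sketch of the unconstrained variant (the other variants add restrictions such as an intercept or weights summing to one); the function name and shapes are ours:

```python
import numpy as np

def gra_weights(sims, obs):
    """Granger-Ramanathan variant A: unconstrained least-squares weights for
    combining member simulations.
    sims: T x K matrix of simulated flows, obs: length-T observed flow."""
    w, *_ = np.linalg.lstsq(np.asarray(sims, float),
                            np.asarray(obs, float), rcond=None)
    return w
```

Applying `sims @ w` in validation mode then gives the averaged hydrograph whose Nash-Sutcliffe score is compared against the individual members.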
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
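The Nash-Sutcliffe criterion used throughout both studies above is a one-line skill score; a minimal implementation for reference:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 for a perfect simulation, 0 for a
    simulation no better than the mean of the observations, negative below that."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Reporting how often an averaged hydrograph beats the best member (e.g. GRA's 51%-86%) amounts to comparing `nse` values per catchment and test period.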
Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment
Baurle, Robert A.; Edwards, Jack R.
2010-01-01
Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing phenomena under conditions representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state of the art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to the choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered on issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure …
Mobile Energy Laboratory Procedures
Energy Technology Data Exchange (ETDEWEB)
Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.
1993-09-01
Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application on federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available by having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.
Model Averaging Software for Dichotomous Dose Response Risk Estimation
Directory of Open Access Journals (Sweden)
Matthew W. Wheeler
2008-02-01
Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models that are also used in the US Environmental Protection Agency benchmark dose software suite, and produces a model-averaged dose-response model from which benchmark dose and benchmark dose lower bound estimates are generated. The software fulfills a need for risk assessors, allowing them to go beyond one single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
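One common weighting scheme for this kind of model averaging uses Akaike weights; the sketch below illustrates the idea only, and MADr-BMD's exact weighting and BMD computation may differ:

```python
import numpy as np

def akaike_weights(aics):
    """Akaike weights: relative support for each fitted dose-response model."""
    d = np.asarray(aics, float) - min(aics)   # AIC differences vs best model
    w = np.exp(-0.5 * d)
    return w / w.sum()

def model_averaged_bmd(weights, bmds):
    """A weighted-average benchmark dose from per-model BMD estimates
    (illustrative; averaging can also be done on the dose-response curves)."""
    return float(np.dot(weights, bmds))
```

The averaged estimate reflects all plausible models rather than committing to the single best fit.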
Ensemble vs. time averages in financial time series analysis
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding-interval technique, which assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time-averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics, and we assess the model data using both ensemble and time-averaging techniques. We find that ensemble-averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-average approaches will yield new insight into the study of financial markets' dynamics.
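The two averaging strategies being contrasted can be stated in a few lines (array shapes are our assumptions, not the paper's data layout):

```python
import numpy as np

def ensemble_average(returns_by_day):
    """Ensemble average over trading days: returns_by_day is a D x T array
    (day x intraday slot). The mean is taken across days at each fixed
    intraday time, so no stationarity of increments is assumed."""
    return np.asarray(returns_by_day, float).mean(axis=0)

def sliding_average(series, window):
    """Sliding-interval time average over one long series, which implicitly
    assumes stationary increments."""
    s = np.asarray(series, float)
    return np.convolve(s, np.ones(window) / window, mode="valid")
```

For a process with a repeating intraday volatility pattern, the ensemble average resolves the pattern slot by slot, while the sliding window smears it out.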
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
On the average exponent of elliptic curves modulo $p$
Freiberg, Tristan
2012-01-01
Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p \cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.
United States Average Annual Precipitation, 1995-1999 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...
United States Average Annual Precipitation, 2005-2009 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...
United States Average Annual Precipitation, 1990-1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...
United States Average Annual Precipitation, 2000-2004 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
The Partial Averaging of Fuzzy Differential Inclusions on Finite Interval
Directory of Open Access Journals (Sweden)
Andrej V. Plotnikov
2014-01-01
The applicability of the partial averaging method on a finite interval is established for differential inclusions with a fuzzy right-hand side containing a small parameter.
United States Average Annual Precipitation, 1990-2009 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...
United States Average Annual Precipitation, 1961-1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
The average-shadowing property and topological ergodicity for flows
Energy Technology Data Exchange (ETDEWEB)
Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)]. E-mail: rbgu@njue.edu.cn; Guo Wenjing [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)
2005-07-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
On the average sensitivity of laced Boolean functions
Li, Jiyou
2011-01-01
In this paper we obtain the average sensitivity of the laced Boolean functions. This confirms a conjecture of Shparlinski. We also compute the weights of the laced Boolean functions and show that they are almost balanced.
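Average sensitivity (total influence) has a direct brute-force definition, sketched below for any Boolean function on few bits; this is the generic notion, not the paper's closed-form computation for laced functions:

```python
from itertools import product

def average_sensitivity(f, n):
    """Average sensitivity of a Boolean function f on n bits: the expected
    number of coordinates whose flip changes f(x), for uniformly random x."""
    total = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1                 # flip coordinate i
            total += fx != f(tuple(y))
    return total / 2 ** n
```

For example, parity on n bits attains the maximum value n, while a constant function has average sensitivity 0.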
Does subduction zone magmatism produce average continental crust
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*
Onorante, Luca; Raftery, Adrian E.
2015-01-01
Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
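The core DMA recursion is compact; the sketch below shows the standard forgetting-factor update of model probabilities (a generic rendering, with names and the default `alpha` as our assumptions; the paper's contribution, the dynamic Occam's window, restricts this update to a changing subset of models):

```python
import numpy as np

def dma_step(probs, pred_likelihoods, alpha=0.99):
    """One step of Dynamic Model Averaging: flatten model probabilities with a
    forgetting factor alpha (letting the favored model change over time), then
    reweight by each model's one-step predictive likelihood."""
    p = np.asarray(probs, float) ** alpha
    p /= p.sum()
    post = p * np.asarray(pred_likelihoods, float)
    return post / post.sum()
```

With alpha = 1 this reduces to static Bayesian model averaging; alpha < 1 continually pulls probabilities toward uniform so the ensemble can track structural change.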
Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming
2017-04-01
In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation systems environment. The stability condition of this model is obtained using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution of traffic density waves by applying the reductive perturbation method. Furthermore, simulation of the space-time evolution of the vehicle headway shows that traffic jams can be suppressed efficiently when the average headway effect is taken into account, and the analytical result is consistent with the simulation.
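A minimal sketch of the idea: a Bando-type optimal-velocity update evaluated at the average headway of the preceding group rather than the single leading gap. The optimal-velocity function and all parameter values are standard illustrative choices, not the paper's exact model:

```python
import numpy as np

def optimal_velocity(h, v_max=2.0, h_c=4.0):
    """A Bando-type optimal-velocity function of headway h (standard form)."""
    return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

def acceleration(v, headways_ahead, kappa=0.85):
    """Extended car-following update: relax toward the optimal velocity of the
    *average* headway of the preceding vehicle group, the stabilizing
    ingredient studied in the paper."""
    return kappa * (optimal_velocity(np.mean(headways_ahead)) - v)
```

Averaging over several preceding gaps damps the response to any single perturbation, which is the mechanism behind the enlarged stable region.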
Directory of Open Access Journals (Sweden)
P. Shanmugasundaram
2014-01-01
In this paper a revised intuitionistic fuzzy max-min average composition method is proposed to construct a decision method for the selection of professional students, based on their skills, by recruiters using operations on intuitionistic fuzzy soft matrices. Shanmugasundaram et al. (2014) introduced the intuitionistic fuzzy max-min average composition method and applied it to a medical diagnosis problem. Sanchez's approach to decision making (Sanchez, 1979) is studied, and the concept is modified for application to intuitionistic fuzzy soft set theory. Through a survey, the opportunities and selection of students are discussed with the help of intuitionistic fuzzy soft matrix operations together with the intuitionistic fuzzy max-min average composition method.
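For reference, the standard Sanchez-style max-min composition of intuitionistic fuzzy matrices is sketched below: memberships compose by max-min, non-memberships by min-max. The paper's "max-min average" variant builds on this by averaging scores; that extra step is not reproduced here:

```python
import numpy as np

def ifs_max_min(A_mu, A_nu, B_mu, B_nu):
    """Max-min composition of intuitionistic fuzzy (soft) matrices
    A (m x n) and B (n x p), given as separate membership (mu) and
    non-membership (nu) matrices."""
    A_mu, A_nu = np.asarray(A_mu, float), np.asarray(A_nu, float)
    B_mu, B_nu = np.asarray(B_mu, float), np.asarray(B_nu, float)
    mu = np.max(np.minimum(A_mu[:, :, None], B_mu[None, :, :]), axis=1)
    nu = np.min(np.maximum(A_nu[:, :, None], B_nu[None, :, :]), axis=1)
    return mu, nu
```

In the selection setting, A would relate candidates to skills and B skills to job profiles; the composed pair (mu, nu) scores each candidate-profile match.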
Optimum Power and Rate Allocation for Coded V-BLAST: Average Optimization
Kostina, Victoria
2010-01-01
An analytical framework for performance analysis and optimization of coded V-BLAST is developed. Average power and/or rate allocations to minimize the outage probability as well as their robustness and dual problems are investigated. Compact, closed-form expressions for the optimum allocations and corresponding system performance are given. The uniform power allocation is shown to be near optimum in the low outage regime in combination with the optimum rate allocation. The average rate allocation provides the largest performance improvement (extra diversity gain), and the average power allocation offers a modest SNR gain limited by the number of transmit antennas but does not increase the diversity gain. The dual problems are shown to have the same solutions as the primal ones. All these allocation strategies are shown to be robust. The reported results also apply to coded multiuser detection and channel equalization systems relying on successive interference cancelation.
Fréchet means of curves for signal averaging and application to ECG data analysis
Bigot, Jérémie
2011-01-01
Signal averaging is the process that consists in computing a mean shape from a set of noisy signals. In the presence of geometric variability in time in the data, the usual Euclidean mean of the raw data yields a mean pattern that does not reflect the typical shape of the observed signals. In this setting, it is necessary to use alignment techniques for a precise synchronization of the signals, and then to average the aligned data to obtain a consistent mean shape. In this paper, we study the numerical performances of Fréchet means of curves, which are extensions of the usual Euclidean mean to spaces endowed with non-Euclidean metrics. This yields a new algorithm for signal averaging without a reference template. We apply this approach to the estimation of a mean heart cycle from ECG records.
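A toy version of align-then-average, using circular cross-correlation to synchronize each record before taking the mean. This shift-only alignment is a drastic simplification of the Fréchet-mean framework (and, unlike it, uses the first record as a reference template):

```python
import numpy as np

def aligned_mean(signals):
    """Synchronize signals to the first record by the lag maximizing circular
    cross-correlation, then average the aligned signals."""
    S = np.asarray(signals, float)
    ref = S[0]
    aligned = []
    for s in S:
        # circular cross-correlation via FFT; argmax gives the best shift
        xc = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(s))).real
        aligned.append(np.roll(s, int(np.argmax(xc))))
    return np.mean(aligned, axis=0)
```

Without the alignment step, averaging time-shifted heartbeats would blur the QRS complex; with it, the mean cycle keeps its shape.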
Time Averaged Quantum Dynamics and the Validity of the Effective Hamiltonian Model
Gamel, Omar
2010-01-01
We develop a technique for finding the dynamical evolution in time of an averaged density matrix. The result is an equation of evolution that includes an Effective Hamiltonian, as well as decoherence terms in Lindblad form. Applying the general equation to harmonic Hamiltonians, we confirm a previous formula for the Effective Hamiltonian together with a new decoherence term that should in general be included, and whose vanishing provides the criterion for validity of the Effective Hamiltonian approach. Finally, we apply the theory to examples of the AC Stark Shift and Three-Level Raman Transitions, recovering a new decoherence effect in the latter.
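The evolution equation described, an effective Hamiltonian plus decoherence terms in Lindblad form, has the generic structure below (symbols are the standard generic ones, not the paper's notation):

```latex
\frac{d\bar{\rho}}{dt}
  = -\frac{i}{\hbar}\,\bigl[H_{\mathrm{eff}},\,\bar{\rho}\bigr]
  + \sum_{k} \gamma_{k}\Bigl( L_{k}\,\bar{\rho}\,L_{k}^{\dagger}
  - \tfrac{1}{2}\bigl\{ L_{k}^{\dagger} L_{k},\,\bar{\rho} \bigr\} \Bigr)
```

The validity criterion mentioned in the abstract corresponds to the regime where the decoherence rates $\gamma_k$ of the extra term vanish, leaving pure effective-Hamiltonian evolution.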
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions specifying the dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
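A minimal sketch of Monte Carlo ARL estimation for an EWMA chart on exponential observations, using textbook asymptotic control limits; the paper instead tabulates ARLs per copula and Kendall's tau, and all parameter values here are illustrative:

```python
import numpy as np

def ewma_run_length(x, lam=0.1, L=2.7, mu0=1.0, sigma0=1.0):
    """Run length of an EWMA chart on one series: steps until the EWMA
    statistic leaves the asymptotic control limits."""
    half_width = L * sigma0 * np.sqrt(lam / (2.0 - lam))
    z = mu0
    for i, xi in enumerate(x, 1):
        z = lam * xi + (1.0 - lam) * z
        if abs(z - mu0) > half_width:
            return i                      # out-of-control signal
    return len(x)                         # censored at series length

def average_run_length(shift=0.0, n_rep=300, n=400, seed=0):
    """Monte Carlo ARL for exponential(1) observations whose mean is shifted."""
    rng = np.random.default_rng(seed)
    return float(np.mean([ewma_run_length(rng.exponential(1.0 + shift, n))
                          for _ in range(n_rep)]))
```

A good chart has a large in-control ARL (shift = 0) and a small ARL once the process mean shifts.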
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Use of a Correlation Coefficient for Conditional Averaging.
1997-04-01
A method of collecting ensembles for conditional averaging is presented that uses data collected from a plane mixing layer. Selection of the sine-function period and of a correlation-coefficient threshold are discussed. Also examined are the effects of the period and threshold level on the number of ensembles captured for inclusion in conditional averaging.
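The procedure described reduces to correlating each candidate segment with a sine template and averaging only those that pass the threshold. A sketch under that reading (template form, default threshold, and names are our assumptions):

```python
import numpy as np

def conditional_average(segments, period, r_min=0.6):
    """Conditionally average the segments whose correlation coefficient with a
    sine template of the chosen period exceeds the threshold; the period and
    threshold are the two tunables examined in the report."""
    segments = np.asarray(segments, float)
    t = np.arange(segments.shape[1])
    template = np.sin(2.0 * np.pi * t / period)
    kept = [s for s in segments if np.corrcoef(s, template)[0, 1] >= r_min]
    return (np.mean(kept, axis=0) if kept else None), len(kept)
```

Raising `r_min` or detuning `period` reduces the number of captured ensembles, the trade-off the report quantifies.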
Estimation of annual average daily traffic with optimal adjustment factors
Alonso Oreña, Borja; Moura Berodia, José Luis; Ibeas Portilla, Ángel; Romero Junquera, Juan Pablo
2014-01-01
This study aimed to estimate the annual average daily traffic in inter-urban networks determining the best correlation (affinity) between the short period traffic counts and permanent traffic counters. A bi-level optimisation problem is proposed in which an agent in an upper level prefixes the affinities between short period traffic counts and permanent traffic counters stations and looks to minimise the annual average daily traffic calculation error while, in a lower level, an origin–destina...
Time averages, recurrence and transience in the stochastic replicator dynamics
Hofbauer, Josef (DOI: 10.1214/08-AAP577)
2009-01-01
We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates time averages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.
On the relativistic mass function and averaging in cosmology
Ostrowski, Jan J; Roukema, Boudewijn F
2016-01-01
The general relativistic description of cosmological structure formation is an important challenge from both the theoretical and the numerical point of views. In this paper we present a brief prescription for a general relativistic treatment of structure formation and a resulting mass function on galaxy cluster scales in a highly generic scenario. To obtain this we use an exact scalar averaging scheme together with the relativistic generalization of Zel'dovich's approximation (RZA) that serves as a closure condition for the averaged equations.
Average life of oxygen vacancies of quartz in sediments
Institute of Scientific and Technical Information of China (English)
DIAO; Shaobo(刁少波); YE; Yuguang(业渝光)
2002-01-01
Average life of oxygen vacancies of quartz in sediments is estimated by using the ESR (electron spin resonance) signals of E′ centers from the thermal activation technique. The experimental results show that the second-order kinetics equation is more applicable to the life estimation than the first-order equation. The average life of oxygen vacancies of quartz from sediments at 4895 to 4908 m depth in the Tarim Basin is about 10¹⁸ a at 27 °C.
Emotional Value of Applied Textiles
DEFF Research Database (Denmark)
Bang, Anne Louise
2011-01-01
The present PhD thesis is conducted as an Industrial PhD project in collaboration with the Danish company Gabriel A/S (Gabriel), which designs and produces furniture textiles and ‘related products’ for manufacturers of furniture. A ‘related textile product’ is e.g. processing of piece goods … at Gabriel face while trying to implement an innovative and process-oriented business strategy. The focal point has been the section of the strategy which aims at developing Blue Ocean products, which have a functional and an emotional value for the user. The thesis examines and explores emotional value … (procedures of user and stakeholder involvement). In the course of the thesis I explain and elaborate on four themes, each of which contributes to the outcome of the project. 1) Creating a frame of reference for the textile design process and a systematic approach to applied textiles. In chapter three I …
Energy Technology Data Exchange (ETDEWEB)
Cresswell, A.J. [Scottish Universities Environmental Research Centre, Rankine Avenue, Scottish Enterprise Technology Park, East Kilbride, Glasgow G75 0QF (United Kingdom)], E-mail: a.cresswell@suerc.gla.ac.uk; Sanderson, D.C.W. [Scottish Universities Environmental Research Centre, Rankine Avenue, Scottish Enterprise Technology Park, East Kilbride, Glasgow G75 0QF (United Kingdom)
2009-08-21
The use of difference spectra, with a filtering of a rolling average background, as a variation of the more common rainbow plots to aid in the visual identification of radiation anomalies in mobile gamma spectrometry systems is presented. This method requires minimal assumptions about the radiation environment, and is not computationally intensive. Some case studies are presented to illustrate the method. It is shown that difference spectra produced in this manner can improve signal to background, estimate shielding or mass depth using scattered spectral components, and locate point sources. This approach could be a useful addition to the methods available for locating point sources and mapping dispersed activity in real time. Further possible developments of the procedure utilising more intelligent filters and spatial averaging of the background are identified.
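The difference-spectrum idea in the record above can be sketched as subtracting a rolling-average background from each measured spectrum, channel by channel. The window length and the synthetic data are illustrative assumptions; the authors' actual filter is not specified here.

```python
import numpy as np

def difference_spectrum(spectra, i, window=10):
    """Subtract from spectrum i the mean of the preceding `window`
    spectra (a rolling-average background), channel by channel.
    Positive residuals flag candidate radiation anomalies."""
    start = max(0, i - window)
    background = np.mean(spectra[start:i], axis=0)
    return spectra[i] - background

# Stationary Poisson background with one injected anomaly
rng = np.random.default_rng(1)
spectra = rng.poisson(100, size=(50, 256)).astype(float)
spectra[40, 120:130] += 300          # anomaly in a few channels
diff = difference_spectrum(spectra, 40)
```

Because only a running mean is maintained, the method is cheap enough for real-time use on a mobile system, consistent with the record's claim that it is not computationally intensive.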
Focused information criterion and model averaging based on weighted composite quantile regression
Xu, Ganggang
2013-08-13
We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.
Cycle Average Peak Fuel Temperature Prediction Using CAPP/GAMMA+
Energy Technology Data Exchange (ETDEWEB)
Tak, Nam-il; Lee, Hyun Chul; Lim, Hong Sik [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
In order to obtain a cycle average maximum fuel temperature without rigorous effort, a neutronics/thermo-fluid coupled calculation with depletion capability is needed. Recently, a CAPP/GAMMA+ coupled code system has been developed and the initial core of PMR200 was analyzed using it. The GAMMA+ code is a system thermo-fluid analysis code and the CAPP code is a neutronics code. General Atomics proposed that the design limit of the fuel temperature under normal operating conditions should be a cycle-averaged maximum value. Nonetheless, the existing works of the Korea Atomic Energy Research Institute (KAERI) only calculated the maximum fuel temperature at a fixed time point, e.g., the beginning of cycle (BOC), because the calculation capability was not ready for a cycle average value. In this work, a cycle average maximum fuel temperature has been calculated using the CAPP/GAMMA+ code system for the equilibrium core of PMR200. The coupled calculation was carried out from BOC to EOC to obtain a cycle average peak fuel temperature. The peak fuel temperature was predicted to be 1372 °C near MOC. However, the cycle average peak fuel temperature was calculated as 1181 °C, which is below the design target of 1250 °C.
Average and recommended half-life values for two neutrino double beta decay: upgrade'05
Barabash, A S
2006-01-01
All existing ``positive'' results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{150}$Nd, $^{150}$Nd - $^{150}$Sm ($0^+_1$) and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te, $^{130}$Te and $^{130}$Ba are proposed. We recommend the use of these results as presently the most precise and reliable values for half-lives.
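The Particle Data Group procedure referred to in this record is, in essence, an inverse-variance weighted mean whose uncertainty is inflated by a scale factor when the input measurements are mutually inconsistent. A minimal sketch with illustrative numbers (not half-life values from the analysis):

```python
import math

def pdg_average(values, errors):
    """Inverse-variance weighted mean with the PDG scale factor
    S = sqrt(chi2 / (N - 1)); the error is inflated by S when S > 1."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    chi2 = sum(((v - mean) / e)**2 for v, e in zip(values, errors))
    n = len(values)
    scale = math.sqrt(chi2 / (n - 1)) if n > 1 else 1.0
    if scale > 1.0:
        err *= scale            # inconsistent inputs: inflate the error
    return mean, err, scale

# Mutually consistent inputs: the scale factor stays below 1
mean, err, s = pdg_average([1.50, 1.54, 1.52], [0.02, 0.03, 0.02])
```

With consistent inputs, as here, the scale factor is below one and the quoted error is simply the inverse-variance combination.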
Average and recommended half-life values for two neutrino double beta decay: upgrade-09
Barabash, A S
2009-01-01
All existing ``positive'' results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{130}$Te, $^{150}$Nd, $^{150}$Nd - $^{150}$Sm ($0^+_1$) and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te, $^{130}$Te and $^{130}$Ba are proposed. We recommend the use of these results as presently the most precise and reliable values for half-lives.
Average and recommended half-life values for two neutrino double beta decay: upgrade-2013
Barabash, A S
2013-01-01
All existing positive results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for half-lives of $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{96}$Zr, $^{100}$Mo, $^{100}$Mo - $^{100}$Ru ($0^+_1$), $^{116}$Cd, $^{130}$Te, $^{136}$Xe, $^{150}$Nd, $^{150}$Nd - $^{150}$Sm ($0^+_1$) and $^{238}$U were obtained. Existing geochemical data were analyzed and recommended values for half-lives of $^{128}$Te and $^{130}$Ba are proposed. I recommend the use of these results as the most currently reliable values for half-lives.
Computing Depth-averaged Flows Using Boundary-fitted Coordinates and Staggered Grids
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
A depth-averaged nonlinear k-ε model for turbulent flows in complex geometries has been developed in a boundary-fitted coordinate system. The SIMPLEC procedure is used to develop an economical discrete method for staggered grids to analyze flows in a 90° bend. This paper describes how to convert a program from rectangular coordinates to boundary-fitted coordinates. The results compare well with experimental data for flow in a meandering channel, demonstrating the efficiency of the model and the discrete method.
A self-organizing power system stabilizer using Fuzzy Auto-Regressive Moving Average (FARMA) model
Energy Technology Data Exchange (ETDEWEB)
Park, Y.M.; Moon, U.C. [Seoul National Univ. (Korea, Republic of). Electrical Engineering Dept.; Lee, K.Y. [Pennsylvania State Univ., University Park, PA (United States). Electrical Engineering Dept.
1996-06-01
This paper presents a self-organizing power system stabilizer (SOPSS) which uses the Fuzzy Auto-Regressive Moving Average (FARMA) model. The control rules and the membership functions of the proposed logic controller are generated automatically without using any plant model. The generated rules are stored in the fuzzy rule space and updated on-line by a self-organizing procedure. To show the effectiveness of the proposed controller, a comparison with a conventional controller for a one-machine infinite-bus system is presented.
DEFF Research Database (Denmark)
Litvan, Héctor; Jensen, Erik W; Galan, Josefina;
2002-01-01
The extraction of the middle latency auditory evoked potentials (MLAEP) is usually done by moving time averaging (MTA) over many sweeps (often 250-1,000), which could produce a delay of more than 1 min. This problem was addressed by applying an autoregressive model with exogenous input (ARX) that enables extraction of the auditory evoked potentials (AEP) within 15 sweeps. The objective of this study was to show that an AEP could be extracted faster by ARX than by MTA and with the same reliability.
Funamizu, Hideki; Shimoma, Shohei; Yuasa, Tomonori; Aizu, Yoshihisa
2014-10-20
We present the effects of spatiotemporal averaging processes on an estimation of spectral reflectance in color digital holography using speckle illuminations. In this technique, speckle fields emitted from a multimode fiber are used as both a reference wave and a wavefront illuminating an object. The interference patterns of two coherent waves for three wavelengths are recorded as digital holograms on a CCD camera. Speckle fields are changed by vibrating the multimode fiber using a vibrator, and a number of holograms are acquired to average reconstructed images. After performing an averaging process, which we refer to as a temporal averaging process in this study, using images reconstructed from multiple holograms, a spatial averaging process is applied using a smoothing window function. For the estimation of spectral reflectance in reconstructed images, we use the Wiener estimation method. The effects of the averaging processes on color reproducibility are evaluated by a chromaticity diagram, the root-mean-square error, and color differences.
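The temporal-then-spatial averaging pipeline described above can be sketched in simplified form. Hologram recording and reconstruction are omitted; the input stack stands in for already-reconstructed intensity images, and the window size is an illustrative assumption.

```python
import numpy as np

def spatiotemporal_average(images, win=3):
    """Temporal average over reconstructions from multiple holograms,
    followed by spatial smoothing with a win x win moving window."""
    temporal = np.mean(images, axis=0)          # temporal averaging
    pad = win // 2
    padded = np.pad(temporal, pad, mode="edge")
    out = np.empty_like(temporal)
    h, w = temporal.shape
    for i in range(h):
        for j in range(w):                      # spatial averaging window
            out[i, j] = padded[i:i + win, j:j + win].mean()
    return out

# Speckled frames around a uniform true reflectance
rng = np.random.default_rng(2)
truth = np.ones((32, 32))
stack = truth + 0.5 * rng.standard_normal((20, 32, 32))
smoothed = spatiotemporal_average(stack)
```

The two stages attack different noise sources: temporal averaging suppresses the changing speckle realizations, and the spatial window removes the residual high-frequency grain.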
Arianespace streamlines launch procedures
Lenorovitch, Jeffrey M.
1992-06-01
Ariane has entered a new operational phase in which launch procedures have been enhanced to reduce the length of launch campaigns, lower mission costs, and increase operational availability/flexibility of the three-stage vehicle. The V50 mission utilized the first vehicle from a 50-launcher production lot ordered by Arianespace, and was the initial flight with a stretched third stage that enhances Ariane's performance. New operational procedures were introduced gradually over more than a year, starting with the V42 launch in January 1991.
Procedure and Program Examples
Britz, Dieter
Here some modules, procedures and whole programs are described that may be useful to the reader, as they have been to the author. They are all in Fortran 90/95 and start with a generally useful module, which will be used in most procedures and programs in the examples, and another module useful for programs using a Rosenbrock variant. The source texts (except for the two modules) are not reproduced here, but can be downloaded from the web site www.springerlink.com/openurl.asp?genre=issue&issn=1616-6361&volume=666.
Expansion and Growth of Structure Observables in a Macroscopic Gravity Averaged Universe
Wijenayake, Tharake
2015-01-01
We investigate the effect of averaging inhomogeneities on expansion and large-scale structure growth observables using the exact and covariant framework of Macroscopic Gravity (MG). It is well-known that applying the Einstein's equations and spatial averaging do not commute and lead to the averaging problem. For the MG formalism applied to the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, this gives an extra dynamical term encapsulated as an averaging density parameter denoted $\\Omega_A$. An exact isotropic cosmological solution of MG for the flat FLRW metric is already known in the literature, we derive here an anisotropic exact solution. Using the isotropic solution, we compare the expansion history to current data of distances to supernovae, Baryon Acoustic Oscillations, CMB last scattering surface, and Hubble constant measurements, and find $-0.05 \\le \\Omega_A \\le 0.07$ (at the 95% CL). For the flat metric case this reduces to $-0.03 \\le \\Omega_A \\le 0.05$. We also find that the inclusion of this ter...
DEFF Research Database (Denmark)
Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel
In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle s...
Allhoff, Fritz
2011-03-01
This paper explores the relationships that various applied ethics bear to each other, both in particular disciplines and more generally. The introductory section lays out the challenge of coming up with such an account and, drawing a parallel with the philosophy of science, offers that applied ethics may either be unified or disunified. The second section develops one simple account through which applied ethics are unified, vis-à-vis ethical theory. However, this is not taken to be a satisfying answer, for reasons explained. In the third section, specific applied ethics are explored: biomedical ethics; business ethics; environmental ethics; and neuroethics. These are chosen not to be comprehensive, but rather for their traditions or other illustrative purposes. The final section draws together the results of the preceding analysis and defends a disunity conception of applied ethics.
Energy Technology Data Exchange (ETDEWEB)
Sabin, Charles; Plevka, Pavel, E-mail: pavel.plevka@ceitec.muni.cz [Central European Institute of Technology – Masaryk University, Kamenice 653/25, 625 00 Brno (Czech Republic)
2016-02-16
Molecular replacement and noncrystallographic symmetry averaging were used to detwin a data set affected by perfect hemihedral twinning. The noncrystallographic symmetry averaging of the electron-density map corrected errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. Hemihedral twinning is a crystal-growth anomaly in which a specimen is composed of two crystal domains that coincide with each other in three dimensions. However, the orientations of the crystal lattices in the two domains differ in a specific way. In diffraction data collected from hemihedrally twinned crystals, each observed intensity contains contributions from both of the domains. With perfect hemihedral twinning, the two domains have the same volumes and the observed intensities do not contain sufficient information to detwin the data. Here, the use of molecular replacement and of noncrystallographic symmetry (NCS) averaging to detwin a 2.1 Å resolution data set for Aichi virus 1 affected by perfect hemihedral twinning is described. The NCS averaging enabled the correction of errors in the detwinning introduced by the differences between the molecular-replacement model and the crystallized structure. The procedure permitted the structure to be determined from a molecular-replacement model that had 16% sequence identity and a 1.6 Å r.m.s.d. for Cα atoms in comparison to the crystallized structure. The same approach could be used to solve other data sets affected by perfect hemihedral twinning from crystals with NCS.
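Since each observed intensity from a perfect hemihedral twin is the average of the two twin-related true intensities, a model-based detwinning step can partition each observation in proportion to intensities calculated from the molecular-replacement model. The sketch below is a hedged illustration of that partitioning only; real crystallographic detwinning involves scaling and symmetry bookkeeping not shown here.

```python
def detwin_perfect(i_obs, i_calc_1, i_calc_2):
    """For perfect twinning, I_obs = (I1 + I2) / 2. Split each observed
    intensity between the two twin-related reflections in proportion
    to model-calculated intensities."""
    out = []
    for obs, c1, c2 in zip(i_obs, i_calc_1, i_calc_2):
        total = c1 + c2
        if total == 0:                    # no model information: split evenly
            out.append((obs, obs))
        else:
            out.append((2.0 * obs * c1 / total, 2.0 * obs * c2 / total))
    return out

# Observed = average of the two true intensities; the model partitions it
pairs = detwin_perfect([100.0], [160.0], [40.0])
```

Errors in this partition come directly from model bias, which is why the record emphasizes that NCS averaging of the resulting map is needed to correct them.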
Numerical simulation in applied geophysics
Santos, Juan Enrique
2016-01-01
This book presents the theory of waves propagation in a fluid-saturated porous medium (a Biot medium) and its application in Applied Geophysics. In particular, a derivation of absorbing boundary conditions in viscoelastic and poroelastic media is presented, which later is employed in the applications. The partial differential equations describing the propagation of waves in Biot media are solved using the Finite Element Method (FEM). Waves propagating in a Biot medium suffer attenuation and dispersion effects. In particular the fast compressional and shear waves are converted to slow diffusion-type waves at mesoscopic-scale heterogeneities (on the order of centimeters), effect usually occurring in the seismic range of frequencies. In some cases, a Biot medium presents a dense set of fractures oriented in preference directions. When the average distance between fractures is much smaller than the wavelengths of the travelling fast compressional and shear waves, the medium behaves as an effective viscoelastic an...
DEFF Research Database (Denmark)
Pilegaard, Hans Kristian
2016-01-01
The Nuss procedure is now the preferred operation for surgical correction of pectus excavatum (PE). It is a minimally invasive technique, whereby one to three curved metal bars are inserted behind the sternum in order to push it into a normal position. The bars are left in situ for three years...
Straightening out Legal Procedures
Institute of Scientific and Technical Information of China (English)
无
2011-01-01
China’s top legislature mulls giving the green light to class-action litigations. The long-awaited amendment of China’s Civil Procedure Law has taken a crucial step. On October 28, the Standing Committee of the National People’s Congress (NPC), China’s top legislature, reviewed a draft amendment to the law for the first time.
Educational Accounting Procedures.
Tidwell, Sam B.
This chapter of "Principles of School Business Management" reviews the functions, procedures, and reports with which school business officials must be familiar in order to interpret and make decisions regarding the school district's financial position. Among the accounting functions discussed are financial management, internal auditing,…
Anxiety Around Medical Procedures
… understand that these procedures are necessary for fighting cancer, your child may not understand, and it is often hard …
Marchi, F; Guaita, L; Ribeiro, B; Castellano, M; Schaerer, D; Hathi, N P; Lemaux, B C; Grazian, A; Fevre, O Le; Garilli, B; Maccagni, D; Amorin, R; Bardelli, S; Cassata, P; Fontana, A; Koekemoer, A M; Brun, V Le; Tasca, L A M; Thomas, R; Vanzella, E; Zamorani, G; Zucca, E
2016-01-01
Determining the average fraction of Lyman continuum (LyC) photons escaping high redshift galaxies is essential to understand how reionization proceeded in the z>6 Universe. We want to measure the LyC signal from a sample of sources in the CDFS and COSMOS fields for which ultra-deep VIMOS spectroscopy as well as multi-wavelength HST imaging are available. We select a sample of 46 galaxies at $z\\sim 4$ from the VIMOS Ultra Deep Survey (VUDS) database, such that the VUDS spectra contain the LyC part of the spectra i.e. the rest-frame range $880-910\\AA$; taking advantage of the HST imaging we apply a careful cleaning procedure and reject all the sources showing nearby clumps with different colours, that could potentially be lower redshift interlopers. After this procedure the sample is reduced to 33 galaxies. We measure the ratio between ionizing flux (LyC at $895\\AA$) and non ionizing emission (at $\\sim 1500 \\AA$) for all individual sources. We also produce a normalized stacked spectrum of all sources. Assuming ...
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effect modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If only two experiments contain complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
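The meta-analysis strategy for case (a) amounts, in its simplest fixed-effect form, to an inverse-variance weighted average of per-experiment EC50 estimates. The sketch below uses illustrative numbers; whether to combine on the log scale, as is common for EC50, is left to the caller and is not prescribed by the record.

```python
import math

def meta_average(estimates, std_errors):
    """Fixed-effect meta-analysis: inverse-variance weighted mean of
    per-experiment estimates, with the standard error of the mean."""
    weights = [1.0 / se**2 for se in std_errors]
    mean = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se

# Illustrative log10-EC50 estimates from four experiments
mean, se = meta_average([-5.9, -6.1, -6.0, -5.8], [0.1, 0.2, 0.1, 0.3])
```

Experiments with tighter per-experiment standard errors dominate the average, which is what makes the approach robust to poorly determined curves.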
49 CFR 1111.8 - Procedural schedule in stand-alone cost cases.
2010-10-01
§ 1111.8 Procedural schedule in stand-alone cost cases. (a) Procedural schedule. Absent a specific order by the Board, the following general procedural schedule will apply in stand-alone cost cases: Day …
Averaged universe confronted to cosmological observations: a fully covariant approach
Wijenayake, Tharake; Ishak, Mustapha
2016-01-01
One of the outstanding problems in general relativistic cosmology is that of the averaging. That is, how the lumpy universe that we observe at small scales averages out to a smooth Friedmann-Lemaitre-Robertson-Walker (FLRW) model. The root of the problem is that averaging does not commute with the Einstein equations that govern the dynamics of the model. This leads to the well-know question of backreaction in cosmology. In this work, we approach the problem using the covariant framework of Macroscopic Gravity (MG). We use its cosmological solution with a flat FLRW macroscopic background where the result of averaging cosmic inhomogeneities has been encapsulated into a backreaction density parameter denoted $\\Omega_\\mathcal{A}$. We constrain this averaged universe using available cosmological data sets of expansion and growth including, for the first time, a full CMB analysis from Planck temperature anisotropy and polarization data, the supernovae data from Union 2.1, the galaxy power spectrum from WiggleZ, the...
Perceptual averaging in individuals with Autism Spectrum Disorder
Directory of Open Access Journals (Sweden)
Jennifer Elise Corbett
2016-11-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time-series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek …
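Information-criterion values are commonly converted into averaging weights with the exponential (Akaike-weight) form sketched below. This is the standard formula, offered as an illustration of why a modest criterion gap already concentrates nearly all weight on one model, the pathology the record describes; the IC values are invented.

```python
import math

def model_averaging_weights(ic_values):
    """Convert information-criterion values (AIC, BIC, KIC, ...) into
    model averaging weights: w_k proportional to exp(-dIC_k / 2),
    differenced from the minimum for numerical stability."""
    ic_min = min(ic_values)
    raw = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# A gap of 10 IC units already leaves the third model with ~0.4% weight
w = model_averaging_weights([100.0, 101.0, 110.0])
```

Because model errors inflate the likelihood differences between models, using only the measurement-error covariance exaggerates these gaps, which is the mechanism behind the near-100% weights discussed above.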
A Simplified Procedure for Safety Requirements Derivation
Directory of Open Access Journals (Sweden)
Eugen Ioan Gergely
2008-05-01
The paper develops a procedure for analysis of PLC-controlled system risk due to component failure and for derivation of safety integrity requirements for components, focusing on software requirements. The approach allows fully integrated treatment of random and systematic failure. It can be applied at different levels of design detail and at different stages of the system development lifecycle. The procedure does not address how to assess failure rates, but provides a foundation for integrating PLC software assessment into system risk assessment and for making trade-offs in design.
Mesothelioma Applied Research Foundation
Jarodzka, Halszka
2011-01-01
Jarodzka, H. (2010, 12 November). Applied eye tracking research. Presentation and Labtour for Vereniging Gewone Leden in oprichting (VGL i.o.), Heerlen, The Netherlands: Open University of the Netherlands.
Genuine non-self-averaging and ultraslow convergence in gelation
Cho, Y. S.; Mazza, M. G.; Kahng, B.; Nagler, J.
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
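Self-averaging (or its absence) is probed by comparing the order parameter, the relative size of the largest cluster, across independent realizations of the aggregation process. The sketch below uses union-find with uniform random pairwise coalescence, an illustrative kernel chosen for simplicity, not one of the anomalous growth rules studied in the paper.

```python
import random

def largest_cluster_fraction(n, merges, seed):
    """Run `merges` random pairwise coalescence events among n initial
    droplets (union-find) and return the relative size of the largest
    cluster, i.e. the gelation order parameter."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for _ in range(merges):
        ra, rb = find(rng.randrange(n)), find(rng.randrange(n))
        if ra != rb:                        # coalesce the two clusters
            parent[ra] = rb
            size[rb] += size[ra]
    return max(size[find(i)] for i in range(n)) / n

# Ensemble of independent realizations of the order parameter
samples = [largest_cluster_fraction(1000, 1200, seed=s) for s in range(20)]
mean = sum(samples) / len(samples)
```

For a self-averaging transition the sample-to-sample spread of such realizations vanishes as n grows; the anomalous kernels reported in the record keep it finite.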
The Conservation of Area Integrals in Averaging Transformations
Kuznetsov, E. D.
2010-06-01
It is shown for the two-planetary version of the weakly perturbed two-body problem that, in a system defined by a finite part of a Poisson expansion of the averaged Hamiltonian, only one of the three components of the area vector is conserved: the component corresponding to the plane from which the longitudes are measured. The variability of the other two components is demonstrated in two ways. The first is based on calculating the Poisson bracket of the averaged Hamiltonian and the components of the area vector written in closed form. In the second, an echeloned Poisson series processor (EPSP) is used when calculating the Poisson bracket. The averaged Hamiltonian is taken to second order in the small parameter of the problem, and the components of the area vector are expanded in a Poisson series.
An n-ary λ-averaging based similarity classifier
Directory of Open Access Journals (Sweden)
Kurama Onesfole
2016-06-01
We introduce a new n-ary λ similarity classifier that is based on a new n-ary λ-averaging operator in the aggregation of similarities. This work is a natural extension of earlier research on similarity-based classification, in which aggregation is commonly performed with the OWA operator. So far, λ-averaging has been used only in binary aggregation. Here the λ-averaging operator is extended to the n-ary aggregation case by using t-norms and t-conorms. We examine four different n-ary norms and test the new similarity classifier on five medical data sets. The new method seems to perform well when compared with the existing similarity classifier.
Testing averaged cosmology with type Ia supernovae and BAO data
Santos, B; Devi, N Chandrachani; Alcaniz, J S
2016-01-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard $\\Lambda$CDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Evolution of the average avalanche shape with the universality class.
Laurson, Lasse; Illa, Xavier; Santucci, Stéphane; Tore Tallakstad, Ken; Måløy, Knut Jørgen; Alava, Mikko J
2013-01-01
A multitude of systems ranging from the Barkhausen effect in ferromagnetic materials to plastic deformation and earthquakes respond to slow external driving by exhibiting intermittent, scale-free avalanche dynamics or crackling noise. The avalanches are power-law distributed in size, and have a typical average shape: these are the two most important signatures of avalanching systems. Here we show how the average avalanche shape evolves with the universality class of the avalanche dynamics by employing a combination of scaling theory, extensive numerical simulations and data from crack propagation experiments. It follows a simple scaling form parameterized by two numbers, the scaling exponent relating the average avalanche size to its duration and a parameter characterizing the temporal asymmetry of the avalanches. The latter reflects a broken time-reversal symmetry in the avalanche dynamics, emerging from the local nature of the interaction kernel mediating the avalanche dynamics.
Optimum orientation versus orientation averaging description of cluster radioactivity
Seif, W M; Refaie, A I; Amer, L H
2016-01-01
Background: The deformation of the nuclei involved in the cluster decay of heavy nuclei seriously affects their half-lives against the decay. Purpose: We investigate the description of the different decay stages in both the optimum-orientation and the orientation-averaged pictures of the cluster decay process. Method: We consider the decays of 232,233,234U and 236,238Pu isotopes. The quantum mechanical knocking frequency and penetration probability based on the Wentzel-Kramers-Brillouin approximation are used to find the decay width. Results: We found that the orientation-averaged decay width is one to two orders of magnitude less than its value along the non-compact optimum orientation. The difference between the two values increases with decreasing mass number of the emitted cluster. Correspondingly, the preformation probability extracted from the averaged decay width increases by the same orders of magnitude compared with its value obtained using the optimum orientation. The cluster preformati...
How do children form impressions of persons? They average.
Hendrick, C; Franz, C M; Hoving, K L
1975-05-01
The experiment reported was concerned with impression formation in children. Twelve subjects in each of Grades K, 2, 4, and 6 rated several sets of single trait words and trait pairs. The response scale consisted of a graded series of seven schematic faces which ranged from a deep frown to a happy smile. A basic question was whether children use an orderly integration rule in forming impressions of trait pairs. The answer was clear. At all grade levels a simple averaging model adequately accounted for pair ratings. A second question concerned how children resolve semantic inconsistencies. Responses to two highly inconsistent trait pairs suggested that subjects responded in the same fashion, essentially averaging the two traits in a pair. Overall, the data strongly supported an averaging model, and indicated that impression formation of children is similar to previous results obtained from adults.
越智, 貢
2014-01-01
With this essay I treat some problems raised by the new developments in science and technology, that is, those about Computer Ethics to show how and how far Applied Ethics differs from traditional ethics. I take up backgrounds on which Computer Ethics rests, particularly historical conditions of morality. Differences of conditions in time and space explain how Computer Ethics and Applied Ethics are not any traditional ethics in concrete cases. But I also investigate the normative rea...
The Role of the Harmonic Vector Average in Motion Integration
Directory of Open Access Journals (Sweden)
Alan eJohnston
2013-10-01
The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition, and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection-of-constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, together with a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy, since it increases with the number of elements. The vector average over local vectors that vary in direction always underestimates the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection-of-constraints direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
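The HVA claim can be illustrated numerically. One common way to realize a harmonic vector average is to invert each local velocity through the unit circle (v → v/|v|²), take the arithmetic mean, and invert the result back; the sketch below (the construction and all names are illustrative assumptions, not code from the paper) shows that for normal components of a single global velocity sampled at orientations symmetric about the motion direction, this recovers the global velocity exactly, while the plain vector average underestimates the speed:

```python
import numpy as np

# Hedged sketch: harmonic vector average via inversion through the unit circle.
def hva(vs):
    inv = vs / np.sum(vs**2, axis=1, keepdims=True)  # v -> v / |v|^2
    m = inv.mean(axis=0)                             # arithmetic mean of inverses
    return m / np.dot(m, m)                          # invert back

V = np.array([2.0, 0.0])                        # true global velocity (toy value)
thetas = np.linspace(-1.0, 1.0, 9)              # unbiased, symmetric orientations
n = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
local_vs = (n @ V)[:, None] * n                 # normal components (V·n) n

va = local_vs.mean(axis=0)                      # plain vector average
print(np.linalg.norm(va) < np.linalg.norm(V))   # True: underestimates the speed
print(np.allclose(hva(local_vs), V))            # True: HVA recovers V exactly
```

Each local normal component has speed |V|cosθ, so the inverted vectors all share the same component 1/(2|V|) along the motion direction; averaging and inverting back undoes the cosine attenuation, which is exactly the property the abstract attributes to the HVA.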
Stochastic procedures for extreme wave induced responses in flexible ships
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher; Andersen, Ingrid Marie Vincent; Seng, Sopheak
2014-01-01
estimation of extreme responses. Secondly, stochastic procedures using measured time series of responses as input are considered. The Peak-over-Threshold procedure and the Weibull fitting are applied and discussed for the extreme value predictions including possible corrections for clustering effects....
40 CFR 725.250 - Procedural requirements for the TERA.
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Procedural requirements for the TERA... and Development Activities § 725.250 Procedural requirements for the TERA. General requirements for... following requirements apply to TERAs submitted under this subpart: (a) When to submit the TERA. Each...
49 CFR 587.16 - Adhesive bonding procedure.
2010-10-01
... 49 Transportation 7 2010-10-01 2010-10-01 false Adhesive bonding procedure. 587.16 Section 587.16... Adhesive bonding procedure. Immediately before bonding, aluminum sheet surfaces to be bonded are thoroughly... the abrading process are removed, as these can adversely affect bonding. The adhesive is applied...
Measurement of the average lifetime of b hadrons
Adriani, O.; Aguilar-Benitez, M.; Ahlen, S.; Alcaraz, J.; Aloisio, A.; Alverson, G.; Alviggi, M. G.; Ambrosi, G.; An, Q.; Anderhub, H.; Anderson, A. L.; Andreev, V. P.; Angelescu, T.; Antonov, L.; Antreasyan, D.; Arce, P.; Arefiev, A.; Atamanchuk, A.; Azemoon, T.; Aziz, T.; Baba, P. V. K. S.; Bagnaia, P.; Bakken, J. A.; Ball, R. C.; Banerjee, S.; Bao, J.; Barillère, R.; Barone, L.; Baschirotto, A.; Battiston, R.; Bay, A.; Becattini, F.; Bechtluft, J.; Becker, R.; Becker, U.; Behner, F.; Behrens, J.; Bencze, Gy. L.; Berdugo, J.; Berges, P.; Bertucci, B.; Betev, B. L.; Biasini, M.; Biland, A.; Bilei, G. M.; Bizzarri, R.; Blaising, J. J.; Bobbink, G. J.; Bock, R.; Böhm, A.; Borgia, B.; Bosetti, M.; Bourilkov, D.; Bourquin, M.; Boutigny, D.; Bouwens, B.; Brambilla, E.; Branson, J. G.; Brock, I. C.; Brooks, M.; Bujak, A.; Burger, J. D.; Burger, W. J.; Busenitz, J.; Buytenhuijs, A.; Cai, X. D.; Capell, M.; Caria, M.; Carlino, G.; Cartacci, A. M.; Castello, R.; Cerrada, M.; Cesaroni, F.; Chang, Y. H.; Chaturvedi, U. K.; Chemarin, M.; Chen, A.; Chen, C.; Chen, G.; Chen, G. M.; Chen, H. F.; Chen, H. S.; Chen, M.; Chen, W. Y.; Chiefari, G.; Chien, C. Y.; Choi, M. T.; Chung, S.; Civinini, C.; Clare, I.; Clare, R.; Coan, T. E.; Cohn, H. O.; Coignet, G.; Colino, N.; Contin, A.; Costantini, S.; Cotorobai, F.; Cui, X. T.; Cui, X. Y.; Dai, T. S.; D'Alessandro, R.; de Asmundis, R.; Degré, A.; Deiters, K.; Dénes, E.; Denes, P.; DeNotaristefani, F.; Dhina, M.; DiBitonto, D.; Diemoz, M.; Dimitrov, H. R.; Dionisi, C.; Ditmarr, M.; Djambazov, L.; Dova, M. T.; Drago, E.; Duchesneau, D.; Duinker, P.; Duran, I.; Easo, S.; El Mamouni, H.; Engler, A.; Eppling, F. J.; Erné, F. C.; Extermann, P.; Fabbretti, R.; Fabre, M.; Falciano, S.; Fan, S. J.; Fackler, O.; Fay, J.; Felcini, M.; Ferguson, T.; Fernandez, D.; Fernandez, G.; Ferroni, F.; Fesefeldt, H.; Fiandrini, E.; Field, J. H.; Filthaut, F.; Fisher, P. 
H.; Forconi, G.; Fredj, L.; Freudenreich, K.; Friebel, W.; Fukushima, M.; Gailloud, M.; Galaktionov, Yu.; Gallo, E.; Ganguli, S. N.; Garcia-Abia, P.; Gele, D.; Gentile, S.; Gheordanescu, N.; Giagu, S.; Goldfarb, S.; Gong, Z. F.; Gonzalez, E.; Gougas, A.; Goujon, D.; Gratta, G.; Gruenewald, M.; Gu, C.; Guanziroli, M.; Guo, J. K.; Gupta, V. K.; Gurtu, A.; Gustafson, H. R.; Gutay, L. J.; Hangarter, K.; Hartmann, B.; Hasan, A.; Hauschildt, D.; He, C. F.; He, J. T.; Hebbeker, T.; Hebert, M.; Hervé, A.; Hilgers, K.; Hofer, H.; Hoorani, H.; Hu, G.; Hu, G. Q.; Ille, B.; Ilyas, M. M.; Innocente, V.; Janssen, H.; Jezequel, S.; Jin, B. N.; Jones, L. W.; Josa-Mutuberria, I.; Kasser, A.; Khan, R. A.; Kamyshkov, Yu.; Kapinos, P.; Kapustinsky, J. S.; Karyotakis, Y.; Kaur, M.; Khokhar, S.; Kienzle-Focacci, M. N.; Kim, J. K.; Kim, S. C.; Kim, Y. G.; Kinnison, W. W.; Kirkby, A.; Kirkby, D.; Kirsch, S.; Kittel, W.; Klimentov, A.; Klöckner, R.; König, A. C.; Koffeman, E.; Kornadt, O.; Koutsenko, V.; Koulbardis, A.; Kraemer, R. W.; Kramer, T.; Krastev, V. R.; Krenz, W.; Krivshich, A.; Kuijten, H.; Kumar, K. S.; Kunin, A.; Landi, G.; Lanske, D.; Lanzano, S.; Lebedev, A.; Lebrun, P.; Lecomte, P.; Lecoq, P.; Le Coultre, P.; Lee, D. M.; Lee, J. S.; Lee, K. Y.; Leedom, I.; Leggett, C.; Le Goff, J. M.; Leiste, R.; Lenti, M.; Leonardi, E.; Li, C.; Li, H. T.; Li, P. J.; Liao, J. Y.; Lin, W. T.; Lin, Z. Y.; Linde, F. L.; Lindemann, B.; Lista, L.; Liu, Y.; Lohmann, W.; Longo, E.; Lu, Y. S.; Lubbers, J. M.; Lübelsmeyer, K.; Luci, C.; Luckey, D.; Ludovici, L.; Luminari, L.; Lustermann, W.; Ma, J. M.; Ma, W. G.; MacDermott, M.; Malik, R.; Malinin, A.; Maña, C.; Maolinbay, M.; Marchesini, P.; Marion, F.; Marin, A.; Martin, J. P.; Martinez-Laso, L.; Marzano, F.; Massaro, G. G. G.; Mazumdar, K.; McBride, P.; McMahon, T.; McNally, D.; Merk, M.; Merola, L.; Meschini, M.; Metzger, W. J.; Mi, Y.; Mihul, A.; Mills, G. 
B.; Mir, Y.; Mirabelli, G.; Mnich, J.; Möller, M.; Monteleoni, B.; Morand, R.; Morganti, S.; Moulai, N. E.; Mount, R.; Müller, S.; Nadtochy, A.; Nagy, E.; Napolitano, M.; Nessi-Tedaldi, F.; Newman, H.; Neyer, C.; Niaz, M. A.; Nippe, A.; Nowak, H.; Organtini, G.; Pandoulas, D.; Paoletti, S.; Paolucci, P.; Pascale, G.; Passaleva, G.; Patricelli, S.; Paul, T.; Pauluzzi, M.; Paus, C.; Pauss, F.; Pei, Y. J.; Pensotti, S.; Perret-Gallix, D.; Perrier, J.; Pevsner, A.; Piccolo, D.; Pieri, M.; Piroué, P. A.; Plasil, F.; Plyaskin, V.; Pohl, M.; Pojidaev, V.; Postema, H.; Qi, Z. D.; Qian, J. M.; Qureshi, K. N.; Raghavan, R.; Rahal-Callot, G.; Rancoita, P. G.; Rattaggi, M.; Raven, G.; Razis, P.; Read, K.; Ren, D.; Ren, Z.; Rescigno, M.; Reucroft, S.; Ricker, A.; Riemann, S.; Riemers, B. C.; Riles, K.; Rind, O.; Rizvi, H. A.; Ro, S.; Rodriguez, F. J.; Roe, B. P.; Röhner, M.; Romero, L.; Rosier-Lees, S.; Rosmalen, R.; Rosselet, Ph.; van Rossum, W.; Roth, S.; Rubbia, A.; Rubio, J. A.; Rykaczewski, H.; Sachwitz, M.; Salicio, J.; Salicio, J. M.; Sanders, G. S.; Santocchia, A.; Sarakinos, M. S.; Sartorelli, G.; Sassowsky, M.; Sauvage, G.; Schegelsky, V.; Schmitz, D.; Schmitz, P.; Schneegans, M.; Schopper, H.; Schotanus, D. J.; Shotkin, S.; Schreiber, H. J.; Shukla, J.; Schulte, R.; Schulte, S.; Schultze, K.; Schwenke, J.; Schwering, G.; Sciacca, C.; Scott, I.; Sehgal, R.; Seiler, P. G.; Sens, J. C.; Servoli, L.; Sheer, I.; Shen, D. Z.; Shevchenko, S.; Shi, X. R.; Shumilov, E.; Shoutko, V.; Son, D.; Sopczak, A.; Soulimov, V.; Spartiotis, C.; Spickermann, T.; Spillantini, P.; Starosta, R.; Steuer, M.; Stickland, D. P.; Sticozzi, F.; Stone, H.; Strauch, K.; Stringfellow, B. C.; Sudhakar, K.; Sultanov, G.; Sun, L. Z.; Susinno, G. F.; Suter, H.; Swain, J. D.; Syed, A. A.; Tang, X. W.; Taylor, L.; Terzi, G.; Ting, Samuel C. C.; Ting, S. M.; Tonutti, M.; Tonwar, S. C.; Tóth, J.; Tsaregorodtsev, A.; Tsipolitis, G.; Tully, C.; Tung, K. 
L.; Ulbricht, J.; Urbán, L.; Uwer, U.; Valente, E.; Van de Walle, R. T.; Vetlitsky, I.; Viertel, G.; Vikas, P.; Vikas, U.; Vivargent, M.; Vogel, H.; Vogt, H.; Vorobiev, I.; Vorobyov, A. A.; Vuilleumier, L.; Wadhwa, M.; Wallraff, W.; Wang, C.; Wang, C. R.; Wang, X. L.; Wang, Y. F.; Wang, Z. M.; Warner, C.; Weber, A.; Weber, J.; Weill, R.; Wenaus, T. J.; Wenninger, J.; White, M.; Willmott, C.; Wittgenstein, F.; Wright, D.; Wu, S. X.; Wynhoff, S.; Wysłouch, B.; Xie, Y. Y.; Xu, J. G.; Xu, Z. Z.; Xue, Z. L.; Yan, D. S.; Yang, B. Z.; Yang, C. G.; Yang, G.; Ye, C. H.; Ye, J. B.; Ye, Q.; Yeh, S. C.; Yin, Z. W.; You, J. M.; Yunus, N.; Yzerman, M.; Zaccardelli, C.; Zaitsev, N.; Zemp, P.; Zeng, M.; Zeng, Y.; Zhang, D. H.; Zhang, Z. P.; Zhou, B.; Zhou, G. J.; Zhou, J. F.; Zhu, R. Y.; Zichichi, A.; van der Zwaan, B. C. C.; L3 Collaboration
1993-11-01
The average lifetime of b hadrons has been measured using the L3 detector at LEP, running at √s ≈ M_Z. A b-enriched sample was obtained from 432538 hadronic Z events collected in 1990 and 1991 by tagging electrons and muons from semileptonic b hadron decays. From maximum likelihood fits to the electron and muon impact parameter distributions, the average b hadron lifetime was measured to be τ_b = (1535 ± 35 ± 28) fs, where the first error is statistical and the second includes both the experimental and the theoretical systematic uncertainties.
Modification of averaging process in GR: Case study flat LTB
Khosravi, Shahram; Mansouri, Reza
2007-01-01
We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.
Generalized Sampling Series Approximation of Random Signals from Local Averages
Institute of Scientific and Technical Information of China (English)
SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun
2007-01-01
Signals are often of random character: since they cannot carry any information if they are predictable for every time t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, the sampled values obtained in practice may not be the precise values of the signal X(t) at the times t_k (k ∈ Z), but only local averages of X(t) near t_k. In this paper, it is shown that a wide-sense (or weak-sense) stationary stochastic process can be approximated by a generalized sampling series with local average samples.
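The premise of local-average sampling can be made concrete with a small numeric sketch (the signal, sample point, and window sizes below are illustrative assumptions, not taken from the paper): a device reports the average of x(t) over [t_k − d, t_k + d] rather than x(t_k), and for a smooth signal that average agrees with the point value up to O(d²):

```python
import numpy as np

# Hedged sketch: a local-average sample versus the exact point sample.
def local_average(x, t_k, d, n=2001):
    """Approximate the average of x over [t_k - d, t_k + d] on a fine grid."""
    ts = np.linspace(t_k - d, t_k + d, n)
    return x(ts).mean()

x = lambda t: np.sin(2 * np.pi * t)   # toy "signal" (assumption)
t_k = 0.3
errs = [abs(local_average(x, t_k, d) - x(t_k)) for d in (0.1, 0.05, 0.025)]
# Halving the averaging window d roughly quarters the error (second order in d),
# which is why local averages are usable surrogates for the exact samples.
```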
Light shift averaging in paraffin-coated alkali vapor cells
Zhivun, Elena; Sudyka, Julia; Pustelny, Szymon; Patton, Brian; Budker, Dmitry
2015-01-01
Light shifts are an important source of noise and systematics in optically pumped magnetometers. We demonstrate that the long spin coherence time in paraffin-coated cells leads to spatial averaging of the light shifts over the entire cell volume. This renders the averaged light shift independent, under certain approximations, of the light-intensity distribution within the sensor cell. These results and the underlying mechanism can be extended to other spatially varying phenomena in anti-relaxation-coated cells with long coherence times.
Quantum state discrimination using the minimum average number of copies
Slussarenko, Sergei; Li, Jun-Gang; Campbell, Nicholas; Wiseman, Howard M; Pryde, Geoff J
2016-01-01
In the task of discriminating between nonorthogonal quantum states from multiple copies, the key parameters are the error probability and the resources (number of copies) used. Previous studies have considered the task of minimizing the average error probability for fixed resources. Here we consider minimizing the average resources for a fixed admissible error probability. We derive a detection scheme optimized for the latter task, and experimentally test it, along with schemes previously considered for the former task. We show that, for our new task, our new scheme outperforms all previously considered schemes.
THEORETICAL CALCULATION OF THE RELATIVISTIC SUBCONFIGURATION-AVERAGED TRANSITION ENERGIES
Institute of Scientific and Technical Information of China (English)
张继彦; 杨向东; 杨国洪; 张保汉; 雷安乐; 刘宏杰; 李军
2001-01-01
A method for calculating the average energies of relativistic subconfigurations in highly ionized heavy atoms has been developed in the framework of the multiconfigurational Dirac-Fock theory. The method is then used to calculate the average transition energies of the spin-orbit-split 3d-4p transition of Co-like tungsten, the 3d-5f transition of Cu-like tantalum, and the 3d-5f transitions of Cu-like and Zn-like gold samples. The calculated results are in good agreement with those calculated with the relativistic parametric potential method and also with the experimental results.
HAT AVERAGE MULTIRESOLUTION WITH ERROR CONTROL IN 2-D
Institute of Scientific and Technical Information of China (English)
Sergio Amat
2004-01-01
Multiresolution representations of data are a powerful tool in data compression. For proper adaptation to singularities, it is crucial to develop nonlinear methods that are not based on tensor products. The hat-average framework permits the development of schemes adapted to all types of singularities. In contrast with the wavelet framework, these representations cannot be considered as a change of basis, and the stability theory requires different considerations. In this paper, nonseparable two-dimensional hat-average multiresolution processing algorithms that ensure stability are introduced. Explicit error bounds are presented.
MAIN STAGES OF THE SCIENTIFIC AND PRODUCTION MASTERING OF THE AVERAGE URAL TERRITORY
Directory of Open Access Journals (Sweden)
V.S. Bochko
2006-09-01
Questions of the formation of the Average Ural as an industrial territory, on the basis of its scientific study and industrial development, are considered in the article. It is shown that the resources of the Urals and the particular features of the life of its population were studied by Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to a systematic organizational-economic study of the productive forces, society, and nature of the Average Ural. More attention is paid to the new problems of the region and to the need for their scientific solution.
[Sedation using ketamine for pain procedures in Pediatric Oncology.].
Ricard, C; Tichit, R; Troncin, R; Bernard, F
2009-09-01
Procedural sedation and analgesia for children is widely practiced. From 2005 to 2007, we evaluated the safety and efficacy of ketamine to control pain induced by diagnostic procedures in pediatric oncology patients. Eight hundred fifty procedures were carried out in 125 patients aged 2 to 16 years. We combined EMNO (inhaled equimolar mixture of nitrous oxide and oxygen), atropine (oral or rectal), midazolam (oral or rectal), and ketamine (intravenous). An anesthesiologist injected the ketamine. The average dose of ketamine was 0.33 to 2 mg/kg, depending on the number and invasiveness of the procedures. This method requires careful monitoring and proper precautions. Under these conditions, no complication was observed. All patients were effectively sedated. These results indicate that ketamine, in association with EMNO, atropine, and midazolam, is safe and effective in the management of pain induced by diagnostic procedures in pediatric oncology patients. The sedative regimen of intravenous ketamine has greatly reduced patient, family, and practitioner anxiety during diagnostic and therapeutic procedures.
Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN
Quinlan, Jesse; McDaniel, James; Baurle, Robert A.
2013-01-01
Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.
PREDICTION OF LACTATION YIELD FROM LAST-RECORD DAY AND AVERAGE DAILY YIELD IN NILI-RAVI BUFFALOES
Directory of Open Access Journals (Sweden)
M. S. Khan, A. U. Hyder, I. R. Bajwa, M. S. Rehman and F. Hassan
2005-10-01
Different adjustment procedures were compared to determine whether prediction of lactation milk yield from last-record-day information could be improved by using information on the average daily milk yield of the recorded lactation. Weekly milk yield records of 993 Nili-Ravi buffaloes for 2704 lactations were used for the study. Comparison of different procedures for adjusting lactation milk yield from partial/incomplete or complete lactations indicated that milk yield predicted from a linear regression equation, or from last-test-day information, was higher than the actual milk yield due to extrapolation to a higher base. The simple linear regression procedure overestimated the yield, especially in the later part of the lactation curve. The most precise adjustments were obtained when the last test day and the average daily milk yield were included as predictors. The standard deviation of the bias decreased and the correlation between actual and predicted lactation milk yield improved with the inclusion of average daily milk yield as a predictor along with the last-test-day milk yield. Last recorded milk yield, together with the average daily yield of the recorded lactation period, is therefore suggested for use in the standardization of milk yield data in Nili-Ravi buffaloes.
Fornai, Carla; Madotto, Fabiana; Romanelli, Anna; Pepe, Pasquale; Raciti, Mauro; Tancioni, Valeria; Chini, Francesco; Trerotoli, Paolo; Bartolomeo, Nicola; Serio, Gabriella; Cesana, Giancarlo; Corrao, Giovanni
2008-01-01
Abstract Objective: to compare record linkage (RL) procedures adopted in several Italian settings and a standard probabilistic RL procedure for matching data from electronic health care databases. Design: two health care archives are matched: the hospital discharges (HD) archive and the population registry of four Italian areas. Exact deterministic, stepwise deterministic techniques and a standard probabilistic RL procedure are applied to match HD for acute myocardial infarction (AMI) and dia...
Directory of Open Access Journals (Sweden)
AA. Khoshkhonejad
1994-06-01
Nowadays, thanks to recent developments and research in dental science, it is possible to preserve and restore previously extracted cases, such as teeth with extensive caries, fractured teeth, cases less appropriate for crown coverage, and teeth with external perforation caused by restorative pins. In order to restore teeth while preserving the periodontium, we should thoroughly know the physiological aspects of the periodontium and protect the biologic width, which is formed by the epithelial and supracrestal connective-tissue attachments. Respecting the biologic width is one of the principal rules of tooth restoration; otherwise we may destroy the periodontal tissues. Several factors are involved in placing a restoration, and one of the most important is where the restoration margin is terminated. Many studies have been conducted on the possible effects of the restoration margin on the gingiva, and their results led to the conclusion that the restoration margin should be finished supragingivally. However, when we have to end the restoration below the gingival crest, a healthy gingival sulcus is required first; in addition, we should not invade the biologic width. Since a normal biologic width is reported to be 2 mm and sound tooth tissue should be placed at least 2 mm coronal to the epithelial tissue, the distance between sound tooth tissue and the crown margin should be at least 4 mm. Thus, performing crown lengthening is essential to increase the clinical crown length. Basically, two objectives are considered: (1) restorative and (2) esthetic (gummy smile). The surgical procedure includes gingivectomy and flap procedures. The orthodontic procedure involves orthodontic extrusion, or the forced-eruption technique, which is the controlled vertical movement of teeth into occlusion; this procedure can also be used to extrude tooth defects from the gingival tissue. With crown lengthening, tooth extraction is not required and furthermore, adjacent teeth preparation for placing a fixed
Implementation of a new picking procedure in the Antelope software
Tiberi, Lara; Costa, Giovanni; Spallarossa, Daniele
2014-05-01
Automatic estimation of earthquake parameters continues to be of considerable interest to the seismological community. In this study we present a new automatic procedure for quasi-real-time location of events. This procedure is a combination of the solid and tested Antelope software with a new picking procedure, the AutoPicker (DipTeRiS, University of Genova). The Antelope picking procedure consists of: a) prefiltering into different frequency pass bands; b) running STA/LTA detectors on one or more channels of the waveform data; c) associating event locations by searching over one or more spatial grids for a candidate hypocenter whose theoretical time moveouts (P and S) to each station most closely match the observations. The main characteristics of the AutoPicker picking algorithm are: a) pre-filtering and envelope calculation to prearrange the onset; b) preliminary detection of the P onset using the AIC-based picker; c) P validation, via a sample-by-sample signal-variance/noise-variance analysis; d) preliminary earthquake location; e) detection of the S onset adopting the AIC-based picker; f) S/N analysis and S validation; g) earthquake location. We have applied these two automatic procedures to the Emilia sequence that occurred in May-June 2012. In this comparison the distributions of the differences between the manual and the two automatic P onsets are comparable. The average values of the P differences are similar, but we have to point out that the AutoPicker procedure gives a lower standard deviation than Antelope and, most importantly, it picks 16% more phases than the other algorithm. For S phases the AutoPicker algorithm picks 178 phases with a mean value of 0.09 sec, compared with the 16 S phases of Antelope with a mean value of 3.75 sec. For more than 90% of events the epicentral differences of AutoPicker are less than 5 km, whereas the Antelope differences are less than 10 km. For the depth differences the mean values and the distributions of the two procedures are
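The STA/LTA detector mentioned in step (b) of the Antelope procedure can be sketched in a few lines (a hedged illustration of the classical trigger; the window lengths, threshold, and synthetic trace are invented for the example and are not Antelope's actual parameters):

```python
import numpy as np

# Hedged sketch of a classical STA/LTA trigger on signal energy,
# using trailing (causal) windows computed from a cumulative sum.
def sta_lta(x, nsta, nlta):
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))
    t = np.arange(nlta - 1, len(x))             # samples with a full LTA window
    sta = (c[t + 1] - c[t + 1 - nsta]) / nsta   # short-term average energy
    lta = (c[t + 1] - c[t + 1 - nlta]) / nlta   # long-term average energy
    return t, sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(2000)                      # background noise
trace[1200:1400] += np.sin(np.linspace(0, 40 * np.pi, 200))  # synthetic "event"
t, ratio = sta_lta(trace, nsta=20, nlta=400)
onset = int(t[np.argmax(ratio > 4.0)])   # first sample above threshold, ~1200
```

A sudden increase in signal energy raises the short-term average long before it affects the long-term average, so the ratio spikes at the arrival; real pickers then refine this rough onset, e.g. with the AIC-based step described above.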
Energy Technology Data Exchange (ETDEWEB)
Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)
1995-10-01
This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on the average.
Loop electrosurgical excisional procedure.
Mayeaux, E J; Harper, M B
1993-02-01
Loop electrosurgical excisional procedure, or LEEP, also known as loop diathermy treatment, loop excision of the transformation zone (LETZ), and large loop excision of the transformation zone (LLETZ), is a new technique for outpatient diagnosis and treatment of dysplastic cervical lesions. This procedure produces good specimens for cytologic evaluation, carries a low risk of affecting childbearing ability, and is likely to replace cryotherapy or laser treatment for cervical neoplasias. LEEP uses low-current, high-frequency electrical generators and thin stainless steel or tungsten loops to excise either lesions or the entire transformation zone. Complication rates are comparable to those of cryotherapy or laser treatment and include bleeding, incomplete removal of the lesion, and cervical stenosis. Compared with other methods, the advantages of LEEP include removal of abnormal tissue in a manner permitting cytologic study, low cost, ease of acquiring necessary skills, and the ability to treat lesions with fewer visits. Patient acceptance of the procedure is high. Widespread use of LEEP by family physicians can be expected.
Directory of Open Access Journals (Sweden)
Diana Marin
2013-10-01
Values of average daily gain of weight are calculated as the ratio of total weight gain to the total number of feeding days. For the four intensively reared commercial hybrids, the applied test showed no statistically significant differences in average daily gain, although the lowest values of this index were recorded in hybrid B (with Large White as terminal boar).
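The ratio defined above is straightforward arithmetic; a minimal sketch with hypothetical weights (the source reports no individual figures) is:

```python
# Average daily gain (ADG): total weight gain divided by total feeding days.
# The weights and day count below are hypothetical, for illustration only.
def average_daily_gain(start_weight_kg, end_weight_kg, feeding_days):
    """Return ADG in kg/day."""
    return (end_weight_kg - start_weight_kg) / feeding_days

adg = average_daily_gain(start_weight_kg=30.0, end_weight_kg=110.0,
                         feeding_days=100)
print(f"{adg:.2f} kg/day")  # 0.80 kg/day
```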
A Formula of Average Path Length for Unweighted Networks
Institute of Scientific and Technical Information of China (English)
LIU Chun-Ping; LIU Yu-Rong; HE Da-Ren; ZHU Lu-Jin
2008-01-01
In this paper, based on the adjacency matrix of the network and its powers, the formulas are derived for the shortest path and the average path length, and an effective algorithm is presented. Furthermore, an example is provided to demonstrate the proposed method.
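The matrix-power idea the abstract refers to can be sketched as follows; this is a naive illustration, not the paper's exact formulas. The shortest path length d(i, j) is the smallest k with (A^k)_{ij} > 0, and the average path length is the mean of d(i, j) over all distinct node pairs of a connected graph:

```python
# Naive sketch of the adjacency-matrix-power idea (not the paper's exact
# formulas): d(i, j) is the smallest k such that (A^k)[i][j] > 0; the
# average path length is the mean of d(i, j) over distinct node pairs.
def average_path_length(adj):
    n = len(adj)
    power = [row[:] for row in adj]  # holds A^k, starting at A^1
    dist = [[0 if i == j else (1 if adj[i][j] else None) for j in range(n)]
            for i in range(n)]
    for k in range(2, n):  # a shortest path uses at most n-1 edges
        # power = power @ adj, as a plain-Python matrix product
        power = [[sum(power[i][m] * adj[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j and dist[i][j] is None and power[i][j] > 0:
                    dist[i][j] = k  # first power with a nonzero entry
    pairs = [dist[i][j] for i in range(n) for j in range(n) if i != j]
    return sum(pairs) / len(pairs)  # assumes the graph is connected

# 4-node path graph 0-1-2-3: pairwise distances 1,2,3,1,2,1 -> mean = 5/3
path4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(average_path_length(path4))
```

The repeated matrix product makes this O(n^4) at worst, so it is a didactic device rather than the "effective algorithm" the paper presents.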
Multiscale Gossip for Efficient Decentralized Averaging in Wireless Packet Networks
Tsianos, Konstantinos I
2010-01-01
This paper describes and analyzes a hierarchical gossip algorithm for solving the distributed average consensus problem in wireless sensor networks. The network is recursively partitioned into subnetworks. Initially, nodes at the finest scale gossip to compute local averages. Then, using geographic routing to enable gossip between nodes that are not directly connected, these local averages are progressively fused up the hierarchy until the global average is computed. We show that the proposed hierarchical scheme with $k$ levels of hierarchy is competitive with state-of-the-art randomized gossip algorithms, in terms of message complexity, achieving $\epsilon$-accuracy with high probability after $O\big(n \log \log n \log \frac{kn}{\epsilon} \big)$ messages. Key to our analysis is the way in which the network is recursively partitioned. We find that the optimal scaling law is achieved when subnetworks at scale $j$ contain $O(n^{(2/3)^j})$ nodes; then the message complexity at any individual scale is $O(n \log \...
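The primitive this hierarchical scheme builds on can be sketched in a few lines. The following is a minimal flat (single-scale) randomized-gossip simulation, not the paper's multiscale algorithm: at each step a randomly chosen connected pair averages its two values, and on a connected graph all values converge to the global mean. The ring graph and initial values are illustrative:

```python
import random

# Minimal flat randomized-gossip sketch (not the paper's hierarchical
# scheme): at each step a random connected pair averages its two values.
# Each exchange preserves the sum, so the global mean is invariant and
# all values converge to it on a connected graph.
def gossip_average(values, edges, steps=10000, seed=0):
    rng = random.Random(seed)
    vals = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)        # pick a random communicating pair
        avg = (vals[i] + vals[j]) / 2.0  # pairwise exchange and average
        vals[i] = vals[j] = avg
    return vals

# Ring of 5 nodes holding values 0..4; the global mean is 2.0
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
final = gossip_average([0.0, 1.0, 2.0, 3.0, 4.0], edges)
print(final)  # every entry close to 2.0
```

The paper's contribution is the message complexity: fusing such local averages up a geographic hierarchy beats flat gossip's cost on large networks.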