WorldWideScience

Sample records for average procedures applied

  1. Effects of Video-Based and Applied Problems on the Procedural Math Skills of Average- and Low-Achieving Adolescents.

    Science.gov (United States)

    Bottge, Brian A.; Heinrichs, Mary; Chan, Shih-Yi; Mehta, Zara Dee; Watson, Elizabeth

    2003-01-01

    This study examined effects of video-based, anchored instruction and applied problems on the ability of 11 low-achieving (LA) and 26 average-achieving (AA) eighth graders to solve computation and word problems. Performance for both groups was higher during anchored instruction than during baseline, but no differences were found between instruction…

  2. Photogrammetry procedures applied to anthropometry.

    Science.gov (United States)

    Okimoto, Maria Lúcialeite Ribeiro; Klein, Alison Alfred

    2012-01-01

    This study aims to evaluate the reliability of digital photogrammetry and to establish procedures for its use in anthropometric measurements of the human hand. The methodology included the construction of a platform that keeps the hand at a fixed distance from the camera lens, annulling the effects of parallax. We developed software to perform the measurements from the images and built a proof object cast from a negative mold; this object was measured with digital photogrammetry using the data-collection platform, with a caliper, and with a coordinate measuring machine (CMM). The results of applying photogrammetry to data collection on the hand segment allow us to conclude that photogrammetry is effective, presenting a precision coefficient below 0.940, within values that are normal and acceptable given the magnitude of the data used in anthropometry. It was concluded that photogrammetry is reliable, accurate, and efficient for carrying out anthropometric surveys of a population, and presents less difficulty for in-place data collection.

  3. Spatial Averaging Combined with a Perturbation/Iteration Procedure

    Directory of Open Access Journals (Sweden)

    F. E. C. Culick

    2012-09-01

    … have caused some confusion. The paper ends with a brief discussion answering a serious criticism of the method, now nearly fifteen years old. The basis for the criticism, arising from the solution to a relatively simple problem, is shown to be the result of omitting a term that arises when the average density in a flow changes abruptly. Presently, there is no known problem of combustion instability for which the kind of analysis discussed here is not applicable. The formalism is general; much effort is generally required to apply the analysis to a particular problem. A particularly significant point, not elaborated here, is the inextricable dependence on expansion of the equations and their boundary conditions in two small parameters, measures of the steady and unsteady flows. Whether or not those Mach numbers are actually 'small' in fact is really beside the point. Work out applications of the method as if they were! Then, to get more accurate results, resort to some form of CFD. It is a huge practical point that the approach taken and advocated here cannot be expected to give precise results; but however accurate they may be, they will be obtained with relative ease and will always be instructive. In any case, the expansions must be carried out carefully, with faithful attention to the rules of systematic procedures; otherwise, inadvertent errors may arise from the inclusion or exclusion of contributions. I state without proof or further examples that the general method discussed here has been quite well and widely tested for practical systems much more complex than those normally studied in the laboratory. Every case has shown encouraging results. Thus the lifetimes of approximate analyses developed before computing resources became commonplace seem to be very long indeed.

  4. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit-specific information, E2. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E1. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations, which gives incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
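
    The single-stage application of Bayes' theorem described in this record can be illustrated with a conjugate normal-normal sketch, in which the regional experience E1 plays the role of the prior and the deposit-specific drilling data E2 enter through the likelihood. All numbers and the normality assumption below are invented for illustration; the record does not give the Morrison formation parameters.

```python
import numpy as np

# Hedged sketch of a single-stage Bayes update for an average ore grade.
# The normal-normal conjugate form and all numbers are illustrative
# assumptions, not the parameters of the Morrison formation study.

# E1: regional experience on similar formations -> prior on mean grade (% U3O8)
mu0, var0 = 0.15, 0.02**2

# E2: deposit-specific drilling results (hypothetical assay values)
assays = np.array([0.11, 0.14, 0.19, 0.12, 0.16])
like_var = 0.03**2 / len(assays)   # sampling variance of the mean, sigma^2/n

# Normal-normal posterior: precision-weighted combination of E1 and E2
post_var = 1.0 / (1.0 / var0 + 1.0 / like_var)
post_mean = post_var * (mu0 / var0 + assays.mean() / like_var)

print(f"prior mean {mu0:.3f} -> posterior mean {post_mean:.3f} "
      f"(sd {np.sqrt(post_var):.4f})")
```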

  5. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Heneghan C

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  6. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Directory of Open Access Journals (Sweden)

    C. O'Brien

    2007-01-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.

  7. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    Science.gov (United States)

    Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.

    2007-12-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing with a sensitivity of 81% and specificity of 84%.
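
    The core of the CISA idea, removing affine time warps t -> a*t + b before averaging so that the mean reflects intrinsic shape, can be sketched in a few lines. The toy Gaussian "P-waves", the grid-search alignment, and all parameter ranges below are illustrative assumptions, not the authors' estimation algorithm:

```python
import numpy as np

# Hedged sketch of shape averaging with affine time-warp removal.
# Grid-search alignment and the synthetic signals are assumptions,
# not the CISA estimator from the paper.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def warped(a, b):
    """A Gaussian bump observed through the affine time warp a*t + b."""
    return np.exp(-((a * t + b - 0.5) ** 2) / (2 * 0.05 ** 2))

signals = np.array([warped(rng.uniform(0.9, 1.1), rng.uniform(-0.05, 0.05))
                    + 0.02 * rng.standard_normal(t.size) for _ in range(20)])

def align(sig, ref):
    """Best affine warp (a, b) by grid search; returns the aligned signal."""
    best_err, best_sig = np.inf, sig
    for a in np.linspace(0.85, 1.15, 31):
        for b in np.linspace(-0.08, 0.08, 33):
            cand = np.interp(a * t + b, t, sig)   # resample at warped times
            err = np.sum((cand - ref) ** 2)
            if err < best_err:
                best_err, best_sig = err, cand
    return best_sig

# Alternate between aligning to the mean and re-estimating the mean shape.
mean = signals.mean(axis=0)
for _ in range(3):
    aligned = np.array([align(s, mean) for s in signals])
    mean = aligned.mean(axis=0)

# Squared shape distance of each aligned signal from the mean shape.
dist = np.sum((aligned - mean) ** 2, axis=1)
print("mean squared shape distance:", dist.mean())
```

    After alignment, the per-signal squared distances from the mean are exactly the kind of shape distances the paper feeds into k-means clustering of P-waves.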

  8. Effects of measurement procedure and equipment on average room acoustic measurements

    DEFF Research Database (Denmark)

    Gade, Anders Christian; Bradley, J S; Siebein, G W

    1993-01-01

    This paper reports the results of a measurement tour of nine U.S. concert halls. Three measurement teams, from the University of Florida, the National Research Council of Canada, and the Technical University of Denmark, made parallel sets of measurements using their own equipment and procedures. In some of the halls, measurements were repeated using the procedures of the other teams, to make it possible to separate the effects of different equipment and different procedures. The paper will present position-averaged results from the three teams and will discuss reasons for the differences observed…

  9. Procedure for the characterization of radon potential in existing dwellings and to assess the annual average indoor radon concentration

    International Nuclear Information System (INIS)

    Collignan, Bernard; Powaga, Emilie

    2014-01-01

    Risk assessment for indoor radon exposure is based on the annual average indoor radon activity concentration. To assess the radon exposure in a building, measurement is generally performed over at least two months during the heating period in order to be representative of the annual average value, because the presence of radon indoors can vary strongly over time. This measurement protocol is fairly reliable but can be limiting in radon risk management, particularly during a real estate transaction, owing to the duration of the measurement and the restriction on the measurement period. A previous field study defined a rapid methodology to characterize radon entry in dwellings. The objective of this study was, first, to test this methodology in various dwellings to assess its relevance with a daily test. Second, a ventilation model was used to assess numerically the air renewal of a building, the indoor air quality throughout the year, and the annual average indoor radon activity concentration, based on local meteorological conditions, some building characteristics, and in-situ characterization of indoor pollutant emission laws. Experimental results obtained on thirteen individual dwellings showed that it is generally possible to obtain a representative characterization of radon entry into homes. It was also possible to refine the methodology defined in the previous study. In addition, numerical assessments of the annual average indoor radon activity concentration generally showed good agreement with measured values (see the mass-balance sketch following this record). These results are encouraging and suggest that a procedure with a short measurement time can be used to characterize the long-term radon potential of dwellings. - Highlights: • Test of a daily procedure to characterize radon potential in dwellings. • Numerical assessment of the annual radon concentration. • Procedure applied on thirteen dwellings, characterization generally satisfactory. • Procedure useful to manage radon risk in dwellings, for real estate transactions.
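
    The extrapolation from a short test to an annual average rests, in essence, on a mass balance between radon entry and ventilation. The following sketch assumes a constant entry rate and a hypothetical seasonal air-change profile; the study's calibrated ventilation model is far more detailed.

```python
import numpy as np

# Minimal sketch of the mass-balance reasoning behind extrapolating a short
# radon test to an annual average: indoor concentration ~ entry rate divided
# by ventilation. Entry rate and seasonal air-change profile are hypothetical.

V = 250.0                      # house volume, m^3
S = 8000.0                     # radon entry rate, Bq/h (from a short test)
hours = np.arange(8760)
# air change rate n(t), 1/h: lower in winter (closed windows), higher in summer
n = 0.4 + 0.2 * np.sin(2 * np.pi * (hours / 8760 - 0.25))

C = S / (n * V)                # quasi-steady indoor concentration, Bq/m^3
print(f"annual average: {C.mean():.0f} Bq/m^3 "
      f"(winter-like max {C.max():.0f}, summer-like min {C.min():.0f})")
```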

  10. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through the use of model averaging and random effects modeling … provides slightly conservative, yet useful, estimates of the benchmark dose lower limit under realistic scenarios…
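
    A minimal illustration of model averaging in the benchmark dose setting: fit two candidate dose-response models to continuous data, weight them by AIC, and average the per-model benchmark doses. The data, the two candidate models, and the 10% benchmark response below are invented; the paper's framework also covers random effects, which this sketch omits.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of model-averaged benchmark dose (BMD) estimation for
# continuous data. Dataset, models and the 10% benchmark response are
# illustrative assumptions, not the paper's framework or data.

dose = np.repeat([0.0, 1.0, 2.5, 5.0, 10.0], 4)
rng = np.random.default_rng(1)
resp = 10.0 * np.exp(-0.12 * dose) + rng.normal(0, 0.4, dose.size)

models = {
    "exponential": lambda d, a, b: a * np.exp(-b * d),
    "hyperbolic":  lambda d, a, b: a / (1.0 + b * d),
}

bmr = 0.10                               # 10% decrease from control response
grid = np.linspace(0.0, 10.0, 2001)
bmds, aics = [], []
for name, f in models.items():
    p, _ = curve_fit(f, dose, resp, p0=[10.0, 0.1])
    rss = float(np.sum((resp - f(dose, *p)) ** 2))
    aics.append(dose.size * np.log(rss / dose.size) + 2 * len(p))
    target = (1.0 - bmr) * f(0.0, *p)    # response level defining the BMD
    bmds.append(float(grid[np.argmin(np.abs(f(grid, *p) - target))]))

w = np.exp(-0.5 * (np.array(aics) - np.min(aics)))   # AIC weights
w /= w.sum()
print("per-model BMDs:", dict(zip(models, np.round(bmds, 2))))
print("AIC weights:", dict(zip(models, np.round(w, 2))))
print("model-averaged BMD:", round(float(np.dot(w, bmds)), 2))
```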

  11. Neural networks prediction and fault diagnosis applied to stationary and non stationary ARMA (Autoregressive moving average) modeled time series

    International Nuclear Information System (INIS)

    Marseguerra, M.; Minoggio, S.; Rossi, A.; Zio, E.

    1992-01-01

    The correlated noise affecting many industrial plants under stationary or cyclo-stationary conditions - nuclear reactors included - has been successfully modeled by autoregressive moving average (ARMA) techniques, owing to the versatility of this approach. The relatively recent neural network methods have similar features, and much effort is being devoted to exploring their usefulness in forecasting and control. Identifying a signal by means of an ARMA model gives rise to the problem of selecting its correct order. Similar difficulties must be faced when applying neural network methods; specifically, particular care must be given to setting up the appropriate network topology, the data normalization procedure, and the learning code. In the present paper the capability of some neural networks to learn ARMA and seasonal ARMA processes is investigated. The results of the tested cases look promising, since they indicate that the neural networks learn the underlying process with relative ease, so that their forecasting capability may represent a convenient fault diagnosis tool. (Author)
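
    For readers unfamiliar with the setting, the following sketch simulates an ARMA(2,1) series and fits a plain autoregressive predictor by least squares, as a stand-in for the one-step-ahead forecasting task the neural networks learn. The ARMA coefficients and the AR order are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: simulate an ARMA(2,1) process, then fit an AR(p) one-step
# predictor by least squares. Coefficients and order are assumptions; the
# paper's networks play the role of this predictor.

rng = np.random.default_rng(42)
n, phi, theta = 2000, (0.6, -0.3), 0.4       # ARMA(2,1) parameters
e = rng.standard_normal(n)
x = np.zeros(n)
for k in range(2, n):
    x[k] = phi[0] * x[k-1] + phi[1] * x[k-2] + e[k] + theta * e[k-1]

# Order selection matters here just as network topology does for the NN:
# build the lagged design matrix for an AR(p) fit.
p = 4
X = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
y = x[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
print("AR(4) coefficients:", np.round(coef, 3))
print("one-step forecast RMSE:", round(rmse, 3))
```

    The choice of the lag order p here mirrors the order-selection problem the record highlights for both ARMA models and network topologies.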

  12. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the…
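
    The rank-and-select logic described in this (truncated) abstract can be sketched as follows. The noise estimate (across-trial standard deviation divided by the square root of the number of kept trials) and the synthetic trials are illustrative assumptions, not Talsma's exact algorithm:

```python
import numpy as np

# Hedged sketch: rank single trials by how much they perturb the ERP average,
# then keep the subset whose estimated residual background noise is smallest.

rng = np.random.default_rng(7)
t = np.linspace(0, 0.8, 200)
erp = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.005)           # true ERP component
trials = erp + rng.normal(0, 2.0, (40, t.size))
trials[:5] += rng.normal(0, 15.0, (5, t.size))           # 5 artifact trials

# Rank trials by their impact on the average (RMS deviation from it).
dev = np.sqrt(np.mean((trials - trials.mean(axis=0)) ** 2, axis=1))
order = np.argsort(dev)                                  # cleanest first

def residual_noise(k):
    """Estimated residual noise if only the k cleanest trials are averaged."""
    sub = trials[order[:k]]
    return np.mean(sub.std(axis=0, ddof=1)) / np.sqrt(k)

ks = np.arange(5, len(trials) + 1)
best_k = int(ks[np.argmin([residual_noise(k) for k in ks])])
print(f"kept {best_k} of {len(trials)} trials; "
      f"noise {residual_noise(best_k):.3f} vs all-trials "
      f"{residual_noise(len(trials)):.3f}")
```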

  13. Auto-adaptive averaging: Detecting artifacts in event-related potential data using a fully automated procedure.

    NARCIS (Netherlands)

    Talsma, D.

    2008-01-01

    The auto-adaptive averaging procedure proposed here classifies artifacts in event-related potential data by optimizing the signal-to-noise ratio. This method rank orders single trials according to the impact of each trial on the ERP average. Then, the minimum residual background noise level in the…

  14. Evaluating different sedative drugs applied in procedural sedation

    Directory of Open Access Journals (Sweden)

    Azadeh Tafakori

    2014-12-01

    There are various criteria that affect the efficacy of the procedural sedation strategies required for performing different processes in emergency departments. Selecting the most effective and safest sedative, with or without analgesic effect, for each individual patient and intervention is one of the main parts of emergency department practice. Based on previous studies, various sedative agents have been proposed, with different benefits and adverse effects, including propofol, ketamine, and etomidate. Various side effects of administering each drug, alone or in combination with others, have been reported, such as vomiting, respiratory depression, hypoxia, hypotension, and cardiac arrest. In this study we aimed to briefly review the properties of the sedatives applied in different studies and also to mention a few related, properly blinded clinical trials conducted to evaluate the efficacy of sedatives in procedural sedation.

  15. An automated data quality control procedure applied to a mesoscale meteorological network

    Science.gov (United States)

    Ranci, M.; Lussana, C.

    2009-09-01

    Mesoscale meteorological networks are composed of hundreds of stations providing continuous measurements of several meteorological variables. The large amount of observations collected at the data acquisition center must be checked using automatic Data Quality Control (DQC) tests. An automated DQC procedure describes the application of each individual test and the related decision-making algorithms. The goal of a DQC procedure is to supply an efficient and powerful tool to the meteorological analyst. This work presents an automated DQC procedure and its application to the mesoscale meteorological network of Lombardia's public weather service (ARPA). In particular, the DQC procedure is applied to hourly average observations of temperature, relative humidity, wind velocity and direction, global solar radiation, net radiation, and hourly cumulated precipitation. The main idea of the DQC procedure is that each observation undergoes many different tests simultaneously, and only once all the results have been obtained is a decision taken about the observation quality (a sketch of such a decision rule follows this record). The implemented tests are variable-dependent but can be classified as plausible-value checks and temporal and spatial consistency checks. Finally, a close inspection of the DQC procedure's behavior can also be useful to identify critical parameters that can be used for network performance monitoring. The application of the DQC procedure to some case studies is reported in order to show the characteristics of the overall procedure. The procedure is still under development; nevertheless, the first results regarding its integration into operative DQC activities are very encouraging.
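
    A hypothetical miniature of the decision rule referenced above, with one test from each family applied to an hourly temperature observation. Thresholds and the good/suspect/bad rule are invented; an operational DQC would tune them per variable and per network:

```python
import numpy as np

# Illustrative sketch of plausible-value, temporal and spatial consistency
# checks; all thresholds are hypothetical.

def plausible(x, lo=-40.0, hi=50.0):
    return lo <= x <= hi                          # physically plausible range

def temporal_ok(x, prev, max_step=10.0):
    return abs(x - prev) <= max_step              # no implausible hourly jump

def spatial_ok(x, neighbors, max_dev=8.0):
    return abs(x - np.median(neighbors)) <= max_dev

def quality_flag(x, prev, neighbors):
    # Run every test first, then decide once on the observation quality.
    results = [plausible(x), temporal_ok(x, prev), spatial_ok(x, neighbors)]
    if all(results):
        return "good"
    return "suspect" if sum(results) >= 2 else "bad"

print(quality_flag(21.5, prev=20.9, neighbors=[20.0, 22.1, 21.0]))  # good
print(quality_flag(55.0, prev=20.9, neighbors=[20.0, 22.1, 21.0]))  # bad
```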

  16. A DES Procedure Applied to a Wall-Mounted Hump

    Directory of Open Access Journals (Sweden)

    Radoslav Bozinoski

    2012-01-01

    This paper describes a detached-eddy simulation (DES) of the flow over a wall-mounted hump. The Reynolds number based on the hump chord is Re_c = 9.36×10^5, with an inlet Mach number of 0.1. Solutions of the three-dimensional Reynolds-averaged Navier-Stokes (RANS) procedure are obtained using the Wilcox k−ω equations. The DES results are obtained using the model presented by Bush and Mani and are compared with RANS solutions and experimental data from NASA's 2004 Computational Fluid Dynamics Validation on Synthetic Jets and Turbulent Separation Control Workshop. The DES procedure exhibited a three-dimensional flow structure in the wake, with a 13.65% shorter mean separation region compared to RANS and a mean reattachment length that is in good agreement with experimental measurements. DES predictions of the pressure coefficient in the separation region also exhibit good agreement with experiment and are more accurate than RANS predictions.

  17. Phase-Averaged Method Applied to Periodic Flow Between Shrouded Corotating Disks

    Directory of Open Access Journals (Sweden)

    Shen-Chun Wu

    2003-01-01

    This study investigates the coherent flow fields between corotating disks in a cylindrical enclosure. By using two laser velocimeters and a phase-averaged technique, the vortical structures of the flow could be reconstructed and their dynamic behavior was observed. The experimental results reveal clearly that the flow field between the disks is composed of three distinct regions: an inner region near the hub, an outer region, and a shroud boundary layer region. The outer region is distinguished by the presence of large vortical structures. The number of vortical structures corresponds to the normalized frequency of the flow.

  18. Applied field test procedures on petroleum release sites

    International Nuclear Information System (INIS)

    Gilbert, G.; Nichols, L.

    1995-01-01

    The effective remediation of petroleum-contaminated soils and ground water is a significant issue for Williams Pipe Line Co. (Williams), costing $6.8 million in 1994. It is in Williams' best interest, then, to adopt approaches and apply technologies that are both cost-effective and compliant with regulations. Williams has found that the use of soil vapor extraction (SVE) and air sparging (AS) field test procedures at the onset of a petroleum release investigation/remediation accomplishes these goals. This paper focuses on the application of AS/SVE as the preferred technology for a specific type of remediation: refined petroleum products. In situ field tests are used prior to designing a full-scale remedial system, first to validate or disprove initial assumptions on the applicability of the technology. During the field test, remedial system design parameters are also collected to tailor the design and operation of a full-scale system to site-specific conditions, minimizing cost and optimizing effectiveness. In situ field tests should be designed and operated to simulate, as closely as possible, the operation of a full-scale remedial system. The procedures of an in situ field test are presented. The results of numerous field tests and the associated costs are also evaluated and compared to full-scale remedial systems and total project costs to demonstrate overall effectiveness. There are many advantages of AS/SVE technologies over conventional fluid extraction or SVE systems alone. However, the primary advantage is the ability to simultaneously reduce volatile and biodegradable compound concentrations in the phreatic, capillary fringe, and unsaturated zones.

  19. Applying computer-based procedures in nuclear power plants

    International Nuclear Information System (INIS)

    Oliveira, Mauro V. de; Carvalho, Paulo V.R. de; Santos, Isaac J.A.L. dos; Grecco, Claudio H.S.; Bruno, Diego S.

    2009-01-01

    Plant operation procedures are used to guide operators in coping with normal, abnormal, or emergency situations in a process control system. Historically, plant procedures have been paper-based (PBPs); with the digitalisation trend in these complex systems, computer-based procedures (CBPs) are being developed to support procedure use. This work briefly describes the research on CBPs at the Human-System Interface Laboratory (LABIHS). The emergency operation procedure EOP-0 of the LABIHS NPP simulator was implemented in the ImPRO CBP system. The ImPRO system was chosen for testing because it is available for download on the Internet. A preliminary operation test using the procedure implemented in the CBP system was carried out, and the results were compared to operation using the PBP. (author)

  20. Combining experts' risk judgments on technology performance of phytoremediation: self-confidence ratings, averaging procedures, and formative consensus building.

    Science.gov (United States)

    Scholz, Roland W; Hansmann, Ralf

    2007-02-01

    Expert panels and averaging procedures are common means of coping with the uncertainty of the effects of technology application in complex environments. We investigate the connection between confidence and the validity of expert judgment. Moreover, a formative consensus building procedure (FCB) is introduced that generates probability statements on the performance of technologies, and we compare different algorithms for the statistical aggregation of individual judgments. The case study refers to an expert panel of 10 environmental scientists assessing the performance of a soil cleanup technology that uses the capability of certain plants to accumulate heavy metals from the soil in the plant body (phytoremediation). The panel members first provided individual statements on the effectiveness of phytoremediation. Such statements can support policymakers in answering questions concerning the expected performance of the new technology in contaminated areas. The present study reviews (1) the steps of the FCB, (2) the constraints of technology application (contaminants, soil structure, etc.), (3) the measurement of expert knowledge, (4) the statistical averaging and the discursive agreement procedures, and (5) the boundaries of application of the FCB method. The quantitative, statement-oriented part of FCB generates terms such as: "The probability that the concentration of soil contamination will be reduced by at least 50% is 0.8." The data suggest that taking the median of the individual expert estimates provides the most accurate aggregated estimate. The discursive agreement procedure of FCB appears suitable for deriving politically relevant singular statements rather than for obtaining comprehensive information about uncertainties as represented by probability distributions.
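
    The aggregation comparison can be reproduced in miniature; the ten expert estimates below are invented, and the record's empirical finding is that the median gave the most accurate aggregate:

```python
import numpy as np

# Tiny illustration of combining ten experts' probability estimates by mean,
# median and trimmed mean. The estimates are invented for illustration.

estimates = np.array([0.60, 0.75, 0.80, 0.85, 0.80,
                      0.90, 0.70, 0.80, 0.95, 0.40])

mean = estimates.mean()
median = np.median(estimates)
trimmed = np.sort(estimates)[1:-1].mean()     # drop one high, one low

print(f"mean={mean:.2f}  median={median:.2f}  trimmed mean={trimmed:.2f}")
```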

  1. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, Andrew P.; Kabilan, Senthil; Carson, James P.; Corley, Richard A.; Einstein, Daniel R.

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton’s Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple…
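
    Schematically, the coupling works by defining an interface residual that both subsystems must satisfy and driving it to zero with a Newton-like iteration. In the sketch below, the two flow functions are invented stand-ins for the 3D CFD and lower-dimensional ODE subsystems, and a secant update stands in for the paper's Newton method with nonlinear Krylov acceleration:

```python
import numpy as np

# Schematic of a "pressure-drop"-style coupling residual: the flow delivered
# by one subsystem must match the flow accepted by the other at the shared
# interface. Both flow laws below are invented stand-ins, and the secant
# iteration is a stand-in for the paper's accelerated Newton solver.

def q_cfd(p):        # flow delivered by the "3D" model at interface pressure p
    return 2.0 * np.sqrt(max(p, 0.0))

def q_ode(p):        # flow accepted by the "0D" compliant airway model
    return 5.0 - 0.8 * p

def residual(p):     # coupling residual: flows must match at the interface
    return q_cfd(p) - q_ode(p)

# Secant (quasi-Newton) iteration on the interface pressure.
p0, p1 = 1.0, 2.0
for it in range(20):
    r0, r1 = residual(p0), residual(p1)
    p0, p1 = p1, p1 - r1 * (p1 - p0) / (r1 - r0)
    if abs(residual(p1)) < 1e-10:
        break

print(f"converged interface pressure p = {p1:.6f} after {it + 1} iterations, "
      f"residual = {residual(p1):.2e}")
```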

  2. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kuprat, A.P., E-mail: andrew.kuprat@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Kabilan, S., E-mail: senthil.kabilan@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Carson, J.P., E-mail: james.carson@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Corley, R.A., E-mail: rick.corley@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States); Einstein, D.R., E-mail: daniel.einstein@pnnl.gov [Fundamental and Computational Sciences Directorate, Pacific Northwest National Laboratory, Richland, WA (United States)

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural…

  3. A bidirectional coupling procedure applied to multiscale respiratory modeling

    International Nuclear Information System (INIS)

    Kuprat, A.P.; Kabilan, S.; Carson, J.P.; Corley, R.A.; Einstein, D.R.

    2013-01-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the modified Newton’s method with nonlinear Krylov accelerator developed by Carlson and Miller [1], Miller [2] and Scott and Fenves [3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a “pressure-drop” residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD–ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural…

  4. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell.

  5. 49 CFR 40.383 - What procedures apply if you contest the issuance of a PIE?

    Science.gov (United States)

    2010-10-01

    § 40.383 (Transportation, Office of the Secretary of Transportation): What procedures apply if you contest the issuance of a PIE? (a) DOT conducts PIE proceedings in a fair…

  6. 34 CFR 370.43 - What requirement applies to the use of mediation procedures?

    Science.gov (United States)

    2010-07-01

    § 370.43 (Education): What requirement applies to the use of mediation procedures? (a) Each designated agency shall implement procedures designed to ensure that, to the maximum extent possible, good faith negotiations and mediation procedures are…

  7. Carvedilol population pharmacokinetic analysis – applied validation procedure

    Directory of Open Access Journals (Sweden)

    Aleksandra Catić-Đorđević

    2013-09-01

    Carvedilol is a nonselective beta blocker/alpha-1 blocker used for the treatment of essential hypertension, chronic stable angina, unstable angina, and ischemic left ventricular dysfunction. The aim of this study was to describe a carvedilol population pharmacokinetic (PK) analysis as well as the validation of the analytical procedure, which is an important step in this approach. In contemporary clinical practice, population PK analysis is often more important than the standard PK approach in setting up a mathematical model that describes the PK parameters. It also includes the variables that have particular importance in the drug's pharmacokinetics, such as sex, body mass, dosage, pharmaceutical form, pathophysiological state, disease associated with the organism, or the presence of a specific polymorphism in the isoenzyme important for biotransformation of the drug. One of the most frequently used approaches in population PK analysis is Nonlinear Mixed Effects Modeling (NONMEM). The analytical methods used in the data collection period are of great importance for the implementation of a population PK analysis of carvedilol, in order to obtain reliable data that can be useful in clinical practice. High-performance liquid chromatography (HPLC) analysis of carvedilol is used to confirm the identity of the drug, provide quantitative results, and monitor the efficacy of the therapy. Analytical procedures used in other studies could not be fully implemented in our research, as it was necessary to perform certain modifications and validation of the method with the aim of using the obtained results for the purpose of a population pharmacokinetic analysis. The validation process is the logical terminal phase of analytical procedure development that establishes the applicability of the procedure itself. The goal of validation is to ensure the consistency of the method and the accuracy of results, and to confirm the selection of the analytical method for a given sample

  8. Outcomes of the Remplissage Procedure and Its Effects on Return to Sports: Average 5-Year Follow-up.

    Science.gov (United States)

    Garcia, Grant H; Wu, Hao-Hua; Liu, Joseph N; Huffman, G Russell; Kelly, John D

    2016-05-01

    Short-term outcomes for patients with large, engaging Hill-Sachs lesions who underwent remplissage have demonstrated good results. However, limited data are available on longer term outcomes. Purpose: to evaluate the long-term outcomes of remplissage and determine the long-term rate of return to specific sports postoperatively. Case series; Level of evidence, 4. This was a retrospective review of patients treated with the remplissage procedure from 2007 to 2013. All underwent preoperative magnetic resonance imaging demonstrating large Hill-Sachs lesions by the Rowe criteria and glenoid bone loss …; return to sports, employment, physical activities, and dislocation events were assessed. A total of 50 patients (51 shoulders) were included in the study. The average patient age at surgery was 29.8 years (range, 15.0-72.4 years), and the average follow-up time was 60.7 months (range, 25.5-97.6 months); 20.0% of patients had undergone previous surgery on their shoulder. The average postoperative WOSI score was 79.5%, and the average ASES score was 89.3. Six shoulders had dislocation events (11.8%) postoperatively: 3 were traumatic, and 3 were atraumatic. A greater number of preoperative dislocations led to a greater risk of a postoperative dislocation (P …). The rate of return to sports was 95.5% of patients at an average of 7.0 months postoperatively; 81.0% returned to their previous intensity and level of sport. Of patients who played a throwing sport, 65.5% (n = 19) stated that they had problems throwing, and 58.6% (n = 17) felt that they could not wind up normally to throw a ball. Direct rates of return to overhead sports were: volleyball, 100%; basketball, 69%; baseball, 50%; and football, 50%. The redislocation rate after remplissage was 11.8% at an average of 5 years, with 95.5% of patients returning to full sports at an average of 7 months. For throwing sports, 65.5% of patients complained of decreased range of motion during throwing. These results should be considered preoperatively for candidates for remplissage who are engaged in…

  9. 42 CFR 137.373 - Do Federal real property laws, regulations and procedures that apply to the Secretary also apply...

    Science.gov (United States)

    2010-10-01

    § 137.373 (Public Health, Health and Human Services, Tribal Self-Governance, Construction): Do Federal real property laws, regulations and procedures that apply to the Secretary also apply to Self-Governance Tribes that purchase real…

  10. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT

    International Nuclear Information System (INIS)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-01-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described, including a description of the high voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered a chamber-test handbook for beginners. (Author) 3 refs

  11. 20 CFR 408.1045 - What procedures apply if you request an ALJ hearing?

    Science.gov (United States)

    2010-04-01

    ... rules. For purposes of this part, we use the same rules on ALJ hearing procedures that we use in the... section. (b) Exceptions. (1) In § 416.1446(b)(1), the last sentence does not apply under this part. (2) In...

  12. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    Science.gov (United States)

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  13. 14 CFR 382.127 - What procedures apply to stowage of battery-powered mobility aids?

    Science.gov (United States)

    2010-01-01

    § 382.127 (Aeronautics and Space, Office of the Secretary, Department of Transportation; Disability in Air Travel; Stowage of Wheelchairs, Other Mobility Aids, and Other Assistive Devices): What procedures apply to stowage of battery-powered mobility aids? (a) Whenever baggage compartment…

  14. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia, E-mail: menezes@cdtn.b, E-mail: dovenir@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Lab. de Ativacao Neutronica (Brazil)

    2009-07-01

    The neutron activation technique - using several methods - has been applied in 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the responsibility of the Laboratory to provide and assure the quality of the measurements. The first step in assuring the quality of results is the preparation of the samples. Therefore, this paper describes the experimental procedures adopted at CDTN/CNEN in order to standardize the conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience in preparing samples from many kinds of matrices. The procedures described are related to geological materials - soil, sediment, rock, gems, clay, archaeological ceramics and ore - biological materials - hair, fish, plants, food - water, etc. Analytical results for sediment samples are shown as an example, demonstrating the efficiency of the experimental procedure. (author)

  15. Pretreatment procedures applied to samples to be analysed by neutron activation analysis at CDTN/CNEN

    International Nuclear Information System (INIS)

    Francisco, Dovenir; Menezes, Maria Angela de Barros Correia

    2009-01-01

    The neutron activation technique - using several methods - has been applied in 80% of the analytical demand of the Division for Reactor and Analytical Techniques at CDTN/CNEN, Belo Horizonte, Minas Gerais. This scenario emphasizes the responsibility of the Laboratory to provide and assure the quality of the measurements. The first step in assuring the quality of results is the preparation of the samples. Therefore, this paper describes the experimental procedures adopted at CDTN/CNEN in order to standardize the conditions of analysis and to avoid contamination by elements present everywhere. Some of the procedures are based on methods described in the literature; others are based on many years of experience in preparing samples from many kinds of matrices. The procedures described are related to geological materials - soil, sediment, rock, gems, clay, archaeological ceramics and ore - biological materials - hair, fish, plants, food - water, etc. Analytical results for sediment samples are shown as an example, demonstrating the efficiency of the experimental procedure. (author)

  16. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    Science.gov (United States)

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate the radioactivity of spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides; a worked example of where the CSF enters the activity calculation follows this record. The results of the radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" status in the proficiency test.
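
    The worked example promised above: a gamma-spectrometric activity calculation with a coincidence summing correction. All numbers are invented, and the convention that the correction divides the denominator is an assumption for illustration; the study itself computes the CSF with MCNP-CP and validates it against ETNA.

```python
# Worked numeric example of where a coincidence summing correction factor
# (CSF) enters a gamma-spectrometric activity calculation. All numbers are
# invented for illustration.

net_counts = 12500.0      # net full-energy peak area
t_live = 3600.0           # live time, s
eff = 0.045               # full-energy peak efficiency at this energy
p_gamma = 0.85            # gamma emission probability
csf = 0.92                # coincidence summing correction factor (assumed <1:
                          # summing-out removed counts from the peak)

# Activity in Bq; with this convention, dividing by the CSF restores the
# counts lost to true-coincidence summing.
activity = net_counts / (t_live * eff * p_gamma * csf)
print(f"A = {activity:.1f} Bq")
```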

  17. Containment integrity and leak testing. Procedures applied and experiences gained in European countries

    International Nuclear Information System (INIS)

    1987-01-01

    Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid, and solid radioactive materials produced in normal operation and not retained in process systems, and for holding back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the containment concepts currently most used in European countries. The leak-testing procedures applied and the experience gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing, and methods for extrapolating leak rates measured at test conditions to expected leak rates at calculated accident conditions. Current problems in periodic containment leak rate testing are critically reviewed. The appendix to the report summarizes the regulations and specifications applied in the different member countries.

  18. Resolution on procedures for applying for work permits to undertake professional training, 10 October 1988.

    Science.gov (United States)

    1988-01-01

    This Resolution contains procedures for applying for professional training work permits. It provides that those foreigners who wish to come to Spain for a limited period of time in order to become fully qualified in Spanish commercial and professional customs while occupying a training work position are eligible to apply for this permit. The permit is not valid for more than 12 months, although in exceptional cases the permit will be extended for an additional six months. After the period of training has ended, the workers may not remain in Spain in order to exercise another activity. Further provisions of the Resolution deal with when and where applications are to be presented, the form of applications and documentation accompanying applications, and notification of decisions on applications, among other things.

  19. Development of Coring Procedures Applied to Si, CdTe, and CIGS Solar Panels

    Energy Technology Data Exchange (ETDEWEB)

    Moutinho, Helio R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Johnston, Steven [National Renewable Energy Laboratory (NREL), Golden, CO (United States); To, Bobby [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jiang, Chun Sheng [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Xiao, Chuanxiao [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hacke, Peter L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moseley, John [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tynan, Gerald D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Al-Jassim, Mowafak M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dhere, N. G. [Florida Solar Energy Center

    2018-01-04

    Most of the research on the performance and degradation of photovoltaic modules is based on macroscale measurements of device parameters such as efficiency, fill factor, open-circuit voltage, and short-circuit current. Our goal is to develop the capabilities to allow us to study the degradation of these parameters in the micro- and nanometer scale and to relate our results to performance parameters. To achieve this objective, the first step is to be able to access small samples from specific areas of the solar panels without changing the properties of the material. In this paper, we describe two coring procedures that we developed and applied to Si, CIGS, and CdTe solar panels. In the first procedure, we cored full samples, whereas in the second we performed a partial coring that keeps the tempered glass intact. The cored samples were analyzed by different analytical techniques before and after coring, at the same locations, and no damage during the coring procedure was observed.

  20. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    Science.gov (United States)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters, and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters, and input data. Thus, after the model validation (comparison of the simulated to the observed flood extent, taken from airborne pictures), the uncertainty of the essential input data set (the digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties in the statistics of DEM quality and to derive flooding probabilities from the set of simulations (a condensed sketch of this idea follows this record). The question concerning the minimum DEM resolution required for flood simulation, and that concerning the best aggregation procedure for a given DEM, were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and the monetary transfer functions required for a damage risk analysis show high uncertainty. Therefore, this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
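
    The condensed sketch of the Monte Carlo idea referenced above: perturb the DEM with its vertical error statistics, re-run a trivial flood rule, and read off per-cell flooding probabilities. The synthetic DEM, the 1 m vertical error, and the flat water-level plain are illustrative assumptions, far simpler than FLOODMAP itself:

```python
import numpy as np

# Monte Carlo sketch: sample DEM realizations from the vertical error model,
# flood each realization with a simple "elevation below water level" rule,
# and accumulate per-cell flooding probabilities.

rng = np.random.default_rng(3)
dem = rng.uniform(48.0, 54.0, size=(50, 50))     # synthetic elevations, m
water_level = 51.0                               # water level plain, m
sigma_z = 1.0                                    # assumed DEM vertical error, m

n_runs = 500
flooded = np.zeros_like(dem)
for _ in range(n_runs):
    noisy = dem + rng.normal(0.0, sigma_z, dem.shape)
    flooded += (noisy < water_level)

prob = flooded / n_runs                          # per-cell flooding probability
print(f"cells flooded with p > 0.5: {(prob > 0.5).mean():.1%} of the area")
```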

  1. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration … and Pronzato (1997), and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring … identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort…

  2. A multi-stage triaxial testing procedure for low permeable geomaterials applied to Opalinus Clay

    Directory of Open Access Journals (Sweden)

    Katrin M. Wild

    2017-06-01

    In many engineering applications, it is important to determine both effective rock properties and rock behavior representative of the problem's in situ conditions. For this purpose, rock samples are usually extracted from the ground and brought to the laboratory for experiments such as consolidated undrained (CU) triaxial tests. For low-permeability geomaterials such as clay shales, core extraction, handling, storage, and specimen preparation can lead to a reduction in the degree of saturation, and the effective stress state in the specimen prior to testing remains uncertain. Related changes in structure and the effect of capillary pressure can alter the properties of the specimen and affect the reliability of the test results. A careful testing procedure including back-saturation, consolidation, and adequate shearing of the specimen, however, can overcome these issues. Although substantial effort has been devoted during the past decades to the establishment of a testing procedure for low-permeability geomaterials, no consistent protocol can be found. With a special focus on CU tests on Opalinus Clay, this study gives a review of the theoretical concepts necessary for planning and validating the results during the individual testing stages (saturation, consolidation, and shearing). The discussed test protocol is further applied to a series of specimens of Opalinus Clay to illustrate its applicability and highlight the key aspects.

  3. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  4. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    Science.gov (United States)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.

  5. A diagnostic procedure for applying the social-ecological systems framework in diverse cases

    Directory of Open Access Journals (Sweden)

    Jochen Hinkel

    2015-03-01

    Full Text Available The framework for analyzing the sustainability of social-ecological systems (SES framework) of Elinor Ostrom is a multitier collection of concepts and variables that have proven to be relevant for understanding outcomes in diverse SES. The first tier of this framework includes the concepts of resource system (RS) and resource units (RU), which are then further characterized through lower-tier variables such as clarity of system boundaries and mobility. The long-term goal of framework development is to derive conclusions about which combinations of variables explain outcomes across diverse types of SES. This will only be possible if the concepts and variables of the framework can be made operational unambiguously for the different types of SES, which, however, remains a challenge. Reasons for this are that case studies examine other types of RS than those for which the framework was developed, or consider RS for which different actors obtain different kinds of RU. We explore these difficulties and relate them to antecedent work on common-pool resources and public goods. We propose a diagnostic procedure which resolves some of these difficulties by establishing a sequence of questions that facilitate the step-wise and unambiguous application of the SES framework to a given case. The questions relate to the actors benefiting from the SES, the collective goods involved in the generation of those benefits, and the action situations in which the collective goods are provided and appropriated. We illustrate the diagnostic procedure for four case studies in the context of irrigated agriculture in New Mexico, common property meadows in the Swiss Alps, recreational fishery in Germany, and energy regions in Austria. We conclude that the current SES framework has limitations when applied to complex, multiuse SES, because it does not sufficiently capture the actor interdependencies introduced through RS and RU characteristics and dynamics.

  6. Statistical and inventory procedures applied to nuclear-materials management. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Dresch, F.W.; Butterfield, P.H.; Kinderman, E.M.

    1966-04-01

    This report recommends centralized data reporting and analysis applied to inventory management and materials control. Adoption of this report will require a two-stage development program. In the first or study phase, a computer system (software) organization will take the lead role but will work closely with the AEC organization affected. The study phase, lasting three to four months, will delineate clearly the specific approaches to be taken, potential difficulties and advantages, costs, and the preliminary systems concept and specifications for the basic information system and procedures needed for statistical control and inventory management. The AEC, after review of the study phase, would normally proceed with detailed development, programming, and implementation in which it would be assisted by outside organizations, but in which it must play the dominant role. The basic information system needed for monitoring and statistical control of nuclear materials probably could be operable within a year from the start of the study phase. Implementation of more complex and specialized statistical and inventory management techniques would extend over another four to eight months.

  7. Autoregressive moving average (ARMA) model applied to quantification of cerebral blood flow using dynamic susceptibility contrast-enhanced magnetic resonance imaging

    International Nuclear Information System (INIS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki

    2003-01-01

    The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
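
    The DA-SVD baseline the record compares against can be sketched as follows (a simplified, noise-free illustration with made-up values — gamma-variate AIF, exponential residue function, truncated-SVD deconvolution — not the authors' exact simulation setup):

    ```python
    # Sketch of the DA-SVD approach described in the record (illustrative values):
    # gamma-variate AIF, exponential residue function, and truncated-SVD
    # deconvolution to recover CBF*R(t) from the tissue concentration curve.
    import numpy as np

    dt, n = 1.0, 60                      # time step (s), number of samples
    t = np.arange(n) * dt
    aif = np.where(t > 5, (t - 5)**3 * np.exp(-(t - 5) / 1.5), 0.0)  # gamma-variate

    cbf_true, mtt = 0.01, 4.0            # arbitrary units; residue R(t) = exp(-t/MTT)
    residue = np.exp(-t / mtt)
    conc = dt * np.convolve(aif, cbf_true * residue)[:n]  # C(t) = CBF*(AIF (*) R)

    # Build the lower-triangular convolution matrix from the AIF and invert via SVD.
    idx = np.subtract.outer(np.arange(n), np.arange(n))   # idx[i, j] = i - j
    A = dt * np.tril(aif[idx.clip(0)])

    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > 0.2 * s.max(), 1.0 / s, 0.0)     # truncate small singular values
    k_est = Vt.T @ (s_inv * (U.T @ conc))                 # estimate of CBF*R(t)
    print("true CBF:", cbf_true, " estimated CBF:", k_est.max())
    ```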

  8. PhysioSoft--an approach in applying computer technology in biofeedback procedures.

    Science.gov (United States)

    Havelka, Mladen; Havelka, Juraj; Delimar, Marko

    2009-09-01

    The paper presents a description of an original biofeedback computer program called PhysioSoft. It was designed, on the basis of experience in developing biofeedback techniques, by an interdisciplinary team of experts from the Department of Health Psychology of the University of Applied Health Studies, the Faculty of Electrical Engineering and Computing of the University of Zagreb, and "Mens Sana", a private biofeedback practice in Zagreb. Interest in the possibility of producing direct and voluntary effects on autonomic body functions has grown steadily as the Cartesian model of the body-mind relationship has been abandoned. The psychosomatic approach and studies carried out in the 1950s, together with research on conditioned and operant learning, demonstrated a close interdependence between the physical and the mental, as well as the possibility of training individuals to consciously act on their autonomic physiological functions. This new knowledge led to the development of biofeedback techniques around the 1970s and formed the basis of many studies indicating the significance of biofeedback techniques in clinical practice for many symptoms of health disorders. The digitalization of biofeedback instruments and the development of user-friendly computer software enable the use of biofeedback at the individual level as an efficient procedure for a patient's active approach to caring for his own health. As such software makes biofeedback instruments widely accessible, the authors have designed the PhysioSoft computer program as a contribution to the development and broad use of biofeedback.

  9. GRUKON - A package of applied computer programs system input and operating procedures of functional modules

    International Nuclear Information System (INIS)

    Sinitsa, V.V.; Rineiskij, A.A.

    1993-04-01

    This manual describes a software package for the production of multigroup neutron cross-sections from evaluated nuclear data files. It presents the information necessary to implement and run the program's functional modules, including the operating procedures of the program, the data input, the macrocommand language, and the assignment of the system's procedures. The manual also describes the methodology used in coding the individual modules: the rules, the syntax, and the procedure conventions. An example of the application of the data processing module is also presented. (author)

  10. 21 CFR 1.383 - What expedited procedures apply when FDA initiates a seizure action against a detained perishable...

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false What expedited procedures apply when FDA initiates a seizure action against a detained perishable food? 1.383 Section 1.383 Food and Drugs FOOD AND... Administrative Detention of Food for Human or Animal Consumption General Provisions § 1.383 What expedited...

  11. 25 CFR 900.58 - Do the same accountability and control procedures described above apply to Federal property?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Do the same accountability and control procedures described above apply to Federal property? 900.58 Section 900.58 Indians BUREAU OF INDIAN AFFAIRS... Organization Management Systems Property Management System Standards § 900.58 Do the same accountability and...

  12. Impression Procedures for Metal Frame Removable Partial Dentures as Applied by General Dental Practitioners.

    Science.gov (United States)

    Fokkinga, Wietske A; van Uchelen, Judith; Witter, Dick J; Mulder, Jan; Creugers, Nico H J

    2016-01-01

    This pilot study analyzed impression procedures for conventional metal frame removable partial dentures (RPDs). Heads of RPD departments of three dental laboratories were asked to record features of all incoming impressions for RPDs during a 2-month period. Records included: (1) impression procedure, tray type (stock/custom), impression material (elastomer/alginate), use of border-molding material (yes/no); and (2) RPD type requested (distal-extension/tooth-bounded/combination). Of the 132 total RPD impressions, 111 (84%) involved custom trays, of which 73 (55%) were combined with an elastomer. Impression border-molding material was used in 4% of the cases. Associations between impression procedure and RPD type or dentists' year/university of graduation were not found.

  13. Impression Procedures for Metal Frame Removable Partial Dentures as Applied by General Dental Practitioners.

    NARCIS (Netherlands)

    Fokkinga, W.A.; Uchelen, J. van; Witter, D.J.; Mulder, J.; Creugers, N.H.J.

    2016-01-01

    This pilot study analyzed impression procedures for conventional metal frame removable partial dentures (RPDs). Heads of RPD departments of three dental laboratories were asked to record features of all incoming impressions for RPDs during a 2-month period. Records included: (1) impression

  14. THE PROCEDURE APPLIED IN TRANSLATING JARGON IN ENGLISH PARLIAMENTARY DEBATING INTO INDONESIAN

    Directory of Open Access Journals (Sweden)

    Ni Luh Putu Krisnawati

    2017-05-01

    Full Text Available Competition in English debating is now commonplace: countries compete in the World Debating Competition at both high school and university level. The spread of this "popular culture" has led other countries to adopt the English debating system and translate it into their native languages. However, many jargon terms need to be translated into the native language without changing their meaning. This research focuses on the jargon of English parliamentary debating and its translation into Indonesian. The aims of this study are to identify the jargon terms of English parliamentary debating and their equivalents in Indonesian, and to determine the procedures used in translating these terms into Indonesian. The study draws on the theory proposed by Peter Newmark (1988) regarding translation procedures. The findings show that five translation procedures are used in translating the jargon of English parliamentary debating into Indonesian, namely literal translation, functional equivalent, couplets, transference, and naturalization.

  15. Procedures to evaluate the efficiency of protective clothing worn by operators applying pesticide.

    Science.gov (United States)

    Espanhol-Soares, Melina; Nociti, Leticia A S; Machado-Neto, Joaquim Gonçalves

    2013-10-01

    The evaluation of the efficiency of whole-body protective clothing against pesticides has already been carried out through field tests and procedures defined by international standards, but there is a need to determine the useful life of these garments to ensure worker safety. The aim of this article is to compare the procedures for evaluating the efficiency of two whole-body protective garments, both new and previously used by applicators of herbicides, using a laboratory test with a mannequin and a field test with the operator. The evaluation used both quantitative and qualitative methodologies, leading to a proposal for classification according to efficiency and for determination of the useful life of protective clothing against pesticides, based on a quantitative assessment. The procedures used were in accordance with the standards of the modified American Society for Testing and Materials (ASTM) F 1359:2007 and International Organization for Standardization (ISO) 17491-4. The protocol used in the field was World Health Organization Vector Biology and Control (VBC)/82.1. The clothing tested was water-repellent personal protective clothing against pesticides. Two varieties of fabric were tested: beige (100% cotton) and camouflaged (31% polyester and 69% cotton). The efficiency of the personal protective clothing in controlling exposure was measured before use and after 5, 10, 20, and 30 uses and washes under field conditions. The clothing was worn by workers in the field during the application of the herbicide glyphosate on weed species in mature sugar cane plantations using a knapsack sprayer. The modified ASTM F 1359:2007 procedure was chosen as the most appropriate due to its greater repeatability (lower coefficient of variation). This procedure provides the quantitative evaluation needed to determine the efficiency and useful life of individual protective clothing, not just at specific points of failure, but according to dermal…

  16. Procedure of qualification applied to motors driving auxiliaries in fossil fired and nuclear power plants

    International Nuclear Information System (INIS)

    Coperchini, C.; Fises, A.

    1984-01-01

    Twenty years of operation have enabled EDF to better understand the factors improving the reliability of powerhouse auxiliary drive induction motors. Progress in the behaviour of such machines is mainly due to the analysis and handling of full-size test results achieved in the Saint-Denis Motor Test Laboratory. This work led to the publication of recommendations and technical specifications. The service and safety requirements of the new generation of nuclear plants call for a re-examination of the qualification procedures. The analysis made in this report shows that the present EDF policy is justified, subject to some necessary adjustments, especially as far as the nuclear safety motors are concerned [fr]

  17. Innovization procedure applied to a multi-objective optimization of a biped robot locomotion

    Science.gov (United States)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2013-10-01

    This paper proposes an Innovization procedure approach for a bio-inspired biped gait locomotion controller. We combine a multi-objective evolutionary algorithm and a bio-inspired Central Pattern Generator (CPG) locomotion controller that generates the limb movements necessary to perform the walking gait of a biped robot. The search for the best set of CPG parameters is optimized by considering multiple objectives along a staged evolution. An innovization analysis is performed to verify relationships between the parameters and the objectives, and between the objectives themselves, in order to find relevant motor behavior characteristics. The simulation results show the effectiveness of the proposed approach.
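
    The record does not spell out its CPG equations; as a hedged illustration of the kind of controller an evolutionary algorithm would tune, the sketch below uses two phase-coupled oscillators (all parameters hypothetical) to generate antiphase hip set-points:

    ```python
    # Minimal CPG sketch (hypothetical parameters of the kind an evolutionary
    # algorithm would tune): two phase-coupled oscillators generating antiphase
    # hip set-points for a biped.
    import numpy as np

    def cpg_trajectories(t_end=5.0, dt=0.01, freq=1.0, amp=0.3, k=2.0):
        n = int(t_end / dt)
        phase = np.array([0.0, np.pi])            # left/right legs start in antiphase
        out = np.zeros((n, 2))
        for i in range(n):
            # Kuramoto-style coupling keeps the legs locked pi radians apart.
            coupling = k * np.sin(phase[::-1] - phase - np.pi)
            phase = phase + dt * (2 * np.pi * freq + coupling)
            out[i] = amp * np.sin(phase)          # joint angle set-points (rad)
        return out

    hip_angles = cpg_trajectories()
    print(hip_angles[:3])
    ```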

  18. Calculation of the information content of retrieval procedures applied to mass spectral data bases

    International Nuclear Information System (INIS)

    Marlen, G. van; Dijkstra, A.; Van't Klooster, H.A.

    1979-01-01

    A procedure has been developed for estimating the information content of retrieval systems with binary-coded mass spectra, as well as mass spectra coded by other methods, from the statistical properties of a reference file. For a reference file, binary-coded with a threshold of 1% of the intensity of the base peak, this results typically in an estimated information content of about 50 bits for 200 selected m/z values. It is shown that, because of errors occurring in the binary spectra, the actual information content is only about 12 bits. This explains the poor performance observed for retrieval systems with binary-coded mass spectra. (Auth.)
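
    A back-of-envelope illustration of the record's central point (the model and numbers below are assumptions chosen only to mimic the scale of the reported figures): the nominal information content of a binary-coded spectrum is the sum of binary entropies of the peak probabilities, and bit errors reduce the usable information roughly by the binary-symmetric-channel capacity factor:

    ```python
    # Illustrative back-of-envelope calculation (assumptions, not the authors'
    # exact model): nominal information of a binary-coded spectrum = sum of
    # binary entropies H(p_i); with bit-error rate eps, each position carries
    # at most a factor (1 - H(eps)) of that -- a rough capacity-style bound.
    import numpy as np

    def h2(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    rng = np.random.default_rng(1)
    p = rng.uniform(0.01, 0.1, 200)   # assumed peak probabilities at 200 m/z values
    nominal_bits = h2(p).sum()
    eps = 0.2                         # assumed probability that a bit is coded wrongly
    effective_bits = (h2(p) * (1 - h2(eps))).sum()
    print(f"nominal: {nominal_bits:.0f} bits, with errors: {effective_bits:.0f} bits")
    ```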

  19. HIGH QUALITY ENVIRONMENTAL PRINCIPLES APPLIED TO THE ARCHITECTONIC DESIGN SELECTION PROCEDURE: THE NUTRE LAB CASE

    Directory of Open Access Journals (Sweden)

    Claudia Barroso Krause

    2012-06-01

    Full Text Available The need to produce more sustainable buildings has been influencing design decisions all over the world. In Brazil, this makes it imperative to develop strategies and methods that aid decision making during the design process with a focus on high environmental quality. This paper presents a decision support tool based on the principles of sustainable construction developed by the Project, Architecture and Sustainability Research Group (GPAS) of the Federal University of Rio de Janeiro, Brazil. The methodology was developed for the selection of a preliminary design of a laboratory to be built at the Rio Technology Park on the university campus. The support provided by GPAS occurred in three stages: the elaboration of the Reference Guide for the competitors, the development of a methodology to evaluate the proposed solutions (based on environmental performance criteria), and the assistance of the members of the jury in the trial phase. The theoretical framework was based upon the concepts of bioclimatic architecture, the procedures specified by the HQE® (Haute Qualité Environnementale) certification, and the method suggested by the ADDENDA® architecture office. The success of this experience points to possible future applications in similar cases.

  20. Applying the Sports Medicine Australia pre-exercise screening procedures: who will be excluded?

    Science.gov (United States)

    Norton, K; Olds, T; Bowes, D; Van Ly, S; Gore, C

    1998-01-01

    Recently Sports Medicine Australia (SMA) and the Australian Association for Exercise and Sport Science (AAESS) developed guidelines for pre-exercise screening and supervision of fitness testing, based on the American College of Sports Medicine (ACSM) system. The procedure involves classifying individuals into one of three risk groups (apparently healthy, at higher risk, with known disease). Using data collected in a 1992 survey of 2298 Australian adults aged 18-78 years conducted by the Department of the Arts, Sport, the Environment and Territories (DASET), we calculated the percentage of the general population falling within each risk group and therefore the exclusion rates (i.e., the proportion of subjects who, it is recommended, would require medical clearance prior to exercise or exercise testing). The analysis found that between 43% and 73% of males and between 44% and 61% of females would require clearance. A cost analysis suggests that a rigorous application of the SMA-AAESS guidelines would cost between $250 million and $1.2 billion each year. On the basis of these results, suggestions for reviewing the guidelines have been proposed.

  1. Procedural and developmental aspects of a multielement automatic radiochemical machine, applied to neutron irradiated biomedical samples

    International Nuclear Information System (INIS)

    Iyengar, G.V.

    1976-06-01

    This report is intended to serve as a practical guide, elaborately describing the working details and some developmental work connected with an automatic multielement radiochemical machine based on thermal neutron activation analysis using ion exchange and partition chromatography. Some of the practical aspects and personal observations after much experience with this versatile multielement method, applied to investigate the elemental composition of different biomedical matrices, are summarized. Standard reference materials are analyzed, and the data are presented with a set of gamma-spectra obtained before and after chemical separation into convenient groups suitable for gamma spectroscopy. The samples analyzed included various human and animal tissues, body fluids, IAEA biological standard reference materials, and samples from the WHO/IAEA project on 'Trace elements in relation to cardiovascular diseases'. Simplified modifications of the radiochemical processing, suitable for fast and routine analysis of clinical samples have also been discussed. (orig.) [de

  2. Applying A Multi-Objective Based Procedure to SWAT Modelling in Alpine Catchments

    Science.gov (United States)

    Tuo, Y.; Disse, M.; Chiogna, G.

    2017-12-01

    In alpine catchments, water management practices can lead to conflicts between upstream and downstream stakeholders, as in the Adige river basin (Italy). A correct prediction of available water resources plays an important part, for example, in defining how much water can be stored for hydropower production in upstream reservoirs without affecting agricultural activities downstream. Snow is a crucial hydrological component that strongly affects the seasonal behavior of streamflow. Therefore, a realistic representation of snow dynamics is fundamental for water management operations in alpine catchments. The Soil and Water Assessment Tool (SWAT) model has been applied in alpine catchments worldwide. However, during model calibration in catchment-scale applications, snow parameters have generally been estimated from streamflow records rather than from snow measurements. This may lead to streamflow predictions with the wrong snowmelt contribution. This work highlights the importance of considering snow measurements in the calibration of the SWAT model for alpine hydrology and compares various calibration methodologies. In addition to discharge records, snow water equivalent time series at both the subbasin scale and individual monitoring stations were used to evaluate model performance, by comparison with the SWAT subbasin and elevation-band snow outputs. Comparing model results obtained by calibrating the model with discharge data only and with discharge data along with snow water equivalent data, we show that the latter approach improves the reliability of snow simulations while maintaining good estimations of streamflow. With a more reliable representation of snow dynamics, the hydrological model can provide more accurate references for proposing adequate water management solutions. This study offers the wide SWAT user community an effective approach to improve streamflow predictions in alpine catchments and hence support decision makers in water allocation.
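
    One simple way to realize the calibration idea described above (the function names and the equal weighting are assumptions, not the authors' exact objective) is to score each candidate parameter set on Nash–Sutcliffe efficiency for both discharge and snow water equivalent:

    ```python
    # Sketch of a multi-objective calibration score (weighting is an assumption):
    # evaluate a candidate SWAT parameter set against BOTH discharge (Q) and
    # snow water equivalent (SWE) using Nash-Sutcliffe efficiency (NSE).
    import numpy as np

    def nse(sim, obs):
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def multiobjective_score(q_sim, q_obs, swe_sim, swe_obs, w_q=0.5):
        # Higher is better; a Pareto-based search could keep both terms separate.
        return w_q * nse(q_sim, q_obs) + (1 - w_q) * nse(swe_sim, swe_obs)

    q_obs   = np.array([1.0, 2.0, 4.0, 3.0])   # toy discharge observations
    q_sim   = np.array([1.1, 1.9, 3.7, 3.2])
    swe_obs = np.array([10., 20., 15., 5.])    # toy SWE observations
    swe_sim = np.array([12., 18., 16., 6.])
    print(multiobjective_score(q_sim, q_obs, swe_sim, swe_obs))
    ```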

  3. Calculation procedure to determine average mass transfer coefficients in packed columns from experimental data for ammonia-water absorption refrigeration systems

    Energy Technology Data Exchange (ETDEWEB)

    Sieres, Jaime; Fernandez-Seara, Jose [University of Vigo, Area de Maquinas y Motores Termicos, E.T.S. de Ingenieros Industriales, Vigo (Spain)

    2008-08-15

    The ammonia purification process is critical in ammonia-water absorption refrigeration systems. In this paper, a detailed and a simplified analytical model are presented to characterize the performance of the ammonia rectification process in packed columns. The detailed model is based on mass and energy balances and simultaneous heat and mass transfer equations. The simplified model is derived and compared with the detailed model. The range of applicability of the simplified model is determined. A calculation procedure based on the simplified model is developed to determine the volumetric mass transfer coefficients in the vapour phase from experimental data. Finally, the proposed model and other simple calculation methods found in the general literature are compared. (orig.)

  4. Radiochromic film for dosimetric measurements in radiation shielding composites synthesized for applied in radiology procedures of high dose

    Energy Technology Data Exchange (ETDEWEB)

    Fontainha, C. C. P. [Universidade Federal de Minas Gerais, Departamento de Engenharia Nuclear, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Baptista N, A. T.; Faria, L. O., E-mail: crissia@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

    Full text: Medical radiology offers great benefit to patients. However, although specific high-dose procedures such as fluoroscopy, interventional radiology, and computed tomography (CT) make up a small percentage of imaging procedures, they contribute significantly to the population dose, and patients may suffer tissue damage. The probability of deterministic effects depends on the type of procedure performed, the exposure time, and the dose applied to the irradiated area. Calibrated radiochromic films can identify the size and distribution of the radiation fields and measure dose intensities. Radiochromic films are sensitive to doses ranging from 0.1 to 20 cGy and have the same response for X-ray effective energies ranging from 20 to 100 keV. New radiation-attenuating materials have been widely investigated, resulting in reduced entrance skin dose. In this work, Bi₂O₃ and ZrO₂:8% Y₂O₃ composites were obtained by mixing them into a P(VDF-TrFE) copolymer matrix by the casting method and then characterized by FTIR. Dosimetric measurements were obtained with XR-QA2 Gafchromic radiochromic films. In this setup, one radiochromic film is directly exposed to the X-ray beam and another measures the attenuated beam; both were exposed to an absorbed dose of 10 mGy at RQR5 beam quality (70 kV X-ray beam). Irradiated XR-QA2 films were stored and scanned under the same conditions in order to obtain a more reliable result. The attenuation factors, evaluated with the XR-QA2 radiochromic films, indicate that both composites are good candidates for use as patient radiation shielding in high-dose medical procedures. (Author)

  5. Text mining applied to electronic cardiovascular procedure reports to identify patients with trileaflet aortic stenosis and coronary artery disease.

    Science.gov (United States)

    Small, Aeron M; Kiss, Daniel H; Zlatsin, Yevgeny; Birtwell, David L; Williams, Heather; Guerraty, Marie A; Han, Yuchi; Anwaruddin, Saif; Holmes, John H; Chirinos, Julio A; Wilensky, Robert L; Giri, Jay; Rader, Daniel J

    2017-08-01

    Interrogation of the electronic health record (EHR) using billing codes as a surrogate for diagnoses of interest has been widely used for clinical research. However, the accuracy of this methodology is variable, as it reflects billing codes rather than severity of disease, and depends on the disease and the accuracy of the coding practitioner. Systematic application of text mining to the EHR has had variable success for the detection of cardiovascular phenotypes. We hypothesize that the application of text mining algorithms to cardiovascular procedure reports may be a superior method to identify patients with cardiovascular conditions of interest. We adapted the Oracle product Endeca, which utilizes text mining to identify terms of interest from a NoSQL-like database, for purposes of searching cardiovascular procedure reports and termed the tool "PennSeek". We imported 282,569 echocardiography reports representing 81,164 individuals and 27,205 cardiac catheterization reports representing 14,567 individuals from non-searchable databases into PennSeek. We then applied clinical criteria to these reports in PennSeek to identify patients with trileaflet aortic stenosis (TAS) and coronary artery disease (CAD). Accuracy of patient identification by text mining through PennSeek was compared with ICD-9 billing codes. Text mining identified 7115 patients with TAS and 9247 patients with CAD. ICD-9 codes identified 8272 patients with TAS and 6913 patients with CAD. 4346 patients with TAS and 6024 patients with CAD were identified by both approaches. A randomly selected sample of 200-250 patients uniquely identified by text mining was compared with 200-250 patients uniquely identified by billing codes for both diseases. We demonstrate that text mining was superior, with a positive predictive value (PPV) of 0.95 compared to 0.53 by ICD-9 for TAS, and a PPV of 0.97 compared to 0.86 for CAD. These results highlight the superiority of text mining algorithms applied to electronic…

  6. 41 CFR 102-5.65 - What procedures apply when the need for home-to-work transportation exceeds the initial period?

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What procedures apply when the need for home-to-work transportation exceeds the initial period? 102-5.65 Section 102-5.65 Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION GENERAL 5-HOME-TO-WOR...

  7. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong … to a non-linear manifold and re-normalization or orthogonalization must be applied to obtain proper rotations. These latter steps have been viewed as ad hoc corrections for the errors introduced by assuming a vector space. The article shows that the two approximative methods can be derived from natural … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation…
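
    The "barycenter plus re-normalization" approach discussed in the abstract can be sketched for quaternions as follows (a minimal illustration, including the sign-flip needed because quaternions double-cover rotations):

    ```python
    # Sketch of the "barycenter + re-normalization" approach the abstract refers
    # to: average unit quaternions arithmetically, then project back onto the
    # unit sphere so the result is again a proper rotation.
    import numpy as np

    def average_quaternions(quats):
        q = np.asarray(quats, dtype=float)
        # Quaternions double-cover rotations: flip signs to a common hemisphere first.
        signs = np.where(q @ q[0] < 0.0, -1.0, 1.0)
        mean = (q * signs[:, None]).mean(axis=0)
        return mean / np.linalg.norm(mean)   # re-normalize onto the unit sphere

    qs = [[1, 0, 0, 0],
          [np.cos(0.1), np.sin(0.1), 0, 0],  # small rotation about x
          [np.cos(0.2), np.sin(0.2), 0, 0]]
    print(average_quaternions(qs))
    ```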

  8. Virtual reality, augmented reality, and robotics applied to digestive operative procedures: from in vivo animal preclinical studies to clinical use

    Science.gov (United States)

    Soler, Luc; Marescaux, Jacques

    2006-04-01

    Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics are among the most revolutionary. Our work aims at setting up new techniques for the detection, 3D delineation, and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems that make tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time, long-distance connection between practitioners so that they can share the same 3D reconstructed patient and interact with the same patient, virtually before the intervention and in reality during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained with a micro-CT scanner show that these technologies provide an efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check an optimal, error-free procedure on the virtual patient clone, to be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.

  9. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents with autism spectrum disorder children. The main objectives were: (1) To elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

  10. Averaging in cosmological models

    OpenAIRE

    Coley, Alan

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. We review cosmological observations and discuss some of the issues regarding averaging. We present a precise definition of a cosmological model and a rigorous mathematical definition of averaging, based entirely in terms of scalar invariants.

  11. Evaluation of Flocculation and Filtration Procedures Applied to WSRC Sludge: A Report from B. Yarar, Colorado School of Mines

    International Nuclear Information System (INIS)

    Poirier, M.R.

    2001-01-01

    This report addresses the fundamentals of flocculation processes, shedding light on why WSRC researchers have not been able to report the discovery of a successful flocculant and acceptable filtration rates. It also underscores the importance of applying an optimized flocculation-testing regime, which has not been adopted by these researchers. The final part of the report proposes a research scheme that should lead to a successful choice of flocculants, filtration aids (surfactants), and a filtration regime, as well as recommendations for work that should be carried out to make up for the deficiencies of the limited WSRC work, where a better performance should be the outcome.

  12. Evaluation of Flocculation and Filtration Procedures Applied to WSRC Sludge: A Report from B. Yarar, Colorado School of Mines

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M.R.

    2001-06-04

    This report addresses the fundamentals of flocculation processes, shedding light on why WSRC researchers have not been able to report the discovery of a successful flocculant and acceptable filtration rates. It also underscores the importance of applying an optimized flocculation-testing regime, which has not been adopted by these researchers. The final part of the report proposes a research scheme that should lead to a successful choice of flocculants, filtration aids (surfactants), and a filtration regime, as well as recommendations for work that should be carried out to make up for the deficiencies of the limited WSRC work, where a better performance should be the outcome.

  13. Parametric analysis applied to perforating procedures of oil wells; Analise parametrica aplicada a procedimentos de canhoneio de pocos de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Baioco, Juliana Souza; Seckler, Carolina dos Santos; Silva, Karinna Freitas da; Jacob, Breno Pinheiro [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Metodos Computacionais e Sistemas Offshore; Silvestre, Jose Roberto; Soares, Antonio Claudio; Freitas, Sergio Murilo Santos [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas

    2008-07-01

    The perforation process is an important step in well construction: it provides contact between the reservoir rock and the well, allowing oil production. The procedure consists of using explosive charges to bore a hole through the casing and into the rock, so that the reservoir fluid can flow to the well. The right choice of both the gun and the charge type is therefore extremely important, since many factors influence the process and affect productivity, such as shot density, penetration depth, and hole diameter. The objective of this paper is to present the results of a parametric study evaluating the influence of parameters related to the explosive charges on well productivity, since there are many types of charges with different properties, which give specific characteristics to the perforated area. For that purpose, a commercial program is used that allows simulation of the flow problem, along with a finite element mesh generator that uses a pre-processor and a program that enables the construction of reservoir, well, and perforation models. It can be observed that the penetration depth has a bigger influence than the hole diameter, making it an important factor when choosing the charge to be used in the project. (author)

  14. Objectively-assessed outcome measures: a translation and cross-cultural adaptation procedure applied to the Chedoke McMaster Arm and Hand Activity Inventory (CAHAI

    Directory of Open Access Journals (Sweden)

    Hahn Sabine

    2010-11-01

    Full Text Available Abstract. Background: Standardised translation and cross-cultural adaptation (TCCA) procedures are vital to describe language translation and cultural adaptation, and to evaluate quality factors of transformed outcome measures. No TCCA procedure for objectively-assessed outcome (OAO) measures exists. Furthermore, no official German version of the Canadian Chedoke Arm and Hand Activity Inventory (CAHAI) is available. Methods: An eight-step TCCA procedure for OAO measures was developed (TCCA-OAO), based on the existing TCCA procedure for patient-reported outcomes. The TCCA-OAO procedure was applied to develop a German version of the CAHAI (CAHAI-G). Inter-rater reliability of the CAHAI-G was determined through video rating of the CAHAI-G. Validity of the CAHAI-G was evaluated using the Chedoke-McMaster Stroke Assessment (CMSA). All ratings were performed by trained, independent raters. In a cross-sectional study, patients were tested within 31 hours after the initial CAHAI-G scoring for their motor function level, using the subscales for arm and hand of the CMSA. Inpatients and outpatients of the occupational therapy department who had experienced a cerebrovascular accident or an intracerebral haemorrhage were included. Results: The performance of 23 patients (mean age 69.4, SD 12.9; six females; mean time since stroke onset: 1.5 years, SD 2.5 years) was assessed. High inter-rater reliability was found, with ICCs for 4 CAHAI-G versions (13, 9, 8, and 7 items) ranging between r = 0.96 and r = 0.99 (p … Conclusions: The TCCA-OAO procedure was validated regarding its feasibility and applicability for objectively-assessed outcome measures. The resulting German CAHAI can be used as a valid and reliable assessment of bilateral upper limb performance in ADL in patients after stroke.

  15. Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders.

    Science.gov (United States)

    Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S

    2017-07-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents with autism spectrum disorder children. The main objectives were: (1) To elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum disorder children, (2) to describe a low-cost parental training intervention, and (3) to assess participant's compliance. This is a descriptive study of a clinical trial for autism spectrum disorder children. The parental training intervention was delivered over 22 weeks based on video modeling. Parents with at least 8 years of schooling with an autism spectrum disorder child between 3 and 6 years old with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 as the intervention and 33 as controls. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good 32.4%, reasonable 38.2%, low 5.9%, and 23.5% with no compliance. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.

  16. Analysis of Influence of the Thermal Dependence of Air Thermophysical Properties on the Accuracy of Simulation of Heat Transfer in a Turbulent Flow in Case of Applying Different Methods of Averaging Navier-Stokes Equations

    Directory of Open Access Journals (Sweden)

    A. D. Kliukvin

    2014-01-01

    Full Text Available The influence of the thermal dependence of air thermophysical properties on the accuracy of solutions to heat transfer problems in turbulent flow is investigated theoretically for different methods of averaging the Navier-Stokes equations. The practicability of using a particular averaging method when the solution of a heat transfer problem must be refined to account for the variability of air thermophysical properties is analyzed. It is shown that Reynolds and Favre averaging (the most common methods of averaging the Navier-Stokes equations) are not effective in this case, because these methods describe inaccurately the behavior of large-scale turbulent structures, which depends strongly on the geometry of the particular flow. It is therefore necessary to use more universal methods of turbulent flow simulation that are not based on averaging over all turbulent scales. It is shown that, instead of Reynolds and Favre averaging, large eddy simulation can be used, whereby turbulent structures are divided into small-scale and large-scale ones and only the small-scale structures are modelled. This approach, however, increases the required computational power by 2-3 orders of magnitude. For the different averaging methods, the form of the additional terms of the averaged Navier-Stokes equations that arise when pulsations of the thermophysical properties of air are taken into account is obtained. Using the example of a submerged heated air jet, the errors in the determination of the convective and conductive components of the heat flux and of the viscous stresses that occur when the dependence of air thermophysical properties on the averaged flow temperature is neglected are evaluated. It is shown that the greatest increase in solution accuracy can be obtained for flows with high temperature gradients. Finally, using infinite Taylor series, it is found that underestimation of the convective and conductive components of the heat flux and…

  17. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary … Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain…

  18. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Science.gov (United States)

    Fehl, D. L.; Chandler, G. A.; Stygar, W. A.; Olson, R. E.; Ruiz, C. L.; Hohlfelder, J. J.; Mix, L. P.; Biggs, F.; Berninger, M.; Frederickson, P. O.; Frederickson, R.

    2010-12-01

    An algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by a five-channel, filtered x-ray-detector array (XRD) is described in detail and characterized. This diagnostic is a broad-channel spectrometer, used primarily to measure time-dependent soft x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA), and serves as both a plasma probe and a gauge of accelerator performance. The unfold method, suitable for online analysis, arises naturally from general assumptions about the x-ray source and spectral properties of the channel responses; a priori constraints control the ill-posed nature of the inversion. The unfolded spectrum is not assumed to be Planckian. This study is divided into two consecutive papers. This paper considers three major issues: (a) Formulation of the unfold method.—The mathematical background, assumptions, and procedures leading to the algorithm are described: the spectral reconstruction Sunfold(E,t)—five histogram x-ray bins j over the x-ray interval, 137≤E≤2300eV at each time step t—depends on the shape and overlap of the calibrated channel responses and on the maximum electrical power delivered to the plasma. The x-ray flux Funfold is estimated as ∫Sunfold(E,t)dE. (b) Validation with simulations.—Tests of the unfold algorithm with known static and time-varying spectra are described. These spectra included—but were not limited to—Planckian spectra Sbb(E,T) (25≤T≤250eV), from which noise-free channel data were simulated and unfolded. For Planckian simulations with 125≤T≤250eV and typical responses, the binwise unfold values Sj and the corresponding binwise averages ⟨Sbb⟩j agreed to ˜20%, except where Sbb≪max⁡{Sbb}. Occasionally, unfold values Sj≲0 (artifacts) were encountered. The algorithm recovered ≳90% of the x-ray flux over the wider range, 75≤T≤250eV. For lower T, the
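
    The generic structure of such a constrained unfold can be sketched as a regularized least-squares problem (illustrative random responses and a simple smoothness prior — not the actual XRD calibration or the authors' a priori constraints):

    ```python
    # Generic structure of a constrained unfold (illustrative, not the actual
    # diagnostic's calibration): solve min ||R s - d||^2 + alpha ||L s||^2 for
    # a 5-channel system and a histogram spectrum s over 5 energy bins.
    import numpy as np

    n_ch, n_bins = 5, 5                        # five channels, five histogram bins
    rng = np.random.default_rng(2)
    R = rng.uniform(0.0, 1.0, (n_ch, n_bins))  # stand-in channel response matrix
    s_true = np.array([1.0, 3.0, 2.0, 0.5, 0.1])
    d = R @ s_true                             # simulated channel data

    alpha = 1e-3
    L = np.eye(n_bins) - np.eye(n_bins, k=1)   # first-difference (smoothness) operator
    A = np.vstack([R, np.sqrt(alpha) * L])
    b = np.concatenate([d, np.zeros(n_bins)])
    s_unfold, *_ = np.linalg.lstsq(A, b, rcond=None)
    flux = s_unfold.sum()                      # spectrally integrated flux estimate
    print(s_unfold, flux)
    ```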

  19. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    Full Text Available An algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by a five-channel, filtered x-ray-detector array (XRD) is described in detail and characterized. This diagnostic is a broad-channel spectrometer, used primarily to measure time-dependent soft x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA), and serves as both a plasma probe and a gauge of accelerator performance. The unfold method, suitable for online analysis, arises naturally from general assumptions about the x-ray source and spectral properties of the channel responses; a priori constraints control the ill-posed nature of the inversion. The unfolded spectrum is not assumed to be Planckian. This study is divided into two consecutive papers. This paper considers three major issues: (a) Formulation of the unfold method.—The mathematical background, assumptions, and procedures leading to the algorithm are described: the spectral reconstruction S_{unfold}(E,t)—five histogram x-ray bins j over the x-ray interval, 137≤E≤2300 eV at each time step t—depends on the shape and overlap of the calibrated channel responses and on the maximum electrical power delivered to the plasma. The x-ray flux F_{unfold} is estimated as ∫S_{unfold}(E,t)dE. (b) Validation with simulations.—Tests of the unfold algorithm with known static and time-varying spectra are described. These spectra included—but were not limited to—Planckian spectra S_{bb}(E,T) (25≤T≤250 eV), from which noise-free channel data were simulated and unfolded. For Planckian simulations with 125≤T≤250 eV and typical responses, the binwise unfold values S_{j} and the corresponding binwise averages ⟨S_{bb}⟩_{j} agreed to ∼20%, except where S_{bb}≪max⁡{S_{bb}}. Occasionally, unfold values S_{j}≲0 (artifacts) were encountered. The algorithm recovered ≳90% of the x…

  20. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations are used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion…

  1. Modified Approach to Stroke Rehabilitation (MAStR): feasibility study of a method to apply procedural memory concepts to transfer training.

    Science.gov (United States)

    Pavol, Marykay A; Bassile, Clare C; Lehman, Jennifer R; Harmon, Emma; Ferreira, Nancy; Shinn, Brittany; St James, Nancy; Callender, Jacqueline; Stein, Joel

    2018-04-03

    Training and implementation of a multidisciplinary stroke rehabilitation method emphasizing procedural memory. Current practice in stroke rehabilitation relies on explicit memory, often compromised by stroke, failing to capitalize on better-preserved procedural memory skills. Recruitment of procedural memory requires consistency and practice, characteristics difficult to promote on inpatient rehabilitation units. We designed a method, the Modified Approach to Stroke Rehabilitation (MAStR), to maximize consistency and practice for transfer training with stroke patients. Phase I, single-group study. MAStR has two innovations: (1) simplification of instructions to only three words, with other direction provided non-verbally; (2) having all rehabilitation staff apply the same approach for transfers. Staff training in MAStR included review of written material describing the rationale for MAStR and demonstration of a transfer using MAStR. Enrolled patients completed each transfer with MAStR in addition to standard rehabilitation therapy. The MAStR method was taught to a large, multidisciplinary rehabilitation staff (n = 31). Training and certification required 15 min per staff member. Five stroke patients were enrolled. No transfers with MAStR resulted in injury, and no negative feedback was received from staff or patients. Staff reported satisfaction with the brief MAStR training and reported that transfers were easier to complete with the MAStR method. Feasibility was demonstrated for an innovative application of procedural memory concepts to stroke rehabilitation. All rehabilitation disciplines were successfully trained. MAStR was well-tolerated and liked by rehabilitation staff and patients. These results support pursuit of a Phase II pilot study.

  2. Quantization Procedures

    International Nuclear Information System (INIS)

    Cabrera, J. A.; Martin, R.

    1976-01-01

    We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure similar to the latter for use in non-linear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures presented are equivalent, but for the non-linear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

  3. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  4. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    … is less clear if the teacher distribution is unknown. I define a class of averaging procedures, the temperated likelihoods, including both Bayes averaging with a uniform prior and maximum likelihood estimation as special cases. I show that Bayes is generalization optimal in this family for any teacher…
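
    In common notation (an assumption here, since the abstract is truncated), a temperated-likelihood family of this kind can be written as

    ```latex
    p_T(\theta \mid D) \;\propto\; p(D \mid \theta)^{1/T}\, p(\theta),
    ```

    where T = 1 recovers Bayes averaging (with a uniform prior as the abstract's special case) and T → 0 concentrates the posterior on the maximum-likelihood estimate.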

  5. Averaging Einstein's equations : The linearized case

    NARCIS (Netherlands)

    Stoeger, William R.; Helmi, Amina; Torres, Diego F.

    We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW

  6. Dynamic logistic regression and dynamic model averaging for binary classification.

    Science.gov (United States)

    McCormick, Tyler H; Raftery, Adrian E; Madigan, David; Burd, Randall S

    2012-03-01

    We propose an online binary classification procedure for cases when there is uncertainty about the model to use and parameters within a model change over time. We account for model uncertainty through dynamic model averaging, a dynamic extension of Bayesian model averaging in which posterior model probabilities may also change with time. We apply a state-space model to the parameters of each model and we allow the data-generating model to change over time according to a Markov chain. Calibrating a "forgetting" factor accommodates different levels of change in the data-generating mechanism. We propose an algorithm that adjusts the level of forgetting in an online fashion using the posterior predictive distribution, and so accommodates various levels of change at different times. We apply our method to data from children with appendicitis who receive either a traditional (open) appendectomy or a laparoscopic procedure. Factors associated with which children receive a particular type of procedure changed substantially over the 7 years of data collection, a feature that is not captured using standard regression modeling. Because our procedure can be implemented completely online, future data collection for similar studies would require storing sensitive patient information only temporarily, reducing the risk of a breach of confidentiality. © 2011, The International Biometric Society.
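
    The forgetting mechanism described above admits a compact sketch (notation and values illustrative; the full method also includes state-space updates for each model's parameters):

    ```python
    # Sketch of the dynamic-model-averaging update described above (values
    # illustrative): flatten model probabilities with a forgetting factor,
    # then reweight by each model's one-step predictive likelihood.
    import numpy as np

    def dma_update(post_prev, pred_lik, alpha=0.95):
        # Forgetting step: pi_{t|t-1,k} proportional to pi_{t-1,k}^alpha.
        prior = post_prev ** alpha
        prior /= prior.sum()
        # Update step: multiply by the predictive likelihood of the new observation.
        post = prior * pred_lik
        return post / post.sum()

    post = np.array([0.5, 0.3, 0.2])         # posterior model probabilities at t-1
    pred_lik = np.array([0.02, 0.10, 0.05])  # p(y_t | model k, data so far)
    print(dma_update(post, pred_lik))
    ```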

  7. Applying Petroleum the Pressure Buildup Well Test Procedure on Thermal Response Test—A Novel Method for Analyzing Temperature Recovery Period

    Directory of Open Access Journals (Sweden)

    Tomislav Kurevija

    2018-02-01

    Full Text Available The theory of Thermal Response Testing (TRT) is a well-known part of the sizing process for geothermal exchange systems. Multiple parameters influence the accuracy of the effective ground thermal conductivity measurement, such as testing time, variable power, climate interference, and groundwater effects. To improve the accuracy of the TRT, we introduce a procedure to additionally analyze the falloff temperature decline after the power test. The method is based on the premise of an analogy between TRT and petroleum well testing, since both procedures originate in the diffusivity equation, with solutions for heat conduction or pressure analysis during radial flow. By applying pressure build-up test interpretation techniques to borehole heat exchanger testing, greater accuracy can be achieved, since ground conductivity can also be obtained from this period. The analysis was conducted on a coaxial exchanger with five different power steps, with both direct and reverse flow regimes. Each test consisted of 96 h of classical TRT followed by 96 h of temperature decline, making for almost 2000 h of cumulative borehole testing. Results showed that the ground conductivity value derived from the classical TRT could vary by as much as 25%, depending on test time, seasonal period, and power fluctuations, while the thermal conductivity obtained from the falloff period provided more stable values, with only a 10% variation.
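
    A minimal sketch of the buildup analogy (all numbers illustrative): during falloff after a heating period of duration t_p, the line-source model predicts a temperature linear in the logarithm of Horner time (t_p + Δt)/Δt, and the effective conductivity follows from the slope m as λ = Q/(4πHm):

    ```python
    # Sketch of the pressure-buildup analogy (illustrative numbers): during
    # temperature falloff after a TRT power test of duration t_p, fit fluid
    # temperature vs. ln((t_p + dt)/dt) (Horner time); the line-source model
    # gives ground conductivity lambda = Q / (4*pi*H*m) from the slope m.
    import numpy as np

    Q, H, t_p = 4000.0, 100.0, 96.0      # heat rate (W), borehole length (m), heating time (h)
    dt = np.linspace(5.0, 96.0, 40)      # elapsed falloff time (h)
    horner = np.log((t_p + dt) / dt)

    lam_true = 2.2                       # W/(m K), assumed ground conductivity
    T = 12.0 + Q / (4 * np.pi * H * lam_true) * horner  # idealized falloff response

    m = np.polyfit(horner, T, 1)[0]      # slope of T vs. ln(Horner time)
    print("estimated conductivity:", Q / (4 * np.pi * H * m), "W/(m K)")
    ```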

  8. Comparative Characteristics of the Results of Evacuation to Healthcare Facilities and Treatment Outcomes of Children Who Applied for First Aid With Acute Abdominal Pains. The Case of an Emergency Medical Setting of an Average Municipal Entity

    OpenAIRE

    Ekaterina А. Romanova; Leyla S. Namazova-Baranova; Elena Yu. Dyakonova; Aleksey Yu. Romanov; Kazbek S. Mezhidov; Zharadat I. Dohshukaeva

    2017-01-01

    Background. Despite the active development of diagnostic capabilities, the problems of diagnosis at the pre-hospital stage with abdominal pain remain unresolved. Objective. Our aim was to analyze the results of evacuation to healthcare facilities as well as treatment outcomes (conservative and surgical) of hospitalized children who applied for first aid with acute abdominal pain, in order to identify possible shortcomings in the existing diagnostic algorithm and its optimization. Methods. The...

  9. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Arithmetic mean of objects in a space need not lie in the space [Fréchet, 1948]. Finding the mean of right-angled triangles: S = {(x, y, z) ∈ ℝ₊³ : x² + y² = z²} = { [[z, x − iy], [x + iy, z]] : x, y, z > 0, z² = x² + y² }, so the surface of right triangles is identified with a set of 2 × 2 matrices, and the arithmetic mean does not lie on S. Tanvi Jain. Averaging operations on matrices ...

  10. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... flow at each voxel of a brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine learning: n × n pd matrices occur as kernel matrices. ... then the expected extension of the geometric mean, A^{1/2}B^{1/2}, is not even self-adjoint, let alone positive definite. Tanvi Jain. Averaging operations on matrices ...
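    The misbehaving naive average contrasts with the Riemannian geometric mean A # B = A^{1/2}(A^{-1/2}BA^{-1/2})^{1/2}A^{1/2}, which does remain positive definite. A quick numeric check of that standard formula (an illustration of the point, not code from the slides):

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def geometric_mean(A, B):
        """Riemannian geometric mean A # B of two positive definite matrices."""
        A_half = sqrtm(A)
        A_half_inv = np.linalg.inv(A_half)
        return (A_half @ sqrtm(A_half_inv @ B @ A_half_inv) @ A_half).real

    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    B = np.array([[1.0, -0.3], [-0.3, 3.0]])

    G = geometric_mean(A, B)
    print(np.linalg.eigvalsh(G))                 # strictly positive eigenvalues
    print(np.allclose(G, geometric_mean(B, A)))  # the mean is symmetric in A, B
    ```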

  11. Rescuing Collective Wisdom when the Average Group Opinion Is Wrong

    Directory of Open Access Journals (Sweden)

    Andres Laan

    2017-11-01

    Full Text Available The total knowledge contained within a collective exceeds the knowledge of even its most intelligent member. Yet the collective knowledge will remain inaccessible to us unless we are able to find efficient knowledge aggregation methods that produce reliable decisions based on the behavior or opinions of the collective’s members. It is often stated that simple averaging of a pool of opinions is a good, and in many cases optimal, way to extract knowledge from a crowd. The method of averaging has been applied to analysis of decision-making in very different fields, such as forecasting, collective animal behavior, individual psychology, and machine learning. Two mathematical theorems, Condorcet’s theorem and Jensen’s inequality, provide a general theoretical justification for the averaging procedure. Yet the necessary conditions which guarantee the applicability of these theorems are often not met in practice. Under such circumstances, averaging can lead to suboptimal and sometimes very poor performance. Practitioners in many different fields have independently developed procedures to counteract the failures of averaging. We review such knowledge aggregation procedures and interpret the methods in the light of a statistical decision theory framework to explain when their application is justified. Our analysis indicates that in the ideal case, there should be a matching between the aggregation procedure and the nature of the knowledge distribution, correlations, and associated error costs. This leads us to explore how machine learning techniques can be used to extract near-optimal decision rules in a data-driven manner. We end with a discussion of open frontiers in the domain of knowledge aggregation and collective intelligence in general.
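    One of the simplest failure modes reviewed here is an asymmetric error distribution, where a minority of wildly wrong opinions drags the mean away from the truth while rank-based aggregators survive. A toy demonstration with hypothetical numbers, chosen only to make the mismatch visible:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    truth = 100.0

    # Most judges are roughly unbiased; a minority errs wildly in one direction
    good = rng.normal(truth, 5.0, size=90)
    wild = rng.normal(truth + 80.0, 10.0, size=10)
    opinions = np.concatenate([good, wild])

    print("mean        :", opinions.mean())                     # pulled off target
    print("median      :", np.median(opinions))                 # near the truth
    print("trimmed mean:", np.mean(np.sort(opinions)[10:-10]))  # also near it
    ```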

  12. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve which comprises a massive bulk of 'middle-class' values, and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
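    The diverging mean can be previewed with a standard heavy-tailed example: for a Pareto law with tail exponent α ≤ 1 the mean is infinite, so running sample averages never settle. The sketch below uses this generic distribution, not the paper's specific models:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def running_mean_pareto(alpha, n=10**6):
        """Running sample mean of Pareto(alpha) draws with x_min = 1."""
        x = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF sampling
        return np.cumsum(x) / np.arange(1, n + 1)

    for alpha in (0.8, 1.5):
        m = running_mean_pareto(alpha)
        # alpha <= 1: the running mean keeps drifting (the true mean is infinite);
        # alpha > 1: it settles near alpha / (alpha - 1)
        print(alpha, m[10**4 - 1], m[10**5 - 1], m[-1])
    ```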

  13. THE VALUE OF REMOVING DAILY OBSTACLES VIA EVERYDAY PROBLEM SOLVING THEORY: DEVELOPING AN APPLIED NOVEL PROCEDURE TO INCREASE SELF-EFFICACY FOR EXERCISE

    Directory of Open Access Journals (Sweden)

    Daniele eArtistico

    2013-01-01

    Full Text Available The objective of the study was to develop a novel procedure to increase self-efficacy for exercise. Gains in one’s ability to resolve day-to-day obstacles for entering an exercise routine were expected to cause an increase in self-efficacy for exercise. Fifty-five sedentary participants (who did not exercise regularly for at least 4 months prior to the study) who expressed an intention to exercise in the near future were selected for the study. Participants were randomly assigned to one of three conditions: (1) an Experimental Group in which they received a problem-solving training session to learn new strategies for solving day-to-day obstacles that interfere with exercise, (2) a Control Group with Problem Solving Training which received a problem-solving training session focused on a typical day-to-day problem unrelated to exercise, or (3) a Control Group which did not receive any problem-solving training. Assessment of obstacles to exercise and perceived self-efficacy for exercise were conducted at baseline; perceived self-efficacy for exercise was reassessed post-intervention (one week later). No differences in perceived challenges posed by obstacles to exercise or self-efficacy for exercise were observed across groups at baseline. The Experimental Group reported greater improvement in self-efficacy for exercise compared to the Control Group with Training (p < 0.01) and the Control Group (p < 0.01). Results of this study suggest that a novel procedure that focuses on removing obstacles to intended planned fitness activities is effective in increasing self-efficacy to engage in exercise among sedentary adults. Implications of these findings for use in applied settings and treatment studies are discussed.

  14. Procedure Selection and Patient Positioning Influence Spine Kinematics During High-Velocity, Low-Amplitude Spinal Manipulation Applied to the Low Back.

    Science.gov (United States)

    Bell, Spencer; D'Angelo, Kevin; Kawchuk, Gregory N; Triano, John J; Howarth, Samuel J

    This investigation compared indirect 3-dimensional angular kinematics (position, velocity, and acceleration) of the lumbar spine for 2 different high-velocity, low-amplitude (HVLA) spinal manipulation procedures (lumbar spinous pull or push), and altered initial patient lower limb posture. Twenty-four participants underwent 6 HVLA procedures directed toward the presumed L4 vertebra, reflecting each combination of 2 variants of a spinal manipulation application technique (spinous pull and push) and 3 initial hip flexion angles (0°, 45°, and 90°) applied using a right lateral recumbent patient position. All contact forces and moments between the patient and the external environment, as well as 3-dimensional kinematics of the patient's pelvis and thorax, were recorded. Lumbar spine angular positions, velocities, and accelerations were analyzed within the preload and impulse stages of each HVLA trial. Lumbar spine left axial rotation was greater for the pull HVLA. The pull HVLA also generated a greater maximum (leftward) and lower minimum (rightward) axial rotation velocity and deceleration and greater leftward and rightward lateral bend velocities, acceleration, and deceleration components. Not flexing the hip produced the greatest amount of extension, as well as the lowest axial rotation and maximum axial rotation acceleration during the impulse. This investigation provides basic kinematic information for clinicians to understand the similarities and differences between 2 HVLA side-lying manipulations in the lumbar spine. Use of these findings and novel technology can drive future research initiatives that can both affect clinical decision making and influence teaching environments surrounding spinal manipulative therapy skill acquisition. Copyright © 2017. Published by Elsevier Inc.

  15. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emissions from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body.

  16. Comparative Characteristics of the Results of Evacuation to Healthcare Facilities and Treatment Outcomes of Children Who Applied for First Aid With Acute Abdominal Pains. The Case of an Emergency Medical Setting of an Average Municipal Entity

    Directory of Open Access Journals (Sweden)

    Ekaterina А. Romanova

    2017-01-01

    Full Text Available Background. Despite the active development of diagnostic capabilities, the problems of diagnosis at the pre-hospital stage with abdominal pain remain unresolved. Objective. Our aim was to analyze the results of evacuation to healthcare facilities as well as treatment outcomes (conservative and surgical) of hospitalized children who applied for first aid with acute abdominal pain, in order to identify possible shortcomings in the existing diagnostic algorithm and its optimization. Methods. The results of treatment outcomes for children with acute abdominal pain at the pre-hospital stage and evacuation to healthcare facilities by visiting teams for the period 2014–2015 are presented using the example of the State Institution «Engels Emergency Medical Setting». Results. Difficulties in routing children to the necessary healthcare facilities (surgical or somatic) are due to the complexities of differential diagnosis of the disease in children with acute abdominal pain at the pre-hospital stage. Conclusion. The main task of the primary care and emergency physician at the pre-hospital stage, whose decision determines the direction of the diagnostic search, timeliness and adequacy of the subsequent treatment measures, is to give a correct assessment of abdominal pain syndrome.

  17. Stochastic Averaging of Strongly Nonlinear Oscillators under Poisson White Noise Excitation

    Science.gov (United States)

    Zeng, Y.; Zhu, W. Q.

    A stochastic averaging method for single-degree-of-freedom (SDOF) strongly nonlinear oscillators under Poisson white noise excitation is proposed by using the so-called generalized harmonic functions. The stationary averaged generalized Fokker-Planck-Kolmogorov (GFPK) equation is solved by using the classical perturbation method. Then the procedure is applied to estimate the stationary probability density of response of a Duffing-van der Pol oscillator under Poisson white noise excitation. Theoretical results agree well with Monte Carlo simulations.
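    The Monte Carlo side of such a comparison is straightforward to set up: integrate the oscillator with an Euler scheme and add compound-Poisson impulses to the velocity. The sketch below assumes a common parameterization of the Duffing-van der Pol oscillator with hypothetical coefficients; it is not the paper's exact system.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(T=500.0, dt=1e-3, eps=0.1, omega=1.0, alpha=1.0,
                 lam=50.0, jump_std=0.05):
        """Euler scheme for x'' - eps*(1 - x**2)*x' + omega**2*x + alpha*x**3 = W(t),
        with W(t) a Poisson white noise: impulse rate lam, Gaussian jump sizes."""
        n = int(T / dt)
        x, v = 0.1, 0.0
        xs = np.empty(n)
        for i in range(n):
            a = eps * (1.0 - x * x) * v - omega**2 * x - alpha * x**3
            v += a * dt
            k = rng.poisson(lam * dt)          # impulses arriving in [t, t + dt)
            if k:
                v += rng.normal(0.0, jump_std, k).sum()
            x += v * dt
            xs[i] = x
        return xs

    xs = simulate()
    # Histogram of the displacement after a burn-in: a Monte Carlo estimate of
    # the stationary probability density that the averaged GFPK equation targets
    hist, edges = np.histogram(xs[len(xs) // 10:], bins=60, density=True)
    print(edges[np.argmax(hist)])
    ```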

  18. MR selective flow-tracking cartography: a postprocessing procedure applied to four-dimensional flow MR imaging for complete characterization of cranial dural arteriovenous fistulas.

    Science.gov (United States)

    Edjlali, Myriam; Roca, Pauline; Rabrait, Cécile; Trystram, Denis; Rodriguez-Régent, Christine; Johnson, Kevin M; Wieben, Oliver; Turski, Patrick; Meder, Jean-François; Naggara, Olivier; Oppenheim, Catherine

    2014-01-01

    To assess the feasibility of a selective flow-tracking cartographic procedure applied to four-dimensional (4D) flow imaging and to demonstrate its usefulness in the characterization of dural arteriovenous fistulas (DAVFs). Institutional review board approval was obtained, and all patients provided written informed consent. Eight patients (nine DAVFs) underwent 3.0-T magnetic resonance (MR) imaging and digital subtraction angiography (DSA). Imaging examinations were performed within 24 hours of each other. 4D flow MR imaging was performed by using a 4D radial phase-contrast vastly undersampled isotropic projection reconstruction pulse sequence with an isotropic spatial resolution of 0.86 mm (5 minutes 35 seconds). Two radiologists independently reviewed images from MR flow-tracking cartography and reported the location of arterial feeder vessels and the venous drainage type and classified DAVFs according to the risk of rupture (Cognard classification). These results were compared with those at DSA. Quadratic weighted κ statistics with their 95% confidence intervals (CIs) were used to test intermodality agreement in the identification of arterial feeder vessels, draining veins, and Cognard classification. Interreader agreement for shunt location on MR images was perfect (κ = 1), with good-to-excellent interreader agreement for arterial feeder vessel identification (κ = 0.97; 95% CI = 0.92, 1.0), and matched in all cases with shunt location defined at DSA. There was good-to-excellent agreement between MR cartography and DSA in the definition of the main feeding arteries (κ = 0.92; 95% CI = 0.83, 1.0), presence of retrograde flow in dural sinuses (κ = 1), presence of retrograde cortical venous drainage (κ = 1), presence of venous ectasia (κ = 1), and final Cognard classification of DAVFs (κ = 1, standard error = 0.35). MR selective flow-tracking cartography enabled the noninvasive characterization of cranial DAVFs. © RSNA, 2013.

  19. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  20. A novel approach for the averaging of magnetocardiographically recorded heart beats

    Energy Technology Data Exchange (ETDEWEB)

    DiPietroPaolo, D [Advanced Technologies Biomagnetics, Pescara (Italy); Mueller, H-P [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany); Erne, S N [Division for Biosignals and Imaging Technologies, Central Institute for Biomedical Engineering, Ulm University, D-89069 Ulm (Germany)

    2005-05-21

    Performing signal averaging in an efficient and correct way is indispensable since it is a prerequisite for a broad variety of magnetocardiographic (MCG) analysis methods. One of the most common procedures for performing the signal averaging to increase the signal-to-noise ratio (SNR) in magnetocardiography, as well as in electrocardiography (ECG), is done by means of spatial or temporal techniques. In this paper, an improvement of the temporal averaging method is presented. In order to obtain an accurate signal detection, temporal alignment methods and objective classification criteria are developed. The processing technique based on hierarchical clustering is introduced to take into account the non-stationarity of the noise and, to some extent, the biological variability of the signals reaching the optimum SNR. The method implemented is especially designed to run fast and does not require any interaction from the operator. The averaging procedure described in this work is applied to the averaging of MCG data as an example, but with its intrinsic properties it can also be applied to the averaging of ECG recording, averaging of body-surface-potential mapping (BSPM) and averaging of magnetoencephalographic (MEG) or electroencephalographic (EEG) signals.
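    Two of the key ingredients, temporal alignment of the detected beats and clustering before averaging, can be conveyed in a stripped-down form. This is a simplification under generic assumptions (cross-correlation alignment, average-linkage clustering on correlation distance, averaging only the dominant cluster), not the authors' algorithm:

    ```python
    import numpy as np
    from scipy.signal import correlate
    from scipy.cluster.hierarchy import linkage, fcluster

    def align(beat, template):
        """Shift a beat to maximize its cross-correlation with the template."""
        lag = np.argmax(correlate(beat, template, mode="same")) - len(beat) // 2
        return np.roll(beat, -lag)

    def cluster_average(beats, dist_threshold=0.5):
        """Align beats, reject outlier morphologies by hierarchical clustering,
        and average the dominant cluster to raise the SNR."""
        aligned = np.array([align(b, beats[0]) for b in beats])
        Z = linkage(aligned, method="average", metric="correlation")
        labels = fcluster(Z, t=dist_threshold, criterion="distance")
        main = np.bincount(labels).argmax()
        return aligned[labels == main].mean(axis=0)

    # Toy data: jittered noisy copies of a pulse, plus two noise-only "beats"
    rng = np.random.default_rng(5)
    t = np.linspace(-1, 1, 300)
    pulse = np.exp(-(t / 0.08) ** 2)
    beats = [np.roll(pulse, rng.integers(-20, 20)) + rng.normal(0, 0.15, t.size)
             for _ in range(40)]
    beats += [rng.normal(0, 0.15, t.size) for _ in range(2)]
    avg = cluster_average(np.array(beats))
    print(avg.max())   # approaches 1 as averaging suppresses the noise
    ```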

  1. Applied Hierarchical Cluster Analysis with Average Linkage Algorithm

    Directory of Open Access Journals (Sweden)

    Cindy Cahyaning Astuti

    2017-11-01

    Full Text Available This research was conducted in Sidoarjo District, using secondary data from the book "Kabupaten Sidoarjo Dalam Angka 2016". The authors chose 12 variables that represent sub-district characteristics in Sidoarjo, covering four sectors: geography, education, agriculture, and industry. Determining how evenly geographical conditions, education, agriculture, and industry are distributed across sub-districts requires an analysis that classifies sub-districts based on these characteristics. Hierarchical cluster analysis is the analytical technique used to classify or categorize objects into relatively homogeneous groups, each expressed as a cluster. The results are expected to provide information about dominant and non-dominant sub-district characteristics in the four sectors, based on the clusters formed.
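    With standard tooling, the analysis described here reduces to a few lines: standardize the indicator matrix, build an average-linkage dendrogram, and cut it into clusters. A generic sketch with synthetic data standing in for the 12 sub-district indicators:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(11)
    # Stand-in for 18 sub-districts x 12 indicators spanning the four sectors
    # (geography, education, agriculture, industry); real values would come
    # from the statistical yearbook
    X = rng.normal(size=(18, 12))
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize the indicators

    Z = linkage(X, method="average", metric="euclidean")   # average linkage
    clusters = fcluster(Z, t=3, criterion="maxclust")      # cut into 3 groups
    for c in np.unique(clusters):
        print(f"cluster {c}: sub-districts {np.where(clusters == c)[0]}")
    ```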

  2. Quality Control Procedures Applied to the CMS Muon Chambers Built at CIEMAT; Procedimientos de Control de Calildad de las Camaras de Muones del Experimento CMS Construidas en el CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Fouz, M. C.; Puerta Pelayo, J.

    2004-07-01

    In this document the quality control procedures applied to the CMS muon drift chambers built at CIEMAT are described. It includes a description of the high-voltage and front-end electronics associated with the chambers. Every procedure is described in detail, and a list of the more common problems and possible solutions is given. This document can be considered a chamber test handbook for beginners. (Author) 3 refs.

  3. Quantum Averaging of Squeezed States of Light

    DEFF Research Database (Denmark)

    Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...

  4. An Integrated Experimental-Modelling Procedure Applied to the Design of a Field Scale Goethite Nanoparticle Injection for the Remediation of Contaminated Sites

    Science.gov (United States)

    Bianco, C.; Tosco, T.; Sethi, R.

    2017-12-01

    Nanoremediation is a promising in-situ technology for the reclamation of contaminated aquifers. It consists in the subsurface injection of a reactive colloidal suspension for the in-situ treatment of pollutants. The overall success of this technology at the field scale is strictly related to the achievement of an effective and efficient emplacement of the nanoparticles (NP) inside the contaminated area. Mathematical models can be used to support the design of nanotechnology-based remediation by effectively assessing the expected NP mobility at the field scale. Several analytical and numerical tools have been developed in recent years to model the transport of NPs in simplified geometry and boundary conditions. The numerical tool MNMs was developed by the authors of this work to simulate colloidal transport in 1D Cartesian and radial coordinates. A new modelling tool, MNM3D (Micro and Nanoparticle transport Model in 3D geometries), was also proposed for the simulation of injection and transport of NP suspensions in generic complex scenarios. MNM3D accounts for the simultaneous dependency of NP transport on water ionic strength and velocity. The software was developed to predict the NP mobility at different stages of a nanoremediation application, from the design stage to the prediction of the long-term fate after injection. In this work an integrated experimental-modelling procedure is applied to support the design of a field scale injection of goethite NPs carried out in the framework of the H2020 European project Reground. Column tests are performed at different injection flowrates using natural sand collected at the contaminated site as porous medium. The tests are interpreted using MNMs to characterize the NP mobility and derive the constitutive equations describing the suspension behavior in the natural porous medium. MNM3D is then used to predict NP behavior during the field scale injection and to assess the long-term mobility of the injected slurry. Finally

  5. Correctional Facility Average Daily Population

    Data.gov (United States)

    Montgomery County of Maryland — This dataset contains accumulated monthly details from the Pre-Trial average daily caseload, Detention Services, and the average daily population for MCCF, MCDC, PRRS and...

  6. 40 CFR Appendix B to Part 76 - Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers

    Science.gov (United States)

    2010-07-01

    ... Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers B Appendix B to Part 76 Protection of... of Nitrogen Oxides Controls Applied to Group 1, Boilers 1. Purpose and Applicability This technical... section 407 of the Act).” In developing the allowable NOX emissions limitations for Group 2 boilers...

  7. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
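    The central identity is easy to verify numerically. Writing x̄_u and x̄_v for the averages of x under weighting functions u and v, and r = u/v for the ratio of the weighting functions, the result states x̄_u − x̄_v = Cov_v(x, r) / E_v[r], with the covariance and mean taken under the v-weighting. A quick check in my own notation, with arbitrary numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=1000)        # the variable (e.g., a rate by age)
    u = rng.random(1000) + 0.1       # one weighting function
    v = rng.random(1000) + 0.1       # an alternative weighting function

    def wavg(a, w):
        return np.sum(w * a) / np.sum(w)

    r = u / v
    lhs = wavg(x, u) - wavg(x, v)
    cov_v = wavg(x * r, v) - wavg(x, v) * wavg(r, v)   # v-weighted covariance
    rhs = cov_v / wavg(r, v)
    print(np.isclose(lhs, rhs))      # True: the identity holds exactly
    ```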

  8. Microscale Procedure for Inorganic Qualitative Analysis with Emphasis on Writing Equations: Chemical Fingerprinting Applied to the "n"-bottle Problem of Matching Samples with Their Formulas

    Science.gov (United States)

    Sattsangi, Prem D.

    2014-01-01

    A laboratory method for teaching inorganic qualitative analysis and chemical equations is described. The experiment has been designed to focus attention on cations and anions that react to form products. This leads to a logical approach to understand and write chemical equations. The procedure uses 3 mL plastic micropipettes to store and deliver…

  9. Fixed Average Spectra of Orchestral Instrument Tones

    Directory of Open Access Journals (Sweden)

    Joseph Plazak

    2010-04-01

    Full Text Available The fixed spectrum for an average orchestral instrument tone is presented based on spectral data from the Sandell Harmonic Archive (SHARC). This database contains non-time-variant spectral analyses for 1,338 recorded instrument tones from 23 Western instruments ranging from contrabassoon to piccolo. From these spectral analyses, a grand average was calculated, providing what might be considered an average non-time-variant harmonic spectrum. In addition, average tones were calculated for each pitch; each of these tones represents the average of all instruments in the SHARC database capable of producing that pitch. These latter tones better represent common spectral changes with respect to pitch register, and might be regarded as an “average instrument.” Although several caveats apply, an average harmonic tone or instrument may prove useful in analytic and modeling studies. In addition, for perceptual experiments in which non-time-variant stimuli are needed, an average harmonic spectrum may prove to be more ecologically appropriate than common technical waveforms, such as sine tones or pulse trains. Synthesized average tones are available via the web.

  10. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)

  11. Office of Inspector General report on Naval Petroleum Reserve Number 1, independent accountant's report on applying agreed-upon procedures

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    On October 6, 1997, the Department of Energy (DOE) announced it had agreed to sell all of the Government's interest in Naval Petroleum Reserve Number 1 (NPR-1) to Occidental Petroleum Corporation for $3.65 billion. This report presents the results of the independent certified public accountants' agreed-upon procedures work on the Preliminary Settlement Statement of the Purchase and Sale Agreement between DOE and Occidental. To fulfill their responsibilities, the Office of Inspector General contracted with the independent public accounting firm of KPMG Peat Marwick LLP to conduct the work for them, subject to their review. The work was done in accordance with the Statements on Standards for Attestation Engagements issued by the American Institute of Certified Public Accountants. As such, the independent certified public accountants performed only work that was agreed upon by DOE and Occidental. This report is intended solely for the use of DOE and Occidental and should not be used by those who have not agreed to the procedures and taken responsibility for the sufficiency of the procedures for their purposes. However, this report is a matter of public record, and its distribution is not limited. The independent certified public accountants identified over 20 adjustments to the Preliminary Settlement Statement that would result in a $10.8 million increase in the sale price.

  12. Small scale magnetic flux-averaged magnetohydrodynamics

    International Nuclear Information System (INIS)

    Pfirsch, D.; Sudan, R.N.

    1994-01-01

    By relaxing exact magnetic flux conservation below a scale λ, a system of flux-averaged magnetohydrodynamic equations is derived from Hamilton's principle with modified constraints. An energy principle can be derived from the linearized averaged system because the total system energy is conserved. This energy principle is employed to treat the resistive tearing instability, and the exact growth rate is recovered when λ is identified with the resistive skin depth. A necessary and sufficient stability criterion for the tearing instability with line tying at the ends, relevant to solar coronal loops, is also obtained. The method is extended to both spatial and temporal averaging in Hamilton's principle. The resulting system of equations not only allows flux reconnection but introduces irreversibility for an appropriate choice of the averaging function. Except for boundary contributions, which are modified by the time-averaging process, total energy and momentum are conserved over times much longer than the averaging time τ, but not over shorter times. These modified boundary contributions correspond to the existence of damped waves and shock waves in this theory. Time and space averaging is applied to electron magnetohydrodynamics and, in one-dimensional geometry, predicts solitons and shocks in different limits.

  13. Symmetric Euler orientation representations for orientational averaging.

    Science.gov (United States)

    Mayerhöfer, Thomas G

    2005-09-01

    A new kind of orientation representation called symmetric Euler orientation representation (SEOR) is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of the SEORs concerning orientational averaging are explored and compared to those of averaging schemes that are based on conventional Euler orientation representations. To that aim, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated. The calculation was carried out according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations leads to a dependence of the result on the specific Euler orientation representation that was utilized and on the initial position of the crystal. The latter problem can be partly overcome by the introduction of a weighting factor, but only for two-axes-type Euler orientation representations. In the case of a numerical evaluation of the average, a residual difference remains even if a two-axes-type Euler orientation representation is used, despite the utilization of a weighting factor. In contrast, this problem does not occur in principle if a symmetric Euler orientation representation is used, while the results of the averaging for both types of orientation representations converge with increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.

  14. Methodology of Isochronal and Isothermal Anneals applied to Irradiated MOS Structures. Application to Post-Irradiation Effects (in Space, Accelerators) and Standard Test Procedures

    International Nuclear Information System (INIS)

    Chabrerie, Christian

    1997-01-01

    We report the development of a methodology using isochronal and isothermal anneals for the characterization of irradiated MOS (Metal-Oxide-Semiconductor) transistors in electronic components. We study the recovery kinetics of the post-irradiation effects and the modeling of the temperature-activated recovery phenomena. This allows us to understand the basic physical mechanisms that have led to the definition of standard test procedures. The fields of application are numerous (space, military, accelerators for high energy physics, civilian nuclear and harsh environment robotics). We begin by outlining the context of our study and by presenting the current standard test procedures (TM1019.4 and BS22900) used for the qualification of integrated circuits. We then review the different theories of temperature-activated phenomena. The link between the foundations of the normalized procedures and the thermally activated phenomena is clarified. From this analysis, we propose a new approach, mainly based on the use of isochronal anneals. During this work, we have developed two tools to this end: the first is a software tool, a numerical simulation program for thermally activated phenomena; the second is a specific automated annealing bench (in particular, isochronal) that we designed. The applications and results are then presented in four parts: the first presents simulation results computed using our calculation code; the second concerns experimental results obtained with thin oxides from different rad-hard technologies and their application to the study of transistor gate oxides; the third develops results on non-hardened thick technological oxides and their consequences for the lateral leakage currents due to parasitic MOS structures in 'commercial' components; the fourth concerns the post-irradiation evolution of interface states during isochronal anneals. We conclude with a number of recommendations concerning the post

  15. Lagrangian averaging with geodesic mean

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  16. Applying 'Evidence-Based Medicine' Theory to Interventional Radiology.Part 2: A Spreadsheet for Swift Assessment of Procedural Benefit and Harm

    International Nuclear Information System (INIS)

    MacEneaney, Peter M.; Malone, Dermot E.

    2000-01-01

    AIM: To design a spreadsheet program to analyse interventional radiology (IR) data rapidly produced in local research or reported in the literature using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. MATERIALS AND METHODS: Microsoft Excel™ was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are: benefit -- relative risk reduction, absolute risk reduction, number needed to treat (NNT); harm -- relative risk, relative odds, number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. RESULTS: This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. CONCLUSION: A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures.

  17. Convergence of multiple ergodic averages

    OpenAIRE

    Host, Bernard

    2006-01-01

    These notes are based on a course for a general audience given at the Centro de Modelamiento Matemático of the University of Chile, in December 2004. We study the mean convergence of multiple ergodic averages, that is, averages of a product of functions taken at different times. We also describe the relations between this area of ergodic theory and some classical and some recent results in additive number theory.

  18. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  19. Cumulative and Averaging Fission of Beliefs

    OpenAIRE

    Josang, Audun

    2007-01-01

    Belief fusion is the principle of combining separate beliefs or bodies of evidence originating from different sources. Depending on the situation to be modelled, different belief fusion methods can be applied. Cumulative and averaging belief fusion is defined for fusing opinions in subjective logic, and for fusing belief functions in general. The principle of fission is the opposite of fusion, namely to eliminate the contribution of a specific belief from an already fused belief, with the pur...

  20. Cryo-Electron Tomography and Subtomogram Averaging.

    Science.gov (United States)

    Wan, W; Briggs, J A G

    2016-01-01

    Cryo-electron tomography (cryo-ET) allows 3D volumes to be reconstructed from a set of 2D projection images of a tilted biological sample. It allows densities to be resolved in 3D that would otherwise overlap in 2D projection images. Cryo-ET can be applied to resolve structural features in complex native environments, such as within the cell. Analogous to single-particle reconstruction in cryo-electron microscopy, structures present in multiple copies within tomograms can be extracted, aligned, and averaged, thus increasing the signal-to-noise ratio and resolution. This reconstruction approach, termed subtomogram averaging, can be used to determine protein structures in situ. It can also be applied to facilitate more conventional 2D image analysis approaches. In this chapter, we provide an introduction to cryo-ET and subtomogram averaging. We describe the overall workflow, including tomographic data collection, preprocessing, tomogram reconstruction, subtomogram alignment and averaging, classification, and postprocessing. We consider theoretical issues and practical considerations for each step in the workflow, along with descriptions of recent methodological advances and remaining limitations. © 2016 Elsevier Inc. All rights reserved.
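    The core loop, aligning each extracted subvolume to a reference and then averaging to raise the signal-to-noise ratio, can be conveyed with a translation-only toy. Real pipelines also search over rotations, handle the missing wedge, and iterate the reference; this sketch assumes none of that:

    ```python
    import numpy as np

    def correlation_shift(vol, ref):
        """Integer 3D shift aligning vol to ref, found by FFT cross-correlation."""
        cc = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(vol))).real
        peak = np.unravel_index(np.argmax(cc), vol.shape)
        return [p if p <= s // 2 else p - s for p, s in zip(peak, vol.shape)]

    def subtomogram_average(subvols, ref):
        aligned = [np.roll(v, correlation_shift(v, ref), axis=(0, 1, 2))
                   for v in subvols]
        return np.mean(aligned, axis=0)

    # Toy particle: a bright blob, randomly shifted and buried in noise
    rng = np.random.default_rng(8)
    g = np.mgrid[-16:16, -16:16, -16:16]
    particle = np.exp(-(g ** 2).sum(axis=0) / 30.0)
    subvols = [np.roll(particle, rng.integers(-4, 5, 3), axis=(0, 1, 2))
               + rng.normal(0, 1.0, particle.shape) for _ in range(60)]

    avg = subtomogram_average(subvols, particle)
    print(avg.max())   # the blob's peak (about 1) re-emerges from the noise
    ```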

  1. Implementation of procedures for kilovoltage evaluation applied to dental X ray system; Implementacao de procedimentos para avaliacao de quilovoltagem aplicado em tubos de raios X odontologicos

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Paula S. Sasaki; Potiens, Maria da Penha A. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)]. E-mail: psasaki@ipen.br; mppalbu@ipen.br

    2005-07-01

    In this work, measurements were performed in order to evaluate the accuracy and precision of the voltage applied to an X-ray tube, as well as its variation with distance. A dental X-ray system with a nominal voltage of 70 kV was used, together with a portable digital kV meter calibrated by the IEE/USP. The kV results obtained presented a variation of 9.7% in accuracy and 1.6% in precision. The results obtained for the distance variation showed a deviation of only 0.6% in the kVp values obtained. The results are in accordance with the minimum values recommended by Portaria Federal 453 from the Ministerio da Saude. (author)

  2. A practical CBA-based screening procedure for identification of river basins where the costs of fulfilling the WFD requirements may be disproportionate – applied to the case of Denmark

    DEFF Research Database (Denmark)

    Jensen, Carsten Lynge; Jacobsen, Brian H.; Olsen, Søren Bøye

    2013-01-01

    The European Union’s (EU) Water Framework Directive (WFD) is implemented as an instrument to obtain good ecological status in waterbodies of Europe. The directive recognises the need to accommodate social and economic considerations to obtain cost-effective implementation of the directive... disproportionate costs at the national level. Specifically, we propose to use a screening procedure based on a relatively conservative cost–benefit analysis (CBA) as a first step towards identifying areas where costs could be disproportionate. We provide an empirical example by applying the proposed screening procedure to a total of 23 river basin areas in Denmark where costs and benefits are estimated for each of the areas. The results suggest that costs could be disproportionate in several Danish river basins. The sensitivity analysis further helps to pinpoint two or three basins where we suggest that much...

  3. Averaging of multivalued differential equations

    Directory of Open Access Journals (Sweden)

    G. Grammel

    2003-04-01

    Full Text Available Nonlinear multivalued differential equations with slow and fast subsystems are considered. Under transitivity conditions on the fast subsystem, the slow subsystem can be approximated by an averaged multivalued differential equation. The approximation in the Hausdorff sense is of order O(ϵ^{1/3}) as ϵ→0.

  4. Fuzzy Weighted Average: Analytical Solution

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2009-01-01

    An algorithm is presented for the computation of analytical expressions for the extremal values of the α-cuts of the fuzzy weighted average, for triangular or trapezoidal weights and attributes. Also, an algorithm for the computation of the inverses of these expressions is given, providing exact

  5. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources is briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  6. Polyhedral Painting with Group Averaging

    Science.gov (United States)

    Farris, Frank A.; Tsao, Ryan

    2016-01-01

    The technique of "group-averaging" produces colorings of a sphere that have the symmetries of various polyhedra. The concepts are accessible at the undergraduate level, without being well-known in typical courses on algebra or geometry. The material makes an excellent discovery project, especially for students with some background in…

  7. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    Full Text Available People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  8. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although the restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
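    The estimator under study is the familiar recursive smoother applied to successive periodograms. A minimal sketch of that recursion on generic white noise (the paper's distributional results concern exactly this kind of estimate):

    ```python
    import numpy as np

    def exp_avg_psd(x, nfft=256, a=0.1):
        """Exponentially averaged periodogram: S_k = (1 - a)*S_{k-1} + a*P_k,
        where P_k is the raw periodogram of the k-th data segment."""
        S = None
        for start in range(0, len(x) - nfft + 1, nfft):
            seg = x[start:start + nfft]
            P = np.abs(np.fft.rfft(seg)) ** 2 / nfft   # raw periodogram
            S = P if S is None else (1.0 - a) * S + a * P
        return S

    rng = np.random.default_rng(4)
    x = rng.normal(size=200 * 256)       # unit-variance white noise
    S = exp_avg_psd(x)
    print(S[1:-1].mean())                # flat spectrum: close to 1 per bin
    ```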

  9. Operant procedures applied to a conversion disorder

    NARCIS (Netherlands)

    Kop, P.F.M.; Heijden, H.P. van der; Hoogduin, K.A.L.; Schaap, C.P.D.R.

    1995-01-01

    Conversion symptoms, as identified by the existing classification systems, do not form a single unitary class. It is therefore difficult to find an adequate treatment that closely connects to the basic characteristics of the disease. Instead, we designed a treatment method that closely relates to

  10. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation, and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
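    For contrast with the proposed FTDA, classical time domain averaging is worth seeing in code: slice the signal into whole periods and average them, which reinforces the synchronous components and suppresses everything else. A basic sketch that assumes the period is an exact integer number of samples, which is precisely the situation in which PCE does not arise:

    ```python
    import numpy as np

    def time_domain_average(x, period):
        """Classical TDA: average whole periods of x (period given in samples)."""
        n = (len(x) // period) * period
        return x[:n].reshape(-1, period).mean(axis=0)

    rng = np.random.default_rng(6)
    period, revs = 200, 400
    t = np.arange(period * revs)
    # Periodic gear-mesh-like component buried in strong broadband noise
    signal = (np.sin(2 * np.pi * t / period)
              + 0.3 * np.sin(2 * np.pi * 7 * t / period))
    x = signal + rng.normal(0, 2.0, t.size)

    avg = time_domain_average(x, period)
    print(np.std(avg - signal[:period]))   # residual noise ~ 2/sqrt(revs) = 0.1
    ```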

  11. Instantaneous, phase-averaged, and time-averaged pressure from particle image velocimetry

    Science.gov (United States)

    de Kat, Roeland

    2015-11-01

    Recent work on pressure determination using velocity data from particle image velocimetry (PIV) resulted in approaches that allow for instantaneous and volumetric pressure determination. However, applying these approaches is not always feasible (e.g. due to resolution, access, or other constraints) or desired. In those cases pressure determination approaches using phase-averaged or time-averaged velocity provide an alternative. To assess the performance of these different pressure determination approaches against one another, they are applied to a single data set and their results are compared with each other and with surface pressure measurements. For this assessment, the data set of a flow around a square cylinder (de Kat & van Oudheusden, 2012, Exp. Fluids 52:1089-1106) is used. RdK is supported by a Leverhulme Trust Early Career Fellowship.

  12. Human factoring administrative procedures

    International Nuclear Information System (INIS)

    Grider, D.A.; Sturdivant, M.H.

    1991-01-01

    In nonnuclear business, administrative procedures bring to mind such mundane topics as filing correspondence and scheduling vacation time. In the nuclear industry, on the other hand, administrative procedures play a vital role in assuring the safe operation of a facility. For some time now, industry focus has been on improving technical procedures, and significant efforts are under way to produce them. Producing a technical procedure requires that a validated technical, regulatory, and administrative basis be developed and that the technical process be established for each procedure. Producing usable technical procedures requires that procedure presentation be engineered to the same human factors principles used in control room design. The vital safety role of administrative procedures requires that they be just as sound, just as rigorously formulated, and just as well documented as technical procedures. Procedure programs at the Tennessee Valley Authority and at Boston Edison's Pilgrim Station demonstrate that human factors engineering techniques can be applied effectively to technical procedures. With a few modifications, those same techniques can be used to produce more effective administrative procedures. Efforts are under way at the US Department of Energy Nuclear Weapons Complex and at some utilities (Boston Edison, for instance) to apply human factors engineering to administrative procedures. The techniques being adapted include the following

  13. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  14. Exact Membership Functions for the Fuzzy Weighted Average

    NARCIS (Netherlands)

    van den Broek, P.M.; Noppen, J.A.R.

    2011-01-01

    The problem of computing the fuzzy weighted average, where both attributes and weights are fuzzy numbers, is well studied in the literature. Generally, the approach is to apply Zadeh’s extension principle to compute α-cuts of the fuzzy weighted average from the α-cuts of the attributes and weights
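    Absent the analytical expressions, the α-cut extremes of the fuzzy weighted average can be checked by brute force: for fixed attribute and weight intervals, f(w) = Σ w_i x_i / Σ w_i is monotone in each w_i separately, so its extremes over the weight box occur at vertices. A sketch based on that standard observation, with toy triangular fuzzy numbers of my own choosing:

    ```python
    from itertools import product

    def alpha_cut(tri, alpha):
        """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
        a, b, c = tri
        return (a + alpha * (b - a), c - alpha * (c - b))

    def fwa_alpha_cut(x_tris, w_tris, alpha):
        """Alpha-cut of the fuzzy weighted average by vertex enumeration."""
        x_cuts = [alpha_cut(t, alpha) for t in x_tris]
        w_cuts = [alpha_cut(t, alpha) for t in w_tris]
        vals = [sum(w * x for w, x in zip(ws, xs)) / sum(ws)
                for ws in product(*w_cuts) for xs in product(*x_cuts)]
        return min(vals), max(vals)

    x = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]                     # fuzzy attributes
    w = [(0.1, 0.3, 0.5), (0.2, 0.4, 0.6), (0.3, 0.5, 0.7)]   # fuzzy weights
    for alpha in (0.0, 0.5, 1.0):
        print(alpha, fwa_alpha_cut(x, w, alpha))
    ```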

  15. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach to devise a ratings scheme that we apply to the data from the NetFlix prize, and find a significant improvement using our method over a baseline.

  16. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  17. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
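    Outside the MCMC setting, trajectory averaging is the classical Polyak-Ruppert construction: run a Robbins-Monro recursion with a slowly decaying step size and report the running average of the iterates, which attains the efficient asymptotic variance. A generic sketch of that textbook idea (not the SAMC-specific algorithm analyzed in the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def robbins_monro_averaged(grad, theta0, n=50000, gamma0=0.5, exponent=0.7):
        """Robbins-Monro recursion with Polyak-Ruppert trajectory averaging."""
        theta, running_sum = theta0, 0.0
        for k in range(1, n + 1):
            g = grad(theta) + rng.normal()          # noisy gradient observation
            theta -= gamma0 / k ** exponent * g     # slowly decaying step size
            running_sum += theta
        return theta, running_sum / n               # last iterate vs. average

    # Root-finding for E[g(theta)] = theta - 3 under observation noise
    last, averaged = robbins_monro_averaged(lambda th: th - 3.0, theta0=0.0)
    print(last, averaged)   # the averaged iterate is typically closer to 3
    ```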

  18. Site Averaged Neutron Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  19. Site Averaged Gravimetric Soil Moisture: 1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  20. Site Averaged Gravimetric Soil Moisture: 1988 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  1. Site Averaged Gravimetric Soil Moisture: 1987 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  3. Averaging hydraulic head, pressure head, and gravitational head in subsurface hydrology, and implications for averaged fluxes, and hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    G. H. de Rooij

    2009-07-01

    Full Text Available Current theories for water flow in porous media are valid at scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited in order to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and by the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to a general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.

  4. Bootstrapping pre-averaged realized volatility under market microstructure noise

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour

    The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are kn-dependent with kn growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure...
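
    A sketch of the pre-averaging ingredient only (the bootstrap itself is not implemented), under simplifying assumptions: Brownian efficient price, i.i.d. Gaussian microstructure noise, and the common weight choice g(x) = min(x, 1 - x) evaluated on a grid of kn points; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n, kn = 23400, 30                       # observations and block length
sigma = 0.01 / np.sqrt(n)               # per-tick efficient volatility
eps = 0.0005                            # microstructure noise std dev

x = np.cumsum(rng.normal(scale=sigma, size=n))  # efficient log-price
y = x + rng.normal(scale=eps, size=n)           # observed noisy price
r = np.diff(y)                                  # noisy returns

# Weights g(j/kn) = min(j/kn, 1 - j/kn) for j = 1, ..., kn - 1.
j = np.arange(1, kn) / kn
g = np.minimum(j, 1 - j)

# Pre-averaged returns over all overlapping blocks of consecutive returns;
# these are kn-dependent, which motivates a blockwise bootstrap.
pre = np.array([g @ r[i:i + kn - 1] for i in range(len(r) - kn + 2)])
print("number of overlapping pre-averaged returns:", len(pre))
```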

  5. 40 CFR 63.1332 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... other controls for a Group 1 storage vessel, batch process vent, aggregate batch vent stream, continuous... calculated using the procedures in § 63.1323(b). (B) If the batch process vent is controlled using a control... pollution prevention in generating emissions averaging credits. (1) Storage vessels, batch process vents...

  6. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires a finely averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs

  7. Ultra-low noise miniaturized neural amplifier with hardware averaging.

    Science.gov (United States)

    Dweiri, Yazan M; Eggers, Thomas; McCallum, Grant; Durand, Dominique M

    2015-08-01

    Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore, the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and the high channel count in electrode arrays. This

  8. Ultra-low noise miniaturized neural amplifier with hardware averaging

    Science.gov (United States)

    Dweiri, Yazan M.; Eggers, Thomas; McCallum, Grant; Durand, Dominique M.

    2015-08-01

    Objective. Peripheral nerves carry neural signals that could be used to control hybrid bionic systems. Cuff electrodes provide a robust and stable interface but the recorded signal amplitude is small (noise of less than 1 μVrms for a useful signal-to-noise ratio (SNR). Flat interface nerve electrode (FINE) contacts alone generate thermal noise of at least 0.5 μVrms; therefore, the amplifier should add as little noise as possible. Since mainstream neural amplifiers have a baseline noise of 2 μVrms or higher, novel designs are required. Approach. Here we apply the concept of hardware averaging to nerve recordings obtained with cuff electrodes. An optimization procedure is developed to minimize noise and power simultaneously. The novel design was based on existing neural amplifiers (Intan Technologies, LLC) and is validated with signals obtained from the FINE in chronic dog experiments. Main results. We showed that hardware averaging leads to a reduction in the total recording noise by a factor of 1/√N or less depending on the source resistance. Chronic recording of physiological activity with FINE using the presented design showed significant improvement in the recorded baseline noise with at least two parallel operation transconductance amplifiers leading to a 46.1% reduction at N = 8. The functionality of these recordings was quantified by the SNR improvement and shown to be significant for N = 3 or more. The present design was shown to be capable of generating noise when connected to a FINE placed on the sciatic nerve of an awake animal. An algorithm was introduced to find the value of N that can minimize both the power consumption and the noise in order to design a miniaturized ultralow-noise neural amplifier. Significance. These results demonstrate the efficacy of hardware averaging on noise improvement for neural recording with cuff electrodes, and can accommodate the presence of high source impedances that are associated with the miniaturized contacts and
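
    A small simulation of the core claim, assuming N parallel amplifier channels that see the same input but add independent noise, so the averaged residual shrinks like 1/sqrt(N). The 2 μVrms per-amplifier noise echoes the abstract, but the waveform, signal level and sample rate are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 20000                                      # sample rate (Hz)
t = np.arange(fs) / fs                          # one second of data
signal = 0.5e-6 * np.sin(2 * np.pi * 1000 * t)  # 0.5 uV "neural" tone
amp_noise = 2e-6                                # per-amplifier noise (2 uVrms)

for N in (1, 2, 4, 8):
    # N amplifiers digitize the same input; their outputs are averaged.
    chans = signal + rng.normal(scale=amp_noise, size=(N, t.size))
    resid = chans.mean(axis=0) - signal
    print(f"N={N}: residual noise {resid.std() * 1e6:.2f} uVrms "
          f"(theory {amp_noise / np.sqrt(N) * 1e6:.2f})")
```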

  9. Weighted south-wide average pulpwood prices

    Science.gov (United States)

    James E. Granskog; Kevin D. Growther

    1991-01-01

    Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
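
    The computation itself is elementary; with invented state-level figures, the sketch below shows how weighting each state's price by its pulpwood production shifts the regional average relative to the unweighted mean.

```python
# Hypothetical prices ($/cord) and production (thousand cords) by state.
prices     = {"AL": 22.0, "GA": 25.5, "MS": 19.0}
production = {"AL": 5200, "GA": 9800, "MS": 2100}

unweighted = sum(prices.values()) / len(prices)
weighted = (sum(prices[s] * production[s] for s in prices)
            / sum(production.values()))

print(f"unweighted South-wide average: ${unweighted:.2f}")
print(f"production-weighted average:   ${weighted:.2f}")
```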

  10. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  11. Decision-making Procedures

    DEFF Research Database (Denmark)

    Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander Christopher

    2009-01-01

    It is a persistent finding in psychology and experimental economics that people's behavior is not only shaped by outcomes but also by decision-making procedures. In this paper we develop a general framework capable of modelling these procedural concerns. Within the context of psychological games we define procedures as mechanisms that influence the probabilities of reaching different endnodes. We show that for such procedural games a sequential psychological equilibrium always exists. Applying this approach within a principal-agent context we show that the way less attractive jobs are allocated...

  12. Averaging in cosmological models using scalars

    International Nuclear Information System (INIS)

    Coley, A A

    2010-01-01

    The averaging problem in cosmology is of considerable importance for the correct interpretation of cosmological data. A rigorous mathematical definition of averaging in a cosmological model is necessary. In general, a spacetime is completely characterized by its scalar curvature invariants, and this suggests a particular spacetime averaging scheme based entirely on scalars. We clearly identify the problems of averaging in a cosmological model. We then present a precise definition of a cosmological model, and based upon this definition, we propose an averaging scheme in terms of scalar curvature invariants. This scheme is illustrated in a simple static spherically symmetric perfect fluid cosmological spacetime, where the averaging scales are clearly identified.

  13. Average monthly and annual climate maps for Bolivia

    KAUST Repository

    Vicente-Serrano, Sergio M.

    2015-02-24

    This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
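
    As a hedged illustration of the evaporative-demand step, the sketch below implements the standard Hargreaves formula from monthly average temperatures; the numeric inputs are made up, and the extraterrestrial radiation Ra is assumed to be supplied already converted to equivalent millimetres of evaporation per day.

```python
def hargreaves_pet(ra, tmax, tmin):
    """Hargreaves reference evapotranspiration (mm/day).

    ra:   extraterrestrial radiation as equivalent evaporation (mm/day)
    tmax: monthly average maximum temperature (deg C)
    tmin: monthly average minimum temperature (deg C)
    """
    tmean = 0.5 * (tmax + tmin)
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

# Illustrative values for one month and one 1 km grid cell.
print(f"PET = {hargreaves_pet(ra=12.0, tmax=28.0, tmin=14.0):.2f} mm/day")
```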

  14. Super convergence of ergodic averages for quasiperiodic orbits

    Science.gov (United States)

    Das, Suddhasattwa; Yorke, James A.

    2018-02-01

    The Birkhoff ergodic theorem asserts that time averages of a function f evaluated along a trajectory of length N converge to the space average, the integral of f, as N → ∞, for ergodic dynamical systems. But that convergence can be slow. Instead of uniform averages that assign equal weights to points along the trajectory, we use an average with a non-uniform distribution of weights, weighting the early and late points of the trajectory much less than those near the midpoint N/2. We show that in quasiperiodic dynamical systems, our weighted averages converge far faster provided f is sufficiently differentiable. This result can be applied to obtain efficient numerical computation of rotation numbers, invariant densities and conjugacies of quasiperiodic systems.
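
    A minimal numerical sketch of the idea, assuming exponential bump weights w(t) = exp(-1/(t(1 - t))) on (0, 1) of the kind the authors describe; the test orbit (a golden-mean circle rotation) and the observable are invented, and the true space average of the observable is exactly zero.

```python
import numpy as np

def bump_weights(N):
    t = np.arange(1, N) / N               # interior grid points of (0, 1)
    w = np.exp(-1.0 / (t * (1.0 - t)))    # vanishes to all orders at 0 and 1
    return w / w.sum()

alpha = (np.sqrt(5) - 1) / 2              # golden-mean rotation number
N = 10000
x = (alpha * np.arange(1, N)) % 1.0       # orbit of the circle rotation
f = np.cos(2 * np.pi * x)                 # space average is exactly 0

print(f"uniform Birkhoff average : {f.mean(): .3e}")
print(f"weighted Birkhoff average: {bump_weights(N) @ f: .3e}")
```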

  15. Time average vibration fringe analysis using Hilbert transformation

    International Nuclear Information System (INIS)

    Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad

    2010-01-01

    Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
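
    The record concerns 2-D fringe patterns; as a simplified stand-in, the sketch below shows the same principle in 1-D, recovering a phase modulation from a single cosine fringe signal with a spatial carrier via the analytic signal computed by scipy. All signal parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

x = np.linspace(0.0, 1.0, 2048)
carrier = 2 * np.pi * 40 * x                  # spatial carrier (40 fringes)
phase_true = 3.0 * np.sin(2 * np.pi * x)      # stand-in vibration phase
fringes = np.cos(carrier + phase_true)        # single recorded fringe pattern

analytic = hilbert(fringes)                   # fringes + i * HT(fringes)
phase = np.unwrap(np.angle(analytic)) - carrier

err = np.abs(phase - phase_true)[100:-100]    # ignore edge effects
print(f"max interior phase error: {err.max():.3f} rad")
```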

  16. Spatial averaging of a dissipative particle dynamics model for active suspensions

    Science.gov (United States)

    Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot

    2018-03-01

    Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.

  17. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attemps to reproduce the eigenenergies and the corresponding wave functions which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method for transforming fixed angular momentum projection traces into fixed angular momentum for the configuration space traces is developed. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  18. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
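
    A rough sketch of the constrained-simulation variant mentioned at the end of the abstract: tabulate the average force at discrete values of the selected coordinate, then integrate dA/dxi = -&lt;F&gt; by the trapezoidal rule. The tabulated forces here are synthetic stand-ins, not simulation output.

```python
import numpy as np

# Synthetic stand-in for mean forces <F>(xi) measured at fixed values of
# the selected coordinate xi in a series of constrained simulations.
xi = np.linspace(0.0, np.pi, 19)
mean_force = -2.5 * np.sin(xi)

# dA/dxi = -<F>; cumulative trapezoidal integration gives the profile.
dA = -mean_force
A = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dA[1:] + dA[:-1]) * np.diff(xi))))

print(f"free energy barrier: {A.max() - A.min():.3f} (energy units)")
```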

  19. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  20. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  1. Cosmological ensemble and directional averages of observables

    CERN Document Server

    Bonvin, Camille; Durrer, Ruth; Maartens, Roy; Umeh, Obinna

    2015-01-01

    We show that at second order ensemble averages of observables and directional averages do not commute due to gravitational lensing. In principle this non-commutativity is significant for a variety of quantities we often use as observables. We derive the relation between the ensemble average and the directional average of an observable, at second-order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focussing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance is increased by gravitational lensing, whereas the directional average of the distance is decreased. We show that for a generic observable, there exists a particular function of the observable that is invariant under second-order lensing perturbations.

  2. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    Additional sinusoidal and different non-sinusoidal periodic perturbations applied to periodically forced nonlinear oscillators decide the maintenance or inhibition of chaos. It is observed that the weak amplitude of ...

  3. 40 CFR 80.67 - Compliance on average.

    Science.gov (United States)

    2010-07-01

    ... of this section apply to all reformulated gasoline and RBOB produced or imported for which compliance... use to ensure the gasoline is produced by the refiner or is imported by the importer and is used only... on average. (1) The VOC-controlled reformulated gasoline and RBOB produced at any refinery or...

  4. The background effective average action approach to quantum gravity

    DEFF Research Database (Denmark)

    D’Odorico, G.; Codello, A.; Pagani, C.

    2016-01-01

    of a UV-attractive non-Gaussian fixed point, which we find to be characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....

  5. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
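
    The propagation step lends itself to a compact sketch: collect the least-squares weights in a matrix A, one row per collapsed point, so the collapsed covariance is A C Aᵀ, whose elements involve the binary products of weights described above. Inverse-variance weights and a diagonal parent covariance are simplifying assumptions; all numbers are invented.

```python
import numpy as np

# Parent data: five measurements; the first three are equivalent points,
# as are the last two.
x = np.array([10.2, 9.8, 10.0, 5.1, 4.9])
C = np.diag([0.2, 0.3, 0.25, 0.1, 0.1]) ** 2   # parent covariance matrix
groups = [[0, 1, 2], [3, 4]]

A = np.zeros((len(groups), x.size))
for row, idx in enumerate(groups):
    w = 1.0 / np.diag(C)[idx]                  # inverse-variance weights
    A[row, idx] = w / w.sum()

y = A @ x            # collapsed data set
Cy = A @ C @ A.T     # covariance by the law of error propagation
print("collapsed values:", y)
print("collapsed covariance:\n", Cy)
```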

  6. Applied Macroeconomics

    NARCIS (Netherlands)

    Heijman, W.J.M.

    2000-01-01

    This book contains a course in applied macroeconomics. Macroeconomic theory is applied to real world cases. Students are expected to compute model results with the help of a spreadsheet program. To that end the book also contains descriptions of the spreadsheet applications used, such as linear

  7. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  8. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  9. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  10. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  11. Civil Procedure.

    Science.gov (United States)

    Byer, Robert

    1997-01-01

    Briefly reviews the historical development of civil procedure (the rules that dictate how a civil case can proceed through the courts) and identifies some of its main components. Discusses procedures such as subject matter jurisdiction, personal jurisdiction, venue, discovery, motions practice, pleadings, pretrial conference, and trials. (MJP)

  12. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  13. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  14. Quantization Procedures; Sistemas de cuantificacion

    Energy Technology Data Exchange (ETDEWEB)

    Cabrera, J. A.; Martin, R.

    1976-07-01

    We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure similar to the latter for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures presented are equivalent, but for the nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs.

  15. Average action for models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.; Wetterich, C.

    1993-01-01

    The average action is a new tool for investigating spontaneous symmetry breaking in elementary particle theory and statistical mechanics beyond the validity of standard perturbation theory. The aim of this work is to provide techniques for an investigation of models with fermions and scalars by means of the average potential. In the phase with spontaneous symmetry breaking, the inner region of the average potential becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations in this region necessitate a calculation of the fermion determinant in a spin wave background. We also compute the fermionic contribution to the wave function renormalization in the scalar kinetic term. (orig.)

  16. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec

  17. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that such a chart is not suitable if the control limits derived for independent variables are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the EWMA mean for autocorrelated processes derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
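
    For orientation, the sketch below implements the classical EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1} with its standard time-varying limits, i.e., the chart whose limits the abstract argues mislead when applied unmodified to autocorrelated data. The AR(1) series and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# In-control AR(1) observations (autocorrelated) around target mu0.
phi, mu0, sigma = 0.5, 0.0, 1.0
x = np.zeros(200)
for t in range(1, x.size):
    x[t] = mu0 + phi * (x[t - 1] - mu0) + rng.normal(scale=sigma)

lam, L = 0.2, 3.0                  # smoothing constant and limit width
z = np.zeros_like(x)
z[0] = mu0
for t in range(1, x.size):
    z[t] = lam * x[t] + (1 - lam) * z[t - 1]

# Standard EWMA variance (valid for independent data, hence the caveat).
t_idx = np.arange(x.size)
var_z = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * t_idx))
ucl, lcl = mu0 + L * np.sqrt(var_z), mu0 - L * np.sqrt(var_z)

print("out-of-control signals at t =", np.where((z > ucl) | (z < lcl))[0])
```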

  18. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    Science.gov (United States)

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.

  19. Photodigitizing procedures

    Science.gov (United States)

    Kilgore, P. D.; Gottbrath, J. H.

    1984-02-01

    This report documents procedures and programs for efficiently running the Photo Digitizing System at the Naval Biodynamics Laboratory. Procedures have been tested and have been found to be effective. Any future acquisitions of programs or changes to current programs should be incorporated in these procedures. On-going research programs use high speed instrumentation cameras to record the motion of test subjects during biodynamic experiments. The films are digitized and the 3-dimensional motion is reconstructed and analyzed. Experimental research is performed to determine the effects of aircraft crashes, ship motion, vibration, aircraft ejection and parachute opening forces on the health and performance of Navy personnel.

  20. Vehicle target detection method based on the average optical flow

    Science.gov (United States)

    Ma, J. Y.; Jie, F. R.; Hu, Y. J.

    2017-07-01

    Moving target detection in image sequences of dynamic scenes is an important research topic in the field of computer vision. Block projection and matching are utilized for global motion estimation. Then, the background image is compensated by applying the estimated motion parameters so as to stabilize the image sequence. Consequently, background subtraction is employed in the stabilized image sequence to extract moving targets. Finally, the difference image is divided into uniform grids and the average optical flow is employed for motion analysis. Experimental tests show that the proposed average optical flow method can efficiently extract vehicle targets from dynamic scenes while decreasing the false alarm rate.

  1. Average-passage flow model development

    Science.gov (United States)

    Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark

    1989-01-01

    A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations were executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.

  2. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  3. Cosmic inhomogeneities and averaged cosmological dynamics.

    Science.gov (United States)

    Paranjape, Aseem; Singh, T P

    2008-10-31

    If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.

  4. Oculoplastic procedures

    Science.gov (United States)

    ... procedures may be done on the: eyelids, eye sockets, eyebrows, cheeks, tear ducts, face or forehead. These ... eyes. These lenses help protect your eyes and shield them from the bright lights of the surgical ...

  5. Monthly snow/ice averages (ISCCP)

    Data.gov (United States)

    National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...

  6. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  7. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  8. Schedule of average annual equipment ownership expense

    Science.gov (United States)

    2003-03-06

    The "Schedule of Average Annual Equipment Ownership Expense" is designed for use on Force Account bills of Contractors performing work for the Illinois Department of Transportation and local government agencies who choose to adopt these rates. This s...

  9. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, no. 304 (2006), pp. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords: tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  10. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat

  11. Aplikasi Moving Average Filter Pada Teknologi Enkripsi

    OpenAIRE

    Hermawi, Adrianto

    2007-01-01

    A method of encrypting and decrypting is introduced. The type of information experimented on is a mono sound file with a frequency of 44 kHz. The encryption technology uses a regular noise wave sound file (with equal frequency) and a moving average filter to decrypt and obtain the original signal. All experiments are programmed using MATLAB. By the end of the experiment the author concludes that the moving average filter can indeed be used as an alternative encryption technology.
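
    The paper's exact scheme is not reproduced here; the sketch below (in Python rather than the author's MATLAB) illustrates only the core ingredient, recovering a tone from an additive noise mixture with a moving average filter, since zero-mean noise averages out over the window while a low-frequency tone passes largely unchanged. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

fs = 44100                                    # sample rate (Hz)
t = np.arange(fs) / fs                        # one second of audio
signal = np.sin(2 * np.pi * 440 * t)          # original mono "message" tone
noise = rng.uniform(-1.0, 1.0, size=t.size)   # masking noise wave
mixture = signal + noise                      # transmitted mixture

k = 15                                        # moving average window length
recovered = np.convolve(mixture, np.ones(k) / k, mode="same")

rmse = np.sqrt(np.mean((recovered - signal) ** 2))
print(f"RMSE of recovered tone: {rmse:.3f} (noise alone: {noise.std():.3f})")
```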

  12. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
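
    The abstract does not spell out the iteration, so the following is only a plausible sketch of a weight-proportional assignment with redistribution of unused capacity; the function name, flows, weights and demands are all hypothetical.

```python
def wfq_average_bandwidth(capacity, weights, demands):
    """Iteratively split capacity: each unsatisfied flow gets a share in
    proportion to its weight; capacity unused by satisfied flows is
    redistributed among the flows that still have demand."""
    alloc = {f: 0.0 for f in weights}
    active, remaining = set(weights), capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[f] for f in active)
        leftover = 0.0
        for f in list(active):
            share = remaining * weights[f] / total_w
            need = demands[f] - alloc[f]
            if need <= share:          # demand met; return surplus
                alloc[f] += need
                leftover += share - need
                active.remove(f)
            else:                       # still hungry; take full share
                alloc[f] += share
        remaining = leftover
    return alloc

print(wfq_average_bandwidth(
    capacity=100.0,
    weights={"voice": 4, "video": 3, "data": 1},
    demands={"voice": 20.0, "video": 60.0, "data": 50.0}))
```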

  13. Applied Macroeconometrics

    OpenAIRE

    Nektarios Aslanidis

    2017-01-01

    This book treats econometric methods for applied econometric analysis with a particular focus on applications in macroeconomics. Topics include macroeconomic data, panel data models, unobserved heterogeneity, model comparison, endogeneity, dynamic econometric models, vector autoregressions, forecast evaluation, and structural identification. The book provides undergraduate students with the necessary knowledge to be able to undertake econometric analysis in modern macroeconomic research.

  14. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

  15. Averaging approximation to singularly perturbed nonlinear stochastic wave equations

    Science.gov (United States)

    Lv, Yan; Roberts, A. J.

    2012-06-01

    An averaging method is applied to derive an effective approximation to a singularly perturbed nonlinear stochastic damped wave equation. A small parameter ν > 0 characterizes the singular perturbation, and ν^α, 0 ⩽ α ⩽ 1/2, parametrizes the strength of the noise. Some scaling transformations and the martingale representation theorem yield the effective approximation, a stochastic nonlinear heat equation, for small ν in the sense of distribution.

  16. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Alternatively, for time stationary and homogeneous turbulence, analytical expressions involving higher-order correlation functions R(n)(r, t) can be derived for the conditional averages. These expressions have the form of series expansions, which have to be truncated for practical applications. The convergence properties of these series are not known, except in the limit of Gaussian statistics. By applying the analysis to numerically simulated ion acoustic turbulence, we demonstrate that by keeping two or three terms in these series an acceptable approximation...

  17. A fiber orientation-adapted integration scheme for computing the hyperelastic Tucker average for short fiber reinforced composites

    Science.gov (United States)

    Goldberg, Niels; Ospald, Felix; Schneider, Matti

    2017-10-01

    In this article we introduce a fiber orientation-adapted integration scheme for Tucker's orientation averaging procedure applied to non-linear material laws, based on angular central Gaussian fiber orientation distributions. This method is stable w.r.t. fiber orientations degenerating into planar states and enables the construction of orthotropic hyperelastic energies for truly orthotropic fiber orientation states. We establish a reference scenario for fitting the Tucker average of a transversely isotropic hyperelastic energy, corresponding to a uni-directional fiber orientation, to microstructural simulations, obtained by FFT-based computational homogenization of neo-Hookean constituents. We carefully discuss ideas for accelerating the identification process, leading to a tremendous speed-up compared to a naive approach. The resulting hyperelastic material map turns out to be surprisingly accurate, simple to integrate in commercial finite element codes and fast in its execution. We demonstrate the capabilities of the extracted model by a finite element analysis of a fiber reinforced chain link.

  18. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  19. Applied optics

    International Nuclear Information System (INIS)

    Orszag, A.; Antonetti, A.

    1988-01-01

    The 1988 progress report of the Applied Optics laboratory of the Polytechnic School (France) is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The following domains are included in the research program: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed [fr]

  20. Microchannel heatsinks for high-average-power laser diode arrays

    Science.gov (United States)

    Benett, William J.; Freitas, Barry L.; Beach, Raymond J.; Ciarlo, Dino R.; Sperry, Verry; Comaskey, Brian J.; Emanuel, Mark A.; Solarz, Richard W.; Mundinger, David C.

    1992-06-01

    Detailed performance results and fabrication techniques for an efficient and low thermal impedance laser diode array heatsink are presented. High duty factor or even CW operation of fully filled laser diode arrays is enabled at high average power. Low thermal impedance is achieved using a liquid coolant and laminar flow through microchannels. The microchannels are fabricated in silicon using a photolithographic pattern definition procedure followed by anisotropic chemical etching. A modular rack-and-stack architecture is adopted for the heatsink design allowing arbitrarily large two-dimensional arrays to be fabricated and easily maintained. The excellent thermal control of the microchannel cooled heatsinks is ideally suited to pump array requirements for high average power crystalline lasers because of the stringent temperature demands that result from coupling the diode light to several nanometers wide absorption features characteristic of lasing ions in crystals.

  1. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  2. Backus and Wyllie Averages for Seismic Attenuation

    Science.gov (United States)

    Qadrouh, Ayman N.; Carcione, José M.; Ba, Jing; Gei, Davide; Salim, Ahmed M.

    2018-01-01

    Backus and Wyllie equations are used to obtain average seismic velocities at zero and infinite frequencies, respectively. Here, these equations are generalized to obtain averages of the seismic quality factor (inversely proportional to attenuation). The results indicate that the Wyllie velocity is higher than the corresponding Backus quantity, as expected, since the ray velocity is a high-frequency limit. On the other hand, the Wyllie quality factor is higher than the Backus one, following the velocity trend, i.e., the higher the velocity (the stiffer the medium), the higher the attenuation. Since the quality factor can be related to properties such as porosity, permeability, and fluid viscosity, these averages can be useful for evaluating reservoir properties.
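
    The generalization to quality factors is the paper's contribution and is not reproduced here; the sketch below computes only the two classical velocity averages it starts from, for a hypothetical two-layer medium: the Backus (zero-frequency) average from the harmonic mean of the layer moduli, and the Wyllie (ray, infinite-frequency) velocity from the time average of slowness.

```python
import numpy as np

# Two-layer periodic medium: thickness fractions, P velocities, densities.
frac = np.array([0.5, 0.5])
v    = np.array([2000.0, 4000.0])      # m/s
rho  = np.array([2100.0, 2500.0])      # kg/m^3

M = rho * v**2                         # P-wave moduli of the layers

# Backus (zero-frequency): harmonic mean of moduli, arithmetic density.
v_backus = np.sqrt(1.0 / np.sum(frac / M) / np.sum(frac * rho))

# Wyllie (ray) average: time average of slowness.
v_wyllie = 1.0 / np.sum(frac / v)

print(f"Backus velocity: {v_backus:.0f} m/s")
print(f"Wyllie velocity: {v_wyllie:.0f} m/s  (the higher of the two)")
```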

  3. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  4. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression that gives the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.

  5. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  6. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate inference...

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  8. ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    A. A. Listvin

    2017-01-01

    ...ways of resolving them and options for a genuine modernization of the secondary professional education (SPE) system that meets the requirements of the economy. The inefficiency of the single-level SPE concept, and its lack of competitiveness against the background of the applied bachelor's degree developing in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and effectively functioning regional systems of professional education. Such a system would help eliminate disproportions in the triad «a worker – a technician – an engineer» and would raise the quality of professional education. Furthermore, the need for polyprofessional education is indicated, which requires integrated educational structures that differ in the degree to which multi-level educational institutions are consolidated on the basis of network interaction, convergence and integration. According to the author, two types of SPE organizations should be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations implementing educational programs of applied qualifications for specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of the territorial subjects. Practical significance. The results of the research can be useful to education administrators, heads and pedagogical staff of SPE institutions, as well as representatives of regional administrations and employers in organizing a multilevel network system for training skilled workers and mid-level specialists.

  9. Environmental procedures

    International Nuclear Information System (INIS)

    1992-01-01

    The European Bank has pledged in its Agreement to place environmental management at the forefront of its operations to promote sustainable economic development in central and eastern Europe. The Bank's environmental policy is set out in the document titled Environmental Management: The Bank's Policy Approach. This document, Environmental Procedures, presents the procedures which the European Bank has adopted to implement this policy approach with respect to its operations. The environmental procedures aim to: ensure that throughout the project approval process, those in positions of responsibility for approving projects are aware of the environmental implications of the project, and can take these into account when making decisions; avoid potential liabilities that could undermine the success of a project for its sponsors and the Bank; ensure that environmental costs are estimated along with other costs and liabilities; and identify opportunities for environmental enhancement associated with projects. The review of environmental aspects of projects is conducted by many Bank staff members throughout the project's life. This document defines the responsibilities of the people and groups involved in implementing the environmental procedures. Annexes contain Environmental Management: The Bank's Policy Approach, examples of environmental documentation for the project file, and other ancillary information.

  10. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives

  11. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  12. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating Universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating ...

  13. Average Transverse Momentum Quantities Approaching the Lightfront

    NARCIS (Netherlands)

    Boer, Daniel

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the p (T) broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large

  14. Averages of operators in finite Fermion systems

    International Nuclear Information System (INIS)

    Ginocchio, J.N.

    1980-01-01

    The important ingredients in the spectral analysis of Fermion systems are the averages of operators. In this paper we shall derive expressions for averages of operators in truncated Fermion spaces in terms of the minimal information needed about the operator. If we take the operator to be powers of the Hamiltonian, we can then study the conditions on a Hamiltonian for its eigenvalues in the truncated space to be Gaussian distributed. The theory of scalar traces is reviewed, along with the dependence on nucleon number and single-particle states. These results are used to show that a dilute non-interacting system will have Gaussian distributed eigenvalues, i.e., its cumulants will tend to zero, for a large number of Fermions. The dominant terms in the cumulants of a dilute interacting Fermion system are derived. In this case the cumulants depend crucially on the interaction even for a large number of Fermions. Configuration averaging is briefly discussed. Finally, comments are made on averaging for a fixed number of Fermions and angular momentum.

  15. Full averaging of fuzzy impulsive differential inclusions

    Directory of Open Access Journals (Sweden)

    Natalia V. Skripnik

    2010-09-01

    Full Text Available In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007, for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009, and for fuzzy differential inclusions (Skripnik, 2009.

  16. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  17. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  18. High Average Power Optical FEL Amplifiers

    CERN Document Server

    Ben-Zvi, I; Litvinenko, V

    2005-01-01

    Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...

  19. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  1. Factoring humans into procedures

    International Nuclear Information System (INIS)

    Luna, S.F.; Sturdivant, M.H.; McKay, R.C.

    1988-01-01

    INPO statistics on reported events in nuclear power plants rank deficient procedures as the largest single cause of human performance errors. Human factors principles used to improve the effectiveness of the control room operator can also improve the usability of written procedures. This human factors approach treats each page or complement of pages as a display. Four techniques for applying this approach are reviewed in this paper: (1) presenting information in small blocks (or fields), (2) presenting information consistently, (3) using the mental templates of the performer, and (4) matching the physical features of the plant. A final section offers examples in which combinations of these techniques are used

  2. 40 CFR 600.510-86 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...

  3. 40 CFR 600.510-93 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...

  4. 40 CFR 600.510-08 - Calculation of average fuel economy.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Calculation of average fuel economy...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy... Trucks and Passenger Automobiles)-Procedures for Determining Manufacturer's Average Fuel Economy § 600...
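
    All three versions of § 600.510 above compute a manufacturer's average fuel economy as, in essence, a production-weighted harmonic mean over model types rather than an arithmetic mean of mpg values. A minimal sketch of that calculation (illustrative figures, not the regulatory worked example):

```python
def average_fuel_economy(volumes, mpgs):
    """Production-weighted harmonic mean, as used for fleet average fuel
    economy: total production divided by the sum of volume/mpg terms."""
    return sum(volumes) / sum(v / m for v, m in zip(volumes, mpgs))

# hypothetical fleet: 200,000 cars at 30 mpg and 100,000 at 22 mpg
print(average_fuel_economy([200_000, 100_000], [30.0, 22.0]))  # ~26.8 mpg
```

    The harmonic mean weights low-mpg vehicles more heavily than an arithmetic mean would, which is why it is the form used for fuel-consumption-based fleet averages.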

  5. Enhancing Trust in the Smart Grid by Applying a Modified Exponentially Weighted Averages Algorithm

    Science.gov (United States)

    2012-06-01

    [Only fragments of this record survive extraction, mixed with reference-list debris; the recoverable text cites SCADA security incidents, including an attack on a sewage control system in Australia that released more than 200,000 gallons of sewage into parks, rivers and the grounds of a Hyatt hotel [25].]
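
    The technique named in the title is nonetheless easy to sketch. A hypothetical illustration (not the thesis' algorithm): an exponentially weighted moving average used as a trust score for smart-grid nodes, with the "modification" assumed here to be an asymmetric weight that makes trust fall faster than it rises.

```python
def update_trust(trust, observation, alpha=0.1, penalty=3.0):
    """EWMA trust update. 'observation' is in [0, 1] (1 = consistent,
    well-behaved reading). Bad news is weighted 'penalty' times more
    heavily than good news, so trust builds slowly and drops quickly --
    one plausible reading of a 'modified' EWMA (an assumption here)."""
    a = alpha * penalty if observation < trust else alpha
    return (1.0 - a) * trust + a * observation

trust = 0.5
for obs in [1.0, 1.0, 1.0, 0.0, 1.0]:   # one bad report in a good stream
    trust = update_trust(trust, obs)
    print(round(trust, 3))
```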

  6. Behavioral Implications of Shortlisting Procedures

    OpenAIRE

    Christopher J. Tyson

    2012-01-01

    We consider two-stage "shortlisting procedures" in which the menu of alternatives is first pruned by some process or criterion and then a binary relation is maximized. Given a particular first-stage process, our main result supplies a necessary and sufficient condition for choice data to be consistent with a procedure in the designated class. This result applies to any class of procedures with a certain lattice structure, including the cases of "consideration filters," "satisficing with salie...

  7. Proof Rules for Recursive Procedures

    NARCIS (Netherlands)

    Hesselink, Wim H.

    1993-01-01

    Four proof rules for recursive procedures in a Pascal-like language are presented. The main rule deals with total correctness and is based on results of Gries and Martin. The rule is easier to apply than Martin's. It is introduced as an extension of a specification format for Pascal-procedures, with

  8. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics Center (Polytechnic School, France) is presented. The Center's research fields are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and the development of image synthesis techniques for education and research programs. The published papers, the conference communications and the theses are listed [fr]

  9. Radiochemical procedures

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1982-01-01

    Modern counting instrumentation has largely obviated the need for separation processes in radiochemical analysis, but problems in low-level radioactivity measurement, environmental-type analyses, and special situations have in recent years caused a renaissance of the need for separation techniques. Most radiochemical procedures, based on the classic works of the Manhattan Project chemists of the 1940s, were published in the National Nuclear Energy Series (NNES). Improvements such as new solvent extraction and ion exchange separations have been added to these methods throughout the years. Recently the Los Alamos Group has reissued its collected Radiochemical Procedures, containing a short summary and review of basic inorganic chemistry - 'Chemistry of the Elements on the Basis of Electronic Configuration'. (A.L.)

  10. THE ASSESSMENT OF CORPORATE BONDS ON THE BASIS OF THE WEIGHTED AVERAGE

    Directory of Open Access Journals (Sweden)

    Victor V. Prokhorov

    2014-01-01

    Full Text Available The article considers the problem of assessing the interest rate of a public corporate bond issue. The subject of the research is techniques for evaluating the interest rates of corporate bonds. The article addresses the task of developing a methodology for assessing the market interest rate of a corporate bond loan that takes into account both systematic and specific risks. A technique for evaluating the market interest rate of corporate bonds on the basis of weighted averages is proposed. This procedure uses in its calculation a cumulative barrier interest rate, a sectoral weighted average interest rate, and an interest rate determined on the basis of the CAPM (Capital Asset Pricing Model). The results indicate that the proposed methodology can be applied to assessing the market interest rate of a public corporate bond issue under Russian conditions. The results may be applicable to Russian industrial enterprises organizing public bond issues, as well as to investment companies acting as organizers of corporate bond loans and other organizations specializing in investments in Russian public corporate bond loans.
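
    A minimal numeric sketch of the weighted-average construction described above; the component rates and weights are hypothetical, and the CAPM leg uses the textbook formula r = r_f + beta * (r_m - r_f).

```python
def capm_rate(r_f, beta, r_m):
    """Textbook CAPM required return: r_f + beta * (r_m - r_f)."""
    return r_f + beta * (r_m - r_f)

def bond_rate(barrier, sector_avg, capm, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three component rates described in the
    abstract; the weights here are illustrative assumptions."""
    w1, w2, w3 = weights
    return w1 * barrier + w2 * sector_avg + w3 * capm

r = bond_rate(barrier=0.09, sector_avg=0.11, capm=capm_rate(0.06, 1.2, 0.12))
print(f"market interest rate estimate: {r:.2%}")   # ~10.9%
```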

  11. Averaging theorems in finite deformation plasticity

    CERN Document Server

    Nemat-Nasser, S C

    1999-01-01

    The transition from micro- to macro-variables of a representative volume element (RVE) of a finitely deformed aggregate (e.g., a composite or a polycrystal) is explored. A number of exact fundamental results on averaging techniques, valid at finite deformations and rotations of any arbitrary heterogeneous continuum, are obtained. These results depend on the choice of suitable kinematic and dynamic variables. For finite deformations, the deformation gradient and its rate, and the nominal stress and its rate, are optimally suited for the averaging purposes. A set of exact identities is presented in terms of these variables. An exact method for homogenization of an ellipsoidal inclusion in an unbounded finitely deformed homogeneous solid is presented, generalizing Eshelby's method for application to finite deformation problems. In terms of the nominal stress rate and the rate of change of the deformation gradient, measured relative to any arbitrary state, a general phase-transformation problem is con...

  12. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the system of relationships and interdependences between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the factors underlying average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity across the factors affecting it is conducted by means of the u-substitution method.

  13. Average Annual Rainfall over the Globe

    Science.gov (United States)

    Agrawal, D. C.

    2013-01-01

    The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
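
    The truncated figure is the standard solar-interception value, about 1.74 × 10^17 W. A back-of-envelope sketch of how this budget implies a global average rainfall on the order of one metre per year (the evaporation fraction is an assumed round number; the other constants are standard):

```python
# Back-of-envelope: global average annual rainfall from the solar budget.
# Assumption (not from the abstract): ~23% of the intercepted solar power
# is spent evaporating water (the latent heat flux).
P_sun   = 1.74e17     # W, solar power intercepted by Earth
f_evap  = 0.23        # fraction driving evaporation (assumed)
L_vap   = 2.45e6      # J/kg, latent heat of vaporization near 20 C
A_earth = 5.1e14      # m^2, surface area of Earth
year    = 3.156e7     # s

mass_evaporated = f_evap * P_sun * year / L_vap      # kg of water per year
depth = mass_evaporated / (1000.0 * A_earth)         # m/yr (rho = 1000 kg/m^3)
print(f"global average rainfall ~ {depth:.2f} m per year")   # ~1 m
```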

  14. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS.

    Energy Technology Data Exchange (ETDEWEB)

    BEN-ZVI, ILAN, DAYRAN, D.; LITVINENKO, V.

    2005-08-21

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  15. Technological progress and average job matching quality

    OpenAIRE

    Centeno, Mário; Corrêa, Márcio V.

    2009-01-01

    Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is initially characterized entirely by low-quality job matches, an increase in technological progress is accompanied by ...

  16. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  17. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  18. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer's disregard of marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  19. Implementation and application of moving average as continuous analytical quality control instrument demonstrated for 24 routine chemistry assays.

    Science.gov (United States)

    Rossum, Huub H van; Kemperman, Hans

    2017-07-26

    General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize the MA for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; the causes were ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, a manageable number of MA alarms was generated, and the alarms that required follow-up proved valuable. For the management of MA alarms, several applications/requirements in the MA management software would simplify the use of MA procedures.
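
    A minimal sketch of an MA procedure of this kind (all limits here are assumptions; in the study they are optimized per assay with bias-detection simulations):

```python
import numpy as np

def moving_average_qc(results, window=20, low=130.0, high=150.0,
                      alarm_low=136.0, alarm_high=142.0):
    """Continuous QC on a stream of patient results (e.g. sodium, mmol/L).
    Results outside [low, high] are excluded (truncation limits); an alarm
    fires when the MA leaves [alarm_low, alarm_high]. All limits here are
    illustrative assumptions."""
    buf, alarms = [], []
    for i, x in enumerate(results):
        if low <= x <= high:
            buf.append(x)
        if len(buf) >= window:
            ma = float(np.mean(buf[-window:]))
            if not (alarm_low <= ma <= alarm_high):
                alarms.append((i, round(ma, 2)))
    return alarms

rng = np.random.default_rng(1)
stream = rng.normal(139, 2.5, 300)
stream[150:] += 3.0                    # simulate an assay bias of +3 mmol/L
print(moving_average_qc(stream)[:3])   # first alarms after the bias starts
```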

  20. Applying radiation

    International Nuclear Information System (INIS)

    Mallozzi, P.J.; Epstein, H.M.; Jung, R.G.; Applebaum, D.C.; Fairand, B.P.; Gallagher, W.J.; Uecker, R.L.; Muckerheide, M.C.

    1979-01-01

    The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction

  1. Kumaraswamy autoregressive moving average models for double bounded environmental data

    Science.gov (United States)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double bounded interval (a,b), following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
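
    A rough generative sketch of a KARMA-type recursion under simplifying assumptions (logit link for the median, one AR and one MA term, the second Kumaraswamy shape parameter held fixed); the paper's exact specification may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def kumaraswamy_sample(median, q=5.0):
    """Draw from a Kumaraswamy(p, q) on (0, 1) with the given median, via
    inverse-CDF sampling; p is solved from median = (1 - 2**(-1/q))**(1/p)."""
    p = np.log(1.0 - 2.0 ** (-1.0 / q)) / np.log(median)
    u = rng.uniform()
    return (1.0 - (1.0 - u) ** (1.0 / q)) ** (1.0 / p)

def simulate_karma(n=200, alpha=0.0, phi=0.6, theta=0.3):
    """KARMA(1,1)-style recursion on the logit of the median
    (a sketch, not the authors' exact model)."""
    logit = lambda m: np.log(m / (1.0 - m))
    y, e = [0.5], [0.0]
    for t in range(1, n):
        eta = alpha + phi * logit(y[t - 1]) + theta * e[t - 1]
        median = 1.0 / (1.0 + np.exp(-eta))        # inverse logit link
        y_t = kumaraswamy_sample(median)
        y.append(y_t)
        e.append(logit(y_t) - eta)                 # innovation on link scale
    return np.array(y)

series = simulate_karma()
print(series.min(), series.max())   # stays inside (0, 1) by construction
```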

  2. Applying industrial engineering practices to radiology.

    Science.gov (United States)

    Rosen, Len

    2004-01-01

    Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging department and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first implemented industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than 500,000 dollars of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than 140,000 dollars. The medical imaging department in this hospital is only now beginning to apply what it has learned to

  3. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements.

    Science.gov (United States)

    Hourdakis, C J

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, Ū(P), the average, Ū, the effective, U(eff), or the maximum peak, U(P), tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average, Ū, or the average peak, Ū(P), voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak k(PPV,kVp) and the average k(PPV,Uav) conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from Ū(P) and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.

  4. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    A new time-averaging circuit design, the 'parallel filter', is presented here; it can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter as a result. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  5. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  6. Average local ionization energy: A review.

    Science.gov (United States)

    Politzer, Peter; Murray, Jane S; Bulat, Felipe A

    2010-11-01

    The average local ionization energy I(r) is the energy necessary to remove an electron from the point r in the space of a system. Its lowest values reveal the locations of the least tightly-held electrons, and thus the favored sites for reaction with electrophiles or radicals. In this paper, we review the definition of I(r) and some of its key properties. Apart from its relevance to reactive behavior, I(r) has an important role in several fundamental areas, including atomic shell structure, electronegativity and local polarizability and hardness. All of these aspects of I(r) are discussed.

  7. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
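
    The statistic itself is straightforward to compute from a single trajectory; a minimal sketch for simulated Brownian motion (the paper's contribution is the analytical distribution of this quantity):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a 1-D trajectory x at a
    given lag: the sliding-window average of (x[t+lag] - x[t])**2."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(7)
dt, n, D = 0.01, 10_000, 1.0
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), n))  # Brownian motion

for lag in (1, 10, 100):
    est_D = tamsd(x, lag) / (2 * lag * dt)   # TAMSD ~ 2*D*t for BM
    print(lag, round(est_D, 3))              # each estimate ~1.0
```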

  8. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by means of the statistical model, in systems with equal and with different numbers of protons and neutrons, treated separately, with the Coulomb energy considered in the latter case. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exerted by the Coulomb energy and nuclear compressibility was verified. For a good fit to the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt]

  9. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  10. The Effects of Cooperative Learning and Learner Control on High- and Average-Ability Students.

    Science.gov (United States)

    Hooper, Simon; And Others

    1993-01-01

    Describes a study that examined the effects of cooperative versus individual computer-based instruction on the performance of high- and average-ability fourth-grade students. Effects of learner and program control are investigated; student attitudes toward instructional content, learning in groups, and partners are discussed; and further research…

  11. Three-dimensional average-shape atlas of the honeybee brain and its applications.

    Science.gov (United States)

    Brandt, Robert; Rohlfing, Torsten; Rybak, Jürgen; Krofczik, Sabine; Maye, Alexander; Westerhoff, Malte; Hege, Hans-Christian; Menzel, Randolf

    2005-11-07

    The anatomical substrates of neural nets are usually composed from reconstructions of neurons that were stained in different preparations. Realistic models of the structural relationships between neurons require a common framework. Here we present 3-D reconstructions of single projection neurons (PN) connecting the antennal lobe (AL) with the mushroom body (MB) and lateral horn, groups of intrinsic mushroom body neurons (type 5 Kenyon cells), and a single mushroom body extrinsic neuron (PE1), aiming to compose components of the olfactory pathway in the honeybee. To do so, we constructed a digital standard atlas of the bee brain. The standard atlas was created as an average-shape atlas of 22 neuropils, calculated from 20 individual immunostained whole-mount bee brains. After correction for global size and positioning differences by repeatedly applying an intensity-based nonrigid registration algorithm, a sequence of average label images was created. The results were qualitatively evaluated by generating average gray-value images corresponding to the average label images and judging the level of detail within the labeled regions. We found that the first affine registration step in the sequence results in a blurred image because of considerable local shape differences. However, already the first nonrigid iteration in the sequence corrected for most of the shape differences among individuals, resulting in images rich in internal detail. A second iteration improved on that somewhat and was selected as the standard. Registering neurons from different preparations into the standard atlas reveals 1) that the m-ACT neuron occupies the entire glomerulus (cortex and core) and overlaps with a local interneuron in the cortical layer; 2) that, in the MB calyces and the lateral horn of the protocerebral lobe, the axon terminals of two identified m-ACT neurons arborize in separate but close areas of the neuropil; and 3) that MB-intrinsic clawed Kenyon cells (type 5), with somata

  12. Group averaging for de Sitter free fields

    Energy Technology Data Exchange (ETDEWEB)

    Marolf, Donald; Morrison, Ian A, E-mail: marolf@physics.ucsb.ed, E-mail: ian_morrison@physics.ucsb.ed [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2009-12-07

    Perturbative gravity about global de Sitter space is subject to linearization-stability constraints. Such constraints imply that quantum states of matter fields couple consistently to gravity only if the matter state has vanishing de Sitter charges, i.e. only if the state is invariant under the symmetries of de Sitter space. As noted by Higuchi, the usual Fock spaces for matter fields contain no de Sitter-invariant states except the vacuum, though a new Hilbert space of de Sitter-invariant states can be constructed via so-called group-averaging techniques. We study this construction for free scalar fields of arbitrary positive mass in any dimension, and for linear vector and tensor gauge fields in any dimension. Our main result is to show in each case that group averaging converges for states containing a sufficient number of particles. We consider general N-particle states with smooth wavefunctions, though we obtain somewhat stronger results when the wavefunctions are finite linear combinations of de Sitter harmonics. Along the way we obtain explicit expressions for general boost matrix elements in a familiar basis.

  13. Global atmospheric circulation statistics: Four year averages

    Science.gov (United States)

    Wu, M. F.; Geller, M. A.; Nash, E. R.; Gelman, M. E.

    1987-01-01

    Four year averages of the monthly mean global structure of the general circulation of the atmosphere are presented in the form of latitude-altitude, time-altitude, and time-latitude cross sections. The numerical values are given in tables. Basic parameters utilized include daily global maps of temperature and geopotential height for 18 pressure levels between 1000 and 0.4 mb for the period December 1, 1978 through November 30, 1982 supplied by NOAA/NMC. Geopotential heights and geostrophic winds are constructed using hydrostatic and geostrophic formulae. Meridional and vertical velocities are calculated using thermodynamic and continuity equations. Fields presented in this report are zonally averaged temperature, zonal, meridional, and vertical winds, and amplitude of the planetary waves in geopotential height with zonal wave numbers 1-3. The northward fluxes of sensible heat and eastward momentum by the standing and transient eddies along with their wavenumber decomposition and Eliassen-Palm flux propagation vectors and divergences by the standing and transient eddies along with their wavenumber decomposition are also given. Large interhemispheric differences and year-to-year variations are found to originate in the changes in the planetary wave activity.

  14. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of autoregressive (AR) models to the estimation of noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and a method for its solution is given. Numerical results show the applications of the proposed method. (author)
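
    The idea can be sketched with an illustrative "modified Yule-Walker" scheme (not necessarily the paper's exact system): for an ARMA(p, q) process the Yule-Walker recursion holds at lags greater than q, so the AR coefficients can be estimated from sample autocovariances at lags q+1, ..., q+p.

```python
import numpy as np

def autocov(x, max_lag):
    """Biased sample autocovariances gamma(0..max_lag)."""
    x = x - x.mean()
    n = len(x)
    return np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])

def modified_yule_walker(x, p, q):
    """Estimate the AR part of an ARMA(p, q) from the Yule-Walker relations
    at lags q+1 .. q+p, where the MA part no longer enters:
    gamma(k) = sum_j phi_j * gamma(k - j) for k > q."""
    g = autocov(x, p + q)
    R = np.array([[g[abs(q + i - j)] for j in range(p)] for i in range(p)])
    rhs = np.array([g[q + 1 + i] for i in range(p)])
    return np.linalg.solve(R, rhs)

# ARMA(1,1) test: y_t = 0.7*y_{t-1} + e_t + 0.4*e_{t-1}
rng = np.random.default_rng(3)
e = rng.normal(size=20_000)
y = np.zeros_like(e)
for t in range(1, len(e)):
    y[t] = 0.7 * y[t - 1] + e[t] + 0.4 * e[t - 1]
print(modified_yule_walker(y, p=1, q=1))   # approximately [0.7]
```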

  15. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  16. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in images is conducted by applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k is implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimates of the noisy pixels are obtained by local averaging. The essential...
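
    A minimal sketch of such a filter; the 1.5 × IQR fence and the inlier-average fallback are assumptions layered on top of the abstract.

```python
import numpy as np

def iqr_denoise(img, k=3):
    """Replace pixels lying outside the local IQR fence with the local
    average of the remaining (inlier) pixels in the k x k window."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if not (lo <= out[i, j] <= hi):          # outlier -> noisy pixel
                inliers = win[(win >= lo) & (win <= hi)]
                out[i, j] = inliers.mean() if inliers.size else win.mean()
    return out

rng = np.random.default_rng(5)
img = np.full((64, 64), 120.0)
img[rng.random(img.shape) < 0.05] = 255.0             # impulse (salt) noise
print(np.abs(iqr_denoise(img) - 120.0).max())         # ~0 after filtering
```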

  17. Edgeworth expansion for the pre-averaging estimator

    DEFF Research Database (Denmark)

    Podolskij, Mark; Veliyev, Bezirgen; Yoshida, Nakahiro

    In this paper, we study the Edgeworth expansion for a pre-averaging estimator of quadratic variation in the framework of continuous diffusion models observed with noise. More specifically, we obtain a second order expansion for the joint density of the estimators of quadratic variation and its asymptotic variance. Our approach is based on martingale embedding, Malliavin calculus and stable central limit theorems for continuous diffusions. Moreover, we derive the density expansion for the studentized statistic, which might be applied to construct asymptotic confidence regions.

  18. Light-cone averages in a Swiss-cheese universe

    International Nuclear Information System (INIS)

    Marra, Valerio; Kolb, Edward W.; Matarrese, Sabino

    2008-01-01

    We analyze a toy Swiss-cheese cosmological model to study the averaging problem. In our Swiss-cheese model, the cheese is a spatially flat, matter only, Friedmann-Robertson-Walker solution (i.e., the Einstein-de Sitter model), and the holes are constructed from a Lemaitre-Tolman-Bondi solution of Einstein's equations. We study the propagation of photons in the Swiss-cheese model, and find a phenomenological homogeneous model to describe observables. Following a fitting procedure based on light-cone averages, we find that the expansion scalar is unaffected by the inhomogeneities (i.e., the phenomenological homogeneous model is the cheese model). This is because of the spherical symmetry of the model; it is unclear whether the expansion scalar will be affected by nonspherical voids. However, the light-cone average of the density as a function of redshift is affected by inhomogeneities. The effect arises because, as the universe evolves, a photon spends more and more time in the (large) voids than in the (thin) high-density structures. The phenomenological homogeneous model describing the light-cone average of the density is similar to the ΛCDM concordance model. It is interesting that, although the sole source in the Swiss-cheese model is matter, the phenomenological homogeneous model behaves as if it has a dark-energy component. Finally, we study how the equation of state of the phenomenological homogeneous model depends on the size of the inhomogeneities, and find that the equation-of-state parameters w_0 and w_a follow a power-law dependence with a scaling exponent equal to unity. That is, the equation of state depends linearly on the distance the photon travels through voids. We conclude that, within our toy model, the holes must have a present size of about 250 Mpc to be able to mimic the concordance model

  19. Average resonance parameters evaluation for actinides

    Energy Technology Data Exchange (ETDEWEB)

    Porodzinskij, Yu.V.; Sukhovitskij, E.Sh. [Radiation Physics and Chemistry Problems Inst., Minsk-Sosny (Belarus)

    1997-03-01

    New evaluated ⟨Γ⁰ₙ⟩ and ⟨D⟩ values for ²³⁸U, ²³⁷Np, ²⁴³Cm, ²⁴⁵Cm, ²⁴⁶Cm and ²⁴¹Am nuclei in the resolved resonance region are presented. The applied method, based on the idea that experimental resonance missing results in correlated changes of the distributions of reduced neutron widths and level spacings, is discussed. (author)

  20. Condition monitoring of gearboxes using synchronously averaged electric motor signals

    Science.gov (United States)

    Ottewill, J. R.; Orkisz, M.

    2013-07-01

    Due to their prevalence in rotating machinery, the condition monitoring of gearboxes is extremely important in the minimization of potentially dangerous and expensive failures. Traditionally, gearbox condition monitoring has been conducted using measurements obtained from casing-mounted vibration transducers such as accelerometers. A well-established technique for analyzing such signals is the synchronous signal average, where vibration signals are synchronized to a measured angular position and then averaged from rotation to rotation. Driven, in part, by improvements in control methodologies based upon methods of estimating rotor speed and torque, induction machines are used increasingly in industry to drive rotating machinery. As a result, attempts have been made to diagnose defects using measured terminal currents and voltages. In this paper, the application of the synchronous signal averaging methodology to electric drive signals, by synchronizing stator current signals with a shaft position estimated from current and voltage measurements is proposed. Initially, a test-rig is introduced based on an induction motor driving a two-stage reduction gearbox which is loaded by a DC motor. It is shown that a defect seeded into the gearbox may be located using signals acquired from casing-mounted accelerometers and shaft mounted encoders. Using simple models of an induction motor and a gearbox, it is shown that it should be possible to observe gearbox defects in the measured stator current signal. A robust method of extracting the average speed of a machine from the current frequency spectrum, based on the location of sidebands of the power supply frequency due to rotor eccentricity, is presented. The synchronous signal averaging method is applied to the resulting estimations of rotor position and torsional vibration. Experimental results show that the method is extremely adept at locating gear tooth defects. Further results, considering different loads and different
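
    A minimal sketch of the synchronous signal average itself, assuming the cumulative shaft angle has already been estimated (from an encoder, or, as in the paper, from the stator current spectrum):

```python
import numpy as np

def synchronous_average(signal, angle, samples_per_rev=256):
    """Synchronous signal average: resample 'signal' onto a uniform
    shaft-angle grid and average rotation by rotation. 'angle' is the
    (estimated) cumulative shaft angle in revolutions."""
    n_revs = int(np.floor(angle[-1]))
    grid = np.arange(n_revs * samples_per_rev) / samples_per_rev
    resampled = np.interp(grid, angle, signal)
    return resampled.reshape(n_revs, samples_per_rev).mean(axis=0)

# toy example: a once-per-rev tooth-fault pulse buried in noise,
# with a slightly non-uniform shaft speed
rng = np.random.default_rng(11)
t = np.linspace(0.0, 10.0, 100_000)
angle = 5.0 * t + 0.05 * np.sin(2 * np.pi * 0.3 * t)   # ~5 rev/s, wobbling
pulse = np.exp(-((angle % 1.0) - 0.5) ** 2 / 1e-4)     # fault at 0.5 rev
signal = pulse + rng.normal(0.0, 2.0, t.size)
avg = synchronous_average(signal, angle)
print(avg.argmax() / 256)   # ~0.5: the fault location within a revolution
```

    Averaging in the angle domain rather than the time domain is what makes the method robust to the speed fluctuations that smear ordinary time averages.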

  1. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature in the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of the weather derivatives market and then use the 62 years of daily historical data to apply a mean-reverting Ornstein-Uhlenbeck process describing the evolution of the temperature. Finally, Monte Carlo simulations are used to price a heating degree day (HDD) call option for this city; the slow convergence of the HDD call price is illustrated by taking 100,000 simulations. The methods of the research provide a framework for modeling temperature and pricing weather derivatives in other similar places in China.
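
    A minimal sketch of the pricing pipeline (illustrative parameters, not values fitted to the Zhengzhou data; fewer simulations than the paper's 100,000 for speed):

```python
import numpy as np

rng = np.random.default_rng(2015)

def simulate_temperature(days, kappa=0.2, sigma=3.0):
    """Mean-reverting (Ornstein-Uhlenbeck) daily temperature around a
    seasonal mean; all parameters here are assumptions."""
    seasonal = 15.0 + 12.0 * np.sin(2 * np.pi * (np.arange(days) - 100) / 365)
    T = np.empty(days)
    T[0] = seasonal[0]
    for t in range(1, days):
        T[t] = T[t - 1] + kappa * (seasonal[t] - T[t - 1]) + sigma * rng.normal()
    return T

def hdd_call_price(strike=1100.0, tick=20.0, base=18.0, n_sims=10_000,
                   days=90, r=0.03):
    """Monte Carlo price of an HDD call: payoff = tick * max(HDD - K, 0),
    with HDD accumulated over a 90-day winter period, discounted at r."""
    payoffs = np.empty(n_sims)
    for i in range(n_sims):
        T = simulate_temperature(days)
        hdd = np.maximum(base - T, 0.0).sum()
        payoffs[i] = tick * max(hdd - strike, 0.0)
    return np.exp(-r * days / 365) * payoffs.mean()

print(round(hdd_call_price(), 2))
```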

  2. Atomic configuration average simulations for plasma spectroscopy

    International Nuclear Information System (INIS)

    Kilcrease, D.P.; Abdallah, J. Jr.; Keady, J.J.; Clark, R.E.H.

    1993-01-01

    Configuration average atomic physics based on Hartree-Fock methods and an unresolved transition array (UTA) simulation theory are combined to provide a computationally efficient approach for calculating the spectral properties of plasmas involving complex ions. The UTA theory gives an overall representation for the many lines associated with a radiative transition from one configuration to another without calculating the fine structure in full detail. All of the atomic quantities required for synthesis of the spectrum are calculated in the same approximation and used to generate the parameters required for representation of each UTA, the populations of the various atomic states, and the oscillator strengths. We use this method to simulate the transmission of x-rays through an aluminium plasma. (author)

  3. Angle-averaged Compton cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  4. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation

  5. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovery linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.

  6. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by the accumulation of the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it can preserve both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
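
    The feature itself is a short accumulation; a minimal sketch, with the normalization (a simple mean of adjacent-frame differences) taken as an assumption:

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI: accumulate absolute differences between adjacent binary
    silhouette frames and average over the gait sequence."""
    frames = np.asarray(silhouettes, dtype=float)   # (T, H, W), values 0/1
    diffs = np.abs(frames[1:] - frames[:-1])
    return diffs.mean(axis=0)

# toy sequence: a 1-pixel-wide "leg" sweeping across a 5x5 frame
frames = np.zeros((4, 5, 5))
for t in range(4):
    frames[t, :, t] = 1.0
print(average_gait_differential_image(frames))
```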

  7. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; phi = angle between photon direction and electron direction in the laboratory frame (LF); theta = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and tau = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over phi, theta, and tau. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.

  8. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  9. Comparison of averaging techniques for the calculation of the 'European average exposure indicator' for particulate matter.

    Science.gov (United States)

    Brown, Richard J C; Woods, Peter T

    2012-01-01

    A comparison of various averaging techniques to calculate the Average Exposure Indicator (AEI) specified in European Directive 2008/50/EC for particulate matter in ambient air has been performed. This was done for data from seventeen sites around the UK for which PM10 mass concentration data are available for the years 1998-2000 and 2008-2010 inclusive. The results have shown that use of the geometric mean produces significantly lower AEI values within the required three-year averaging periods, and slightly lower changes in the AEI value between the three-year averaging periods, than the use of the arithmetic mean. The use of weighted means in the calculation, using the data capture at each site as the weighting parameter, has also been tested, and this is proposed as a useful way of taking account of the confidence of each data set.
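
    A minimal sketch of the comparison, with hypothetical annual PM10 site means and data-capture fractions. By the AM-GM inequality the geometric mean cannot exceed the arithmetic mean, consistent with the lower AEI values reported above.

```python
import numpy as np

# Hypothetical annual mean PM10 concentrations (ug/m3) at one site
# over a three-year averaging period, and the data capture per year.
annual_means = np.array([22.4, 19.8, 21.1])
data_capture = np.array([0.97, 0.85, 0.92])   # fraction of valid measurements

arithmetic = annual_means.mean()
geometric = np.exp(np.log(annual_means).mean())            # always <= arithmetic
weighted = np.average(annual_means, weights=data_capture)  # capture-weighted

print(f"arithmetic {arithmetic:.2f}, geometric {geometric:.2f}, weighted {weighted:.2f}")
```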

  10. Quantum gravity unification via transfinite arithmetic and geometrical averaging

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    In E-Infinity theory, we have not only infinitely many dimensions but also infinitely many fundamental forces. However, due to the hierarchical structure of ε(∞) spacetime we have a finite expectation number for its dimensionality and likewise a finite expectation number for the corresponding interactions. Starting from the preceding fundamental principles and using the experimental findings as well as the theoretical values of the coupling constants of the electroweak and the strong forces, we present an extremely simple averaging procedure for determining the quantum gravity unification coupling constant with and without supersymmetry. The work draws heavily on previous results, in particular a paper by the Slovenian Prof. Marek-Crnjac [Marek-Crnjac L. On the unification of all fundamental forces in a fundamentally fuzzy Cantorian ε(∞) manifold and high energy physics. Chaos, Solitons and Fractals 2004;4:657-68].

  11. Technological progress and average job matching quality

    Directory of Open Access Journals (Sweden)

    Mário Centeno

    2009-12-01

    Our objective is to study, in a labor market characterized by search frictions, the effect of technological progress on the average quality of job matches. For that, we use an extension of Mortensen and Pissarides (1998) and find that the effects of technological progress on the labor market depend upon the initial conditions of the economy. If the economy is totally characterized by the presence of low-quality job matches, an increase in technological progress is accompanied by an increase in the quality of jobs. In turn, if the economy is totally characterized by the presence of high-quality job matches, an increase in the technological progress rate implies the reverse effect. Finally, if the economy is totally characterized by the presence of very high-quality jobs, an increase in the technological progress rate implies an increase in the average quality of the job matches.

  12. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore such processes are not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (pulse repetition frequencies in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  13. Vibrationally averaged dipole moments of methane and benzene isotopologues

    Energy Technology Data Exchange (ETDEWEB)

    Arapiraca, A. F. C. [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG (Brazil); Mohallem, J. R., E-mail: rachid@fisica.ufmg.br [Laboratório de Átomos e Moléculas Especiais, Departamento de Física, ICEx, Universidade Federal de Minas Gerais, P. O. Box 702, 30123-970 Belo Horizonte, MG (Brazil)

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C₆H₃D₃ is about twice as large as the measured dipole moment of C₆H₅D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  14. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data for the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination benchmark. The results show that optimally combined densities are comparable to the best individual models.
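
    One plausible reading of the predictive-likelihood weighting is sketched below, assuming each model's log predictive score over a hold-out window is available; the scores are made up, and the normalization is a common choice rather than the paper's exact recipe.

```python
import numpy as np

def combination_weights(log_scores):
    """Weights proportional to each model's predictive likelihood,
    computed from log scores for numerical stability."""
    s = np.asarray(log_scores, dtype=float)
    w = np.exp(s - s.max())    # subtract the max to avoid overflow
    return w / w.sum()

# Hypothetical log predictive scores: BVAR-1, BVAR-2, DSGE, DSGE-VAR
weights = combination_weights([-152.3, -150.9, -154.1, -151.5])
print(np.round(weights, 3))   # the equal-weight benchmark would be [0.25]*4
```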

  15. Dynamic time warping-based averaging framework for functional near-infrared spectroscopy brain imaging studies

    Science.gov (United States)

    Zhu, Li; Najafizadeh, Laleh

    2017-06-01

    We investigate the problem related to the averaging procedure in functional near-infrared spectroscopy (fNIRS) brain imaging studies. Typically, to reduce noise and to empower the signal strength associated with task-induced activities, recorded signals (e.g., in response to repeated stimuli or from a group of individuals) are averaged through a point-by-point conventional averaging technique. However, due to the existence of variable latencies in recorded activities, the use of the conventional averaging technique can lead to inaccuracies and loss of information in the averaged signal, which may result in inaccurate conclusions about the functionality of the brain. To improve the averaging accuracy in the presence of variable latencies, we present an averaging framework that employs dynamic time warping (DTW) to account for the temporal variation in the alignment of fNIRS signals to be averaged. As a proof of concept, we focus on the problem of localizing task-induced active brain regions. The framework is extensively tested on experimental data (obtained from both block design and event-related design experiments) as well as on simulated data. In all cases, it is shown that the DTW-based averaging technique outperforms the conventional-based averaging technique in estimating the location of task-induced active regions in the brain, suggesting that such advanced averaging methods should be employed in fNIRS brain imaging studies.
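
    To make the idea concrete, here is a minimal pure-NumPy sketch: classic dynamic-programming DTW aligns each trial onto a reference timeline before point-by-point averaging. The function names and the absolute-difference cost are illustrative assumptions; the paper's framework for fNIRS signals is more elaborate.

```python
import numpy as np

def dtw_path(x, y):
    """Classic O(len(x)*len(y)) DTW; returns the warping path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                     # backtrack from the corner
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def dtw_average(signals, ref):
    """Warp each signal onto the reference timeline, then average."""
    acc = np.zeros_like(ref, dtype=float)
    for s in signals:
        warped = np.zeros_like(ref, dtype=float)
        counts = np.zeros_like(ref, dtype=float)
        for i, j in dtw_path(ref, s):
            warped[i] += s[j]
            counts[i] += 1.0
        acc += warped / counts                # every i is visited at least once
    return acc / len(signals)

# Trials that are copies of the reference with variable latencies
t = np.linspace(0.0, 1.0, 200)
ref = np.sin(2 * np.pi * t)
trials = [np.sin(2 * np.pi * (t - lag)) for lag in (0.02, -0.03, 0.05)]
print(np.abs(dtw_average(trials, ref) - ref).mean())  # small alignment error
```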

  16. Averaging out magnetic forces with fast rf sweeps in an optical trap for metastable chromium atoms

    Science.gov (United States)

    Beaufils, Q.; Chicireanu, R.; Pouderous, A.; de Souza Melo, W.; Laburthe-Tolra, B.; Maréchal, E.; Vernac, L.; Keller, J. C.; Gorceix, O.

    2008-05-01

    We introduce a time-averaged trap in which the internal state of the atoms is rapidly modulated to modify the magnetic trapping potential. In our experiment, fast radio-frequency linear sweeps flip the spins of atoms at a fast rate, which averages out the magnetic forces. We use this procedure to optimize the accumulation of metastable chromium atoms in an optical dipole trap from a magneto-optical trap. The potential experienced by the metastable atoms is identical to the bare optical dipole potential, so that this procedure allows for trapping all magnetic sublevels, hence increasing by up to 80% the final number of accumulated atoms.

  17. Interpreting Sky-Averaged 21-cm Measurements

    Science.gov (United States)

    Mirocha, Jordan

    2015-01-01

    Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. And second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation

  18. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes: (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR) on the average spectral efficiency (ASE) are explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regime. The coherent OWC system with ORA excels the other modulation schemes and could achieve ASE performance of 49.8 bits/s/Hz at the average transmitted optical power of 6 dBm under strong turbulence. By adding aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation as a favorable candidate for improving the ASE of the FSO communication system.

  19. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    Science.gov (United States)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E

  20. On the Averaging of Cardiac Diffusion Tensor MRI Data: The Effect of Distance Function Selection

    Science.gov (United States)

    Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.

    2016-01-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) Metrics were judged by quantitative, rather than qualitative, criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the "swelling effect" occurrence following Euclidean averaging was found to be too unimportant to be worth consideration. PMID:27754986

  1. On the averaging of cardiac diffusion tensor MRI data: the effect of distance function selection

    Science.gov (United States)

    Giannakidis, Archontis; Melkus, Gerd; Yang, Guang; Gullberg, Grant T.

    2016-11-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) allows a unique insight into the microstructure of highly-directional tissues. The selection of the most proper distance function for the space of diffusion tensors is crucial in enhancing the clinical application of this imaging modality. Both linear and nonlinear metrics have been proposed in the literature over the years. The debate on the most appropriate DT-MRI distance function is still ongoing. In this paper, we presented a framework to compare the Euclidean, affine-invariant Riemannian and log-Euclidean metrics using actual high-resolution DT-MRI rat heart data. We employed temporal averaging at the diffusion tensor level of three consecutive and identically-acquired DT-MRI datasets from each of five rat hearts as a means to rectify the background noise-induced loss of myocyte directional regularity. This procedure is applied here for the first time in the context of tensor distance function selection. When compared with previous studies that used a different concrete application to juxtapose the various DT-MRI distance functions, this work is unique in that it combined the following: (i) metrics were judged by quantitative, rather than qualitative, criteria, (ii) the comparison tools were non-biased, (iii) a longitudinal comparison operation was used on a same-voxel basis. The statistical analyses of the comparison showed that the three DT-MRI distance functions tend to provide equivalent results. Hence, we came to the conclusion that the tensor manifold for cardiac DT-MRI studies is a curved space of almost zero curvature. The signal to noise ratio dependence of the operations was investigated through simulations. Finally, the 'swelling effect' occurrence following Euclidean averaging was found to be too unimportant to be worth consideration.
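
    The metrics under comparison are easy to state in code. The sketch below contrasts, for two hypothetical diagonal tensors, the Euclidean (arithmetic) mean with the log-Euclidean mean computed by eigendecomposition; the determinant inflation of the Euclidean mean is the 'swelling effect' mentioned above.

```python
import numpy as np

def logm_spd(T):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, V = np.linalg.eigh(T)
    return (V * np.log(w)) @ V.T

def expm_sym(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def euclidean_mean(tensors):
    return np.mean(tensors, axis=0)

def log_euclidean_mean(tensors):
    return expm_sym(np.mean([logm_spd(T) for T in tensors], axis=0))

# Two hypothetical diffusion tensors with the same determinant
A = np.diag([3.0, 1.0, 1.0])
B = np.diag([1.0, 3.0, 1.0])
print(np.linalg.det(euclidean_mean([A, B])))      # 4.0 -- volume inflated
print(np.linalg.det(log_euclidean_mean([A, B])))  # 3.0 -- determinant preserved
```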

  2. An Investigation of Wavelet Average Framing LPC for Noisy Speaker Identification Environment

    Directory of Open Access Journals (Sweden)

    Khaled Daqrouq

    2015-01-01

    In this paper, an average framing linear prediction coding (AFLPC) method for a text-independent speaker identification system is studied. AFLPC was proposed in our previous work. Generally, linear prediction coding (LPC) has been used in numerous speech recognition tasks. Here, the AFLPC speaker recognition system is investigated in a noisy environment. In the feature extraction stage, the speaker-specific resonances of the vocal tract were extracted using the AFLPC technique. In the classification phase, a probabilistic neural network (PNN) and a Bayesian classifier (BC) were applied for comparison. In the performed investigation, different wavelet transforms combined with AFLPC were compared with one another. In addition, the capability of the proposed system was examined in comparison with other systems suggested in the literature. Experimental results in a noisy environment show that the PNN classifier performs better with the fusion of wavelets and AFLPC as a feature extraction technique, termed WFALPCF.

  3. Development of an Advanced Flow Meter using the Averaging Bi-directional Flow Tube

    Energy Technology Data Exchange (ETDEWEB)

    Baek, W. P.; Yun, B. J.; Kang, K. H.; Uh, D. J.; Chun, S. Y.; Kim, B. D.; Yun, Y. J.; Sung, S. H.; Song, C. H

    2006-01-15

    An advanced flow meter based on the averaging bi-directional flow tube concept was developed. To characterize the flow meter and derive its measurement theory under single- and two-phase flow conditions, basic tests were performed using flow meters with diameters of 27, 80 and 200 mm. CFD (computational fluid dynamics) calculations were also performed to find the effects of temperature and pressure and to optimize the design of a prototypic flow meter. Following this procedure, prototypical flow meters with diameters of 200 and 500 mm were designed and manufactured. The meter is intended for use in the range where the calibration constant remains unchanged. A stress analysis showed that the proposed flow meter of H-beam shape is inherently strong against the bending force induced by the flow. A flow computer was developed to calculate the flow rate from the measured pressure difference. In this study, a performance test using the prototype flow meter was carried out. The developed flow meter can be applied over a wide range of pressure and temperature. The basic tests showed that the linearity of the proposed flow meter is ±0.5% of full scale and the flow turndown ratio is 1:20 where the Reynolds number is larger than 10,000.
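
    The flow computer's basic task, converting a measured pressure difference into a flow rate, can be sketched with the generic averaging differential-pressure relation below. The calibration constant K, the fluid properties and the geometry are placeholders, not the instrument's actual correlation.

```python
import numpy as np

def volumetric_flow(dp_pa, rho_kg_m3, area_m2, K):
    """Generic signed square-root relation Q = K * A * sqrt(2*|dp|/rho);
    the sign preserves the bi-directional flow information."""
    return np.sign(dp_pa) * K * area_m2 * np.sqrt(2.0 * abs(dp_pa) / rho_kg_m3)

# Hypothetical example: 200 mm pipe, water, 2 kPa differential, K = 0.6
area = np.pi * 0.100**2          # pipe cross-section, m^2
print(volumetric_flow(2000.0, 998.0, area, K=0.6), "m^3/s")
```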

  4. Direct determination approach for the multifractal detrending moving average analysis

    Science.gov (United States)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

    In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and fractional Brownian motions. We find that both approaches have comparable performances in unveiling the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
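
    A minimal sketch of the DMA ingredient shared by both approaches: backward moving-average detrending of the profile, followed by the q-th order fluctuation function at a single scale. The direct determination of τ(q) and f(α) through the canonical measure is omitted, and all parameters are illustrative.

```python
import numpy as np

def dma_fluctuation(x, n, q):
    """q-th order fluctuation of the backward detrending moving
    average at window size n (one ingredient of MF-DMA)."""
    y = np.cumsum(x - np.mean(x))                         # profile of the series
    trend = np.convolve(y, np.ones(n) / n, mode="valid")  # backward moving average
    resid = y[n - 1:] - trend                             # detrended residuals
    seg = resid[: (len(resid) // n) * n].reshape(-1, n)
    F2 = (seg**2).mean(axis=1)                            # variance per segment
    if q == 0:
        return float(np.exp(0.5 * np.log(F2).mean()))
    return float((F2 ** (q / 2.0)).mean() ** (1.0 / q))

rng = np.random.default_rng(2)
x = rng.normal(size=10_000)                               # monofractal toy input
for q in (-2, 0, 2):
    print(q, dma_fluctuation(x, 64, q))
```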

  5. Applications of ordered weighted averaging (OWA) operators in environmental problems

    Directory of Open Access Journals (Sweden)

    Carlos Llopis-Albert

    2017-04-01

    This paper presents an application of a prioritized weighted aggregation operator based on ordered weighted averaging (OWA) to deal with stakeholders' constructive participation in water resources projects. Stakeholders have different degrees of acceptance or preference regarding the measures and policies to be carried out, which lead to different environmental and socio-economic outcomes, and hence to different levels of stakeholder satisfaction. The methodology establishes a prioritization relationship upon the stakeholders, whose preferences are aggregated by means of weights depending on the satisfaction of the higher-priority policy maker. The methodology has been successfully applied to a Public Participation Project (PPP) in watershed management, thus obtaining efficient environmental measures in conflict resolution problems under actors' preference uncertainties.
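
    The OWA operator itself is simple to state: the weights attach to rank positions of the sorted arguments rather than to particular stakeholders. A minimal sketch with hypothetical satisfaction scores follows; the prioritized variant used in the paper additionally makes the weights depend on the satisfaction of the higher-priority actor.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort the arguments in descending
    order and take the dot product with the (normalized) weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert len(v) == len(w) and np.isclose(w.sum(), 1.0)
    return float(v @ w)

# Hypothetical satisfaction scores of four stakeholders for one measure
scores = [0.9, 0.4, 0.7, 0.6]
print(owa(scores, [0.4, 0.3, 0.2, 0.1]))   # optimistic weighting -> 0.73
print(owa(scores, [0.1, 0.2, 0.3, 0.4]))   # pessimistic weighting -> 0.57
```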

  6. The effects of average revenue regulation on electricity transmission investment and pricing

    International Nuclear Information System (INIS)

    Matsukawa, Isamu

    2008-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occurs, average revenue regulation is allocatively more efficient than a Coasian two-part tariff if the level of capacity under average revenue regulation is higher than that under a Coasian two-part tariff. (author)

  7. Large interface simulation in an averaged two-fluid code

    International Nuclear Information System (INIS)

    Henriques, A.

    2006-01-01

    Different ranges of interface and eddy sizes are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of sizes. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop an LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer term modelling and the LI recognition are validated on analytical and experimental tests. A square-base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibited regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr

  8. International standards and quality control procedures applied to nuclear instruments

    International Nuclear Information System (INIS)

    Urbanski, P.

    2008-01-01

    A survey of international standards related to nuclear instrumentation and QC tests is presented. Of the 29,336 active international standards published by organizations such as ISO, IEC, CEN and CENELEC, only 582 are devoted to nuclear instruments. The international classification of standards (ICS) is shown. Also, the list of 582 international standards related to nuclear instruments is attached. (author)

  9. General aviation aircraft design: applied methods and procedures

    National Research Council Canada - National Science Library

    Gudmundsson, Snorri

    2014-01-01

    Readers will find it a valuable guide to topics such as sizing of horizontal and vertical tails to minimize drag, sizing of lifting surfaces to ensure proper dynamic stability, and numerical performance...

  10. Site Averaged Neutron Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the neutron probe soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  11. Site Averaged Gravimetric Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged for each...

  12. Site Averaged Gravimetric Soil Moisture: 1987-1989 (Betts)

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: Site averaged product of the gravimetric soil moisture collected during the 1987-1989 FIFE experiment. Samples were averaged for each site, then averaged...

  13. Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape

    Energy Technology Data Exchange (ETDEWEB)

    Royer, MP [Pacific Northwest National Laboratory, Portland, OR, USA; Wilkerson, A. [Pacific Northwest National Laboratory, Portland, OR, USA; Wei, M. [The Hong Kong Polytechnic University, Hong Kong, China; Houser, K. [The Pennsylvania State University, University Park, PA, USA; Davis, R. [Pacific Northwest National Laboratory, Portland, OR, USA

    2016-08-10

    An experiment was conducted to evaluate how subjective impressions of color quality vary with changes in average fidelity, average gamut, and gamut shape (which considers the specific hues that are saturated or desaturated). Twenty-eight participants each evaluated 26 lighting conditions, created using four seven-channel tunable LED luminaires, in a 3.1 m by 3.7 m room filled with objects selected to cover a range of hue, saturation, and lightness. IES TM-30 fidelity index (Rf) values ranged from 64 to 93, IES TM-30 gamut index (Rg) values from 79 to 117, and IES TM-30 Rcs,h1 values (a proxy for gamut shape) from -19% to 26%. All lighting conditions delivered the same nominal illuminance and chromaticity. Participants were asked to rate each condition on eight-point semantic differential scales for saturated-dull, normal-shifted, and like-dislike. They were also asked one multiple-choice question, classifying the condition as saturated, dull, normal, or shifted. The findings suggest that gamut shape is more important than average gamut for human preference, where reds play a more important role than other hues. Additionally, average fidelity alone is a poor predictor of human perceptions, although Rf was somewhat better than CIE Ra. The most preferred source had a CIE Ra value of 68, and 9 of the top 12 rated products had a CIE Ra value of 73 or less, which indicates that the commonly used criterion of CIE Ra ≥ 80 may be excluding a majority of preferred light sources.

  14. Diagnostic reference level: an important tool for reducing radiation doses in adult and pediatric nuclear medicine procedures in Brazil.

    Science.gov (United States)

    Willegaignon, José; Braga, Luis F E F; Sapienza, Marcelo T; Coura-Filho, George B; Cardona, Marissa A R; Alves, Carlos E R; Gutterres, Ricardo F; Buchpiguel, Carlos A

    2016-05-01

    This study aimed to establish a concise method for determining a diagnostic reference level (DRL) for adult and pediatric nuclear medicine patients on the basis of diagnostic procedures and administered radioisotopes as a means of controlling medical exposure. A screening was carried out in all Brazilian Nuclear Medicine Service (NMS) establishments, collecting the average activities administered during adult diagnostic procedures and the rules applied to adjust these according to the patient's age and body mass. The 75th percentile of all administered activities was used to establish the DRL for adult patients, with additional correction factors for pediatric patients. Radiation doses from nuclear medicine procedures were calculated for all diagnostic exams on the basis of the average administered activity. A total of 107 NMSs in Brazil agreed to participate in the project. Of the 64 nuclear medicine procedures studied, bone, kidney, and parathyroid scans were found to be used in more than 85% of all the NMSs analyzed. There was a large disparity among the activities administered for the same procedures, in some cases more than a 20-fold difference between the lowest and the highest. Diagnostic exams based on Ga, Tl, and I radioisotopes proved to deliver the largest radiation doses to patients. On introducing the DRL concept into clinical routine, the minimum reduction in radiation doses received by patients was about 15%, the maximum was 95%, and the average was 50% compared with the previously reported administered activities. Variability in the available diagnostic procedures as well as in the amount of activity administered within the same procedure was appreciable not only in Brazil, but worldwide. Global efforts are needed to establish a concise DRL that can be applied in adult and pediatric nuclear medicine procedures, as the application of DRL in clinical routine has been proven to be an important

  15. An integrated approach to investigate the reach-averaged bend scale dynamics of large meandering rivers

    Science.gov (United States)

    Monegaglia, Federico; Henshaw, Alex; Zolezzi, Guido; Tubino, Marco

    2016-04-01

    Planform development of evolving meander bends is a beautiful and complex dynamic phenomenon, controlled by the interplay among hydrodynamics, sediments and floodplain characteristics. In the past decades, morphodynamic models of river meandering have provided a thorough understanding of the unit physical processes interacting at the reach scale during meander planform evolution. Recent years have also seen advances in satellite geosciences able to provide data with increasing resolution and earth coverage, which are becoming an important tool for studying and managing river systems. Analyses of the planform development of meandering rivers through Landsat satellite imagery have been provided in very recent works. Methodologies for the objective and automatic extraction of key river development metrics from multi-temporal satellite images have been proposed, though these are often limited to the extraction of channel centerlines and not always able to yield quantitative data on channel width, migration rates and bed morphology. Overcoming this gap would be a major step forward in integrating morphodynamic theories, models and real-world data for an increased understanding of meandering river dynamics. To fill this gap, a novel automatic procedure for extracting and analyzing the topography and planform dynamics of meandering rivers through time from satellite images is implemented. A robust algorithm able to compute the channel centerline in complex contexts, such as the presence of channel bifurcations and anabranching structures, is used. As a case study, the procedure is applied to the Landsat database for a reach of the well-known Rio Beni, a large, suspended-load-dominated, tropical meandering river flowing through the Bolivian Amazon Basin. The reach-averaged evolution of single bends along Rio Beni over a 30-year period is analyzed, in terms of bend amplification rates computed according to the local centerline migration rate. A

  16. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel-Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data. Mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data is presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to half-life discrepant experimental data. ► Results of the calculations are compared to the ENSDF and DDEP evaluations.
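
    A sketch of a textbook Mandel-Paule-style scheme matching the description above: a common unrecognized-error variance y is added to each measurement's variance and chosen so that the weighted chi-square equals its expectation, falling back to the plain weighted average when the data are consistent. The half-life numbers are hypothetical, and the bisection solver is one convenient choice, not necessarily the paper's.

```python
import numpy as np

def weighted_mean(x, u):
    """Weighted average and its internal uncertainty."""
    w = 1.0 / u**2
    return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

def mandel_paule(x, u):
    """Add a common variance y (unrecognized errors) so that the
    weighted chi-square equals its expectation n - 1."""
    n = len(x)
    def chi2(y):
        w = 1.0 / (u**2 + y)
        m = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - m)**2)
    y = 0.0
    if chi2(0.0) > n - 1:                      # data are discrepant
        lo, hi = 0.0, n * (x.max() - x.min())**2 + u.max()**2
        for _ in range(200):                   # bisection: chi2 decreases in y
            y = 0.5 * (lo + hi)
            if chi2(y) > n - 1:
                lo = y
            else:
                hi = y
    w = 1.0 / (u**2 + y)
    return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Hypothetical discrepant half-life measurements (days)
x = np.array([432.2, 433.9, 431.1, 435.0])
u = np.array([0.5, 0.4, 0.6, 0.3])
print(weighted_mean(x, u))   # internal uncertainty only
print(mandel_paule(x, u))    # uncertainty enlarged by unrecognized errors
```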

  17. Trends and the determination of effective doses for standard X-ray procedures

    International Nuclear Information System (INIS)

    Johnson, H.M.; Neduzak, C.; Gallet, J.; Sandeman, J.

    2001-01-01

    Trends in the entrance skin exposures (air kerma) for standard x-ray imaging procedures are reported for the Province of Manitoba, Canada. Average annual data per procedure, using standard phantoms and standard ion chambers, have been recorded since 1981. For example, chest air kerma (backscatter included) has decreased from 0.14 to 0.09 mGy. Confounding factors may negate the gains unless facility quality control programs are maintained. The data were obtained for a quality assurance and regulatory compliance program. Quoting such data for risk evaluation purposes lacks rigor; hence a compartment model for organ apportioning, using organ absorbed doses and weighting factors, has been applied to determine the effective dose per procedure. The effective doses for the standard procedures are presented, including the value of 0.027 mSv (1999) calculated for the effective dose in PA chest imaging. (author)

  18. Applied in vitro radio bioassay

    International Nuclear Information System (INIS)

    Gaburo, J.C.G.; Sordi, G.M.A.A.

    1992-11-01

    The aim of this publication is to present the concepts and in vitro bioassay techniques as well as the experimental procedures related to the evaluation of internal contamination. The main routes of intake, metabolic behavior, and the possible types of bioassay samples that can be collected for radionuclide analysis are described. Both the biological processes and the chemical and physical behavior of the radioactive material of interest are considered, and the capabilities of analytical techniques to detect and quantify the radionuclides are discussed. Next, the need for quality assurance throughout the procedures is considered, and finally a summary of the techniques applied to the routine internal monitoring of IPEN workers is given. (author)

  19. NGA-West 2 GMPE average site coefficients for use in earthquake-resistant design

    Science.gov (United States)

    Borcherdt, Roger D.

    2015-01-01

    Site coefficients corresponding to those in tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures published by the American Society of Civil Engineers (Standard ASCE/SEI 7-10) are derived from four of the Next Generation Attenuation West2 (NGA-W2) Ground-Motion Prediction Equations (GMPEs). The resulting coefficients are compared with those derived by other researchers and those derived from the NGA-West1 database. The derivation of the NGA-W2 average site coefficients provides a simple procedure to update site coefficients with each update in the Maximum Considered Earthquake Response (MCER) maps. The simple procedure yields average site coefficients consistent with those derived for site-specific design purposes. The NGA-W2 GMPEs provide simple scale factors to reduce conservatism in current simplified design procedures.

  20. Licensing procedures for older drivers.

    Science.gov (United States)

    2013-09-01

    This study examined the driver licensing procedures in all 50 States as they apply to the older (65+) driver. A literature review examined reports of possible declines in older driver capabilities and the ability of a driver licensing agency to scree...

  1. 20 CFR 226.62 - Computing average monthly compensation.

    Science.gov (United States)

    2010-04-01

    20 CFR Part 226 (Employees' Benefits; Railroad Retirement Act; Computing Employee, Spouse, and Divorced Spouse Annuities; Years of Service and Average Monthly Compensation), § 226.62 Computing average monthly compensation: The employee's average monthly compensation is...

  2. On the evaluation of Hardy's thermomechanical quantities using ensemble and time averaging

    International Nuclear Information System (INIS)

    Fu, Yao; To, Albert C

    2013-01-01

    An ensemble averaging approach was investigated for its accuracy and convergence against time averaging in computing continuum quantities such as stress, heat flux and temperature from atomistic-scale quantities. For this purpose, ensemble averaging and time averaging were applied to evaluate Hardy's thermomechanical expressions (Hardy 1982 J. Chem. Phys. 76 622-8) under equilibrium conditions at two different temperatures, as well as in a nonequilibrium process due to shock impact on a Ni crystal modeled using molecular dynamics simulations. It was found that under equilibrium conditions, time averaging requires selection of a time interval larger than a critical time interval to obtain convergence, where the critical time interval can be estimated using the elastic properties of the material. This is because of the significant correlations among the computed thermomechanical quantities at the different time instants employed in computing their time average. On the other hand, the computed thermomechanical quantities from different realizations in ensemble averaging are statistically independent, and thus convergence is always guaranteed. The computed stress, heat flux and temperature show noticeable differences in their convergence behavior, while their confidence intervals increase with temperature. Contrary to the equilibrium setting, time averaging is not equivalent to ensemble averaging in the case of shock wave propagation. Time averaging was shown to have poor performance in computing various thermomechanical fields, either oversmoothing the fields or failing to remove noise. (paper)
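
    A toy analogue of the paper's point, using a strongly correlated AR(1) process as a stand-in for the atomistic quantities: within one trajectory successive samples are correlated, so the effective number of independent samples is set by the correlation time, while ensemble members are independent by construction. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n_steps, n_runs = 0.95, 2000, 200     # strongly correlated AR(1)

def ar1_series(n, phi, rng):
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Time average within a single correlated trajectory
print("time average (one run):", ar1_series(n_steps, phi, rng).mean())

# Ensemble average over statistically independent realizations
finals = np.array([ar1_series(n_steps, phi, rng)[-1] for _ in range(n_runs)])
print("ensemble average:", finals.mean())

# A rough correlation-time estimate sets the 'critical interval':
tau = 1.0 / (1.0 - phi)                    # about 20 steps here
print("effective independent samples per run:", n_steps / (2.0 * tau))
```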

  3. Evidence on a Real Business Cycle Model with Neutral and Investment-Specific Technology Shocks using Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2010-01-01

    The empirical support for a real business cycle model with two technology shocks is evaluated using a Bayesian model averaging procedure. This procedure makes use of a finite mixture of many models within the class of vector autoregressive (VAR) processes. The linear VAR model is

  4. Environmental protection and procedural law

    International Nuclear Information System (INIS)

    Mutschler, U.

    1978-01-01

    For the power industry, which is 'independent of licensing', the Ule/Laubinger statement as well as its discussion at the 52nd German legal experts' day are of considerable importance. It is therefore absolutely necessary to critically investigate the statements of this expert opinion and the considerations on which they are based. This investigation is limited to those licensing procedures which, in the terminology of the experts, are 'similar to the plan approval procedure'. This applies mainly to the procedures according to paragraph 4 ff of the Federal Act on the Protection Against Nuisances and paragraph 7 of the Atomic Energy Law: preliminaries, publication of documents, inspection of files, public hearing, taking of evidence, persons with special responsibilities, administrative proceedings, actions by associations. The deficiencies in the execution of environmental procedural law are briefly mentioned. The notes in the article refer only to air pollution. (orig./HP) [de

  5. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p

  6. Multifractal detrending moving-average cross-correlation analysis.

    Science.gov (United States)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

    There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents h_xy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparative performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparative performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q < 0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of h_xy(q) since its h_xy(2) is closest to 0.5, as expected, and

  7. 40 CFR 152.40 - Who may apply.

    Science.gov (United States)

    2010-07-01

    40 CFR Part 152 (Protection of Environment; Environmental Protection Agency; Pesticide Programs; Pesticide Registration and Classification Procedures; Registration Procedures), § 152.40 Who may apply: Any person may apply for new registration of a pesticide product. Any registrant may apply for amendment of the...

  8. The Effects of Average Revenue Regulation on Electricity Transmission Investment and Pricing

    OpenAIRE

    Isamu Matsukawa

    2005-01-01

    This paper investigates the long-run effects of average revenue regulation on an electricity transmission monopolist who applies a two-part tariff comprising a variable congestion price and a non-negative fixed access fee. A binding constraint on the monopolist's expected average revenue lowers the access fee, promotes transmission investment, and improves consumer surplus. In a case of any linear or log-linear electricity demand function with a positive probability that no congestion occur...

  9. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    The general aim of the thesis was to contribute to the improvement of data analytical techniques within the chemometric field. Regardless of the multivariate structure of the data, it is still common in some fields to perform univariate data analysis using only simple statistics such as the sample mean... Multi-way chemometric methods, such as PCA and PARAFAC models, were applied for analysing spatial and depth profiles of sea water samples, defined by three data modes: depth, variables and geographical location. Emphasis was also put on predicting fluorescence values, as a natural measure of biological activity, by applying... if contamination in the data is present. For this to become a standard procedure, further work is required, aiming at implementing reliable robust algorithms into standard statistical programs.

  10. Relationships between feeding behavior and average daily gain in cattle

    Directory of Open Access Journals (Sweden)

    Bruno Fagundes Cunha Lage

    2013-12-01

    Several studies have reported relationships between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®), which identifies and records individual feeding patterns, has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain (ADG) in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV) and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit time at the feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD): high ADG (> mean + 1.0 SD), medium ADG (within ± 1.0 SD of the mean) and low ADG (< mean - 1.0 SD). Data were analyzed using a mixed model procedure (SAS 9.3). The model included animal and residue as random effects and the fixed effects of ADG class (1, 2 and 3), with age at the middle of the test as a covariate. Low-gain animals spent 21.8% less time with the head down than medium- or high-gain animals (P<0.05). Significant effects of ADG class on FR were observed (P<0.01): high-ADG animals consumed more feed per unit time (g.min-1) than low- and medium-ADG animals. No differences were observed (P>0.05) among ADG classes for FV, indicating that these traits are not related to each other. These results show that ADG is related to how quickly animals eat, and not to the time spent at the bunk or the number of visits over a 24-hour period.
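
    The two derived quantities are straightforward to compute. A small sketch with hypothetical weigh-day and feeder records: ADG as the regression slope of weight on days on test, and FR as dry matter intake per minute at the feeder.

```python
import numpy as np

# Hypothetical records for one animal over a 70-day test
days = np.arange(0, 71, 14)                                     # weigh days
weights = np.array([248.0, 259.0, 270.0, 282.0, 295.0, 306.0])  # kg

adg = np.polyfit(days, weights, 1)[0]     # ADG = slope of weight on days

dm_intake_g = 9500.0                      # daily dry matter intake, g
feeder_min = 95.0                         # daily time at feeder, min
feed_rate = dm_intake_g / feeder_min      # FR in g per minute

print(f"ADG = {adg:.2f} kg/day, FR = {feed_rate:.1f} g/min")
```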

  11. Regulations and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Lydia J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2011-07-25

    The purpose of the Regulations and Procedures Manual (RPM) is to provide LBNL personnel with a reference to University and Lawrence Berkeley National Laboratory (LBNL or Laboratory) policies and regulations by outlining normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory organizations. Much of the information in this manual has been condensed from detail provided in LBNL procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. RPM sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the LBNL organization responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which organization is responsible for a policy, please contact Requirements Manager Lydia Young or the RPM Editor.

  12. Regulations and Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Lydia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2010-09-30

    The purpose of the Regulations and Procedures Manual (RPM) is to provide Laboratory personnel with a reference to University and Lawrence Berkeley National Laboratory policies and regulations by outlining the normal practices and answering most policy questions that arise in the day-to-day operations of Laboratory departments. Much of the information in this manual has been condensed from detail provided in Laboratory procedure manuals, Department of Energy (DOE) directives, and Contract DE-AC02-05CH11231. This manual is not intended, however, to replace any of those documents. The sections on personnel apply only to employees who are not represented by unions. Personnel policies pertaining to employees represented by unions may be found in their labor agreements. Questions concerning policy interpretation should be directed to the department responsible for the particular policy. A link to the Managers Responsible for RPM Sections is available on the RPM home page. If it is not clear which department should be called, please contact the Associate Laboratory Director of Operations.

  13. Program Baseline Change Control Procedure

    International Nuclear Information System (INIS)

    1993-02-01

    This procedure establishes the responsibilities and process for approving initial issues of and changes to the technical, cost, and schedule baselines, and selected management documents developed by the Office of Civilian Radioactive Waste Management (OCRWM) for the Civilian Radioactive Waste Management System. This procedure implements the OCRWM Baseline Management Plan and DOE Order 4700.1, Chg 1. It streamlines the change control process to enhance integration, accountability, and traceability of Level 0 and Level 1 decisions through standardized Baseline Change Proposal (BCP) forms to be used by the Level 0, 1, 2, and 3 Baseline Change Control Boards (BCCBs) and to be tracked in the OCRWM-wide Configuration Information System (CIS) Database. This procedure applies to all technical, cost, and schedule baselines controlled by the Energy System Acquisition Advisory Board (ESAAB) BCCB (Level 0) and the OCRWM Program Baseline Change Control Board (PBCCB) (Level 1). All baseline BCPs initiated by Level 2 or lower BCCBs, which require approval from the ESAAB or PBCCB, shall be processed in accordance with this procedure. This procedure also applies to all Program-level management documents controlled by the OCRWM PBCCB.

  14. 77 FR 7237 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2012-02-10

    ... Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: In a decision served on... productivity for the 2006-2010 (5-year) averaging period. This represents a 0.6% decrease over the average for...

  15. 78 FR 10262 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2013-02-13

    ... Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed railroad cost recovery procedures productivity adjustment. SUMMARY: In a decision served on... productivity for the 2007-2011 (5-year) averaging period. This represents a 0.1% increase over the average for...

  16. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant of the applied dispatching rule, whereas for single-stage production systems it can be used as a dispatching-rule-independent indicator.
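
    The "processing time weighted production lead time" named above is simple to compute from per-order records. Below is a minimal Python sketch of one plausible reading of that indicator, with invented numbers; it does not reproduce the paper's derivation.

    ```python
    import numpy as np

    def weighted_avg_lead_time(processing_times, lead_times):
        """Processing-time-weighted average production lead time:
        each order i contributes its lead time L_i weighted by its
        processing time p_i (equal weights recover the plain average)."""
        p = np.asarray(processing_times, dtype=float)
        L = np.asarray(lead_times, dtype=float)
        return np.sum(p * L) / np.sum(p)

    # Three orders observed under some dispatching rule (hours).
    print(weighted_avg_lead_time([2.0, 1.0, 3.0], [5.0, 4.0, 9.0]))  # 6.83...
    ```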

  17. Using tomography of GPS TEC to routinely determine ionospheric average electron density profiles

    Science.gov (United States)

    Yizengaw, E.; Moldwin, M. B.; Dyson, P. L.; Essex, E. A.

    2007-03-01

    This paper introduces a technique that calculates average electron density (Ne) profiles over a wide geographic area of coverage, using tomographic ionospheric Ne profiles. These Ne profiles, which can provide information on the Ne distribution up to global positioning system (GPS) orbiting altitude (with the coordination of space-based GPS tomographic profiles), can be incorporated into the next generation of the international reference ionosphere (IRI) model. An additional advantage of tomography is that it enables accurate modeling of the topside ionosphere. By applying the tomographic reconstruction approach to ground-based GPS slant total electron content (STEC), we calculate 3-h average Ne profiles over a wide region. Since they are based on real measurements, the tomographic average Ne profiles describe the ionosphere during quiet and disturbed periods. The computed average Ne profiles are compared with IRI model profiles and average Ne profiles obtained from ground-based ionosondes.

  18. Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method

    Science.gov (United States)

    Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.

    2017-08-01

    This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first order in the Knudsen parameter, which is defined as the ratio of the time between interactions (mean free time) to a characteristic macroscopic time. Although the general equations obtained here may be applied in a wide range of disciplines, in this paper only a particular case related to the evolution of averages in speculative markets is examined.

  19. Principles of resonance-averaged gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1981-01-01

    The unambiguous determination of excitation energies, spins, parities, and other properties of nuclear levels is the paramount goal of the nuclear spectroscopist. All developments of nuclear models depend upon the availability of a reliable data base on which to build. In this regard, slow neutron capture gamma-ray spectroscopy has proved to be a valuable tool. The observation of primary radiative transitions connecting initial and final states can provide definite level positions. In particular, the use of the resonance-averaged capture technique has received much recent attention because of the claims advanced for this technique (Chrien 1980a, Casten 1980): that it is able to identify all states in a given spin-parity range and to provide definite spin-parity information for these states. In view of the importance of this method, it is perhaps surprising that until now no firm analytical basis has been provided which delineates its capabilities and limitations. Such an analysis is necessary to establish the spin-parity assignments derived from this method on a quantitative basis; in other words, a quantitative statement of the limits of error must be provided. It is the principal aim of the present paper to present such an analysis. To do this, a historical description of the technique and its applications is presented and the principles of the method are stated. Finally, a method of statistical analysis is described, and the results are applied to recent measurements carried out at the filtered beam facilities at the Brookhaven National Laboratory.

  20. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
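
    For orientation, here is a minimal sketch of the BMA combination step under strong simplifying assumptions: the weights come from a fixed-variance Gaussian likelihood over a training period, whereas a full BMA implementation (as presumably used in the study) would also estimate the predictive variances, e.g. via EM. All numbers, array shapes, and the 3.0 m threshold are hypothetical.

    ```python
    import numpy as np

    # Hypothetical ensemble: rows = model configurations, cols = time steps.
    ensemble = np.array([
        [2.1, 2.4, 3.0],   # model 1 water stages (m)
        [2.0, 2.6, 3.3],   # model 2
        [2.3, 2.5, 2.9],   # model 3
    ])
    obs = np.array([2.2, 2.5, 3.1])  # training-event observations

    # Toy BMA weights: Gaussian likelihood with fixed error variance,
    # normalized to sum to one.
    sigma = 0.2
    loglik = -0.5 * np.sum((ensemble - obs) ** 2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())
    w /= w.sum()

    # Deterministic BMA prediction: weight-averaged ensemble.
    bma_stage = w @ ensemble
    # Probabilistic output, e.g. inundation if stage exceeds a threshold.
    p_flood = w @ (ensemble > 3.0).astype(float)
    print(bma_stage, p_flood)
    ```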

  1. The flattening of the average potential in models with fermions

    International Nuclear Information System (INIS)

    Bornholdt, S.

    1993-01-01

    The average potential is a scale-dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential, which determines the scale of spontaneous symmetry breaking. (orig.)

  2. Development of a stacked ensemble model for forecasting and analyzing daily average PM2.5 concentrations in Beijing, China.

    Science.gov (United States)

    Zhai, Binxu; Chen, Jianguo

    2018-04-18

    A stacked ensemble model is developed for forecasting and analyzing the daily average concentrations of fine particulate matter (PM2.5) in Beijing, China. Special feature extraction procedures, including simplification, polynomial, transformation and combination, are conducted before modeling to identify potentially significant features based on an exploratory data analysis. Stability feature selection and tree-based feature selection methods are applied to select important variables and evaluate the degrees of feature importance. Single models including LASSO, Adaboost, XGBoost and a multi-layer perceptron optimized by the genetic algorithm (GA-MLP) are established in the level-0 space and are then integrated by support vector regression (SVR) in the level-1 space via stacked generalization. A feature importance analysis reveals that nitrogen dioxide (NO2) and carbon monoxide (CO) concentrations measured in the city of Zhangjiakou are the most important pollution factors for forecasting PM2.5 concentrations. Local extreme wind speeds and maximal wind speeds are found to exert the strongest meteorological effects on the cross-regional transport of contaminants. Pollutants from the cities of Zhangjiakou and Chengde have a stronger impact on air quality in Beijing than other surrounding factors. Our model evaluation shows that the ensemble model generally performs better than a single nonlinear forecasting model when applied to new data, with a coefficient of determination (R²) of 0.90 and a root mean squared error (RMSE) of 23.69 µg/m³. For single-pollutant grade recognition, the proposed model performs better when applied to days characterized by good air quality than when applied to days registering high levels of pollution. The overall classification accuracy is 73.93%, with most misclassifications made among adjacent categories. The results demonstrate the interpretability and generalizability of
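
    A minimal sketch of the level-0/level-1 architecture described above, using scikit-learn's StackingRegressor on synthetic data. The paper's GA-optimized MLP and XGBoost are approximated here by a plain MLP and AdaBoost, and the feature engineering and stability selection steps are omitted.

    ```python
    from sklearn.datasets import make_regression
    from sklearn.ensemble import AdaBoostRegressor, StackingRegressor
    from sklearn.linear_model import Lasso
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    # Synthetic stand-in for the engineered PM2.5 feature matrix.
    X, y = make_regression(n_samples=500, n_features=20, noise=10.0,
                           random_state=0)

    # Level-0 learners.
    level0 = [
        ("lasso", Lasso(alpha=0.1)),
        ("ada", AdaBoostRegressor(random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0)),
    ]

    # Level-1 integrator: SVR combines out-of-fold level-0 predictions.
    stack = StackingRegressor(estimators=level0, final_estimator=SVR(C=10.0))
    stack.fit(X, y)
    print("R^2 on training data:", stack.score(X, y))
    ```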

  3. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  4. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  5. Stochastic Simulation of Hourly Average Wind Speed in Umudike ...

    African Journals Online (AJOL)

    Ten years of hourly average wind speed data were used to build a seasonal autoregressive integrated moving average (SARIMA) model. The model was used to simulate hourly average wind speed and recommend possible uses at Umudike, south-eastern Nigeria. Results showed that the simulated wind behaviour was ...
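
    A minimal sketch of fitting and simulating such a model with statsmodels' SARIMAX. The synthetic data and the (1,0,1)x(1,1,1,24) orders are illustrative assumptions, not the orders identified in the study; a 24-hour seasonal period captures the diurnal cycle of hourly wind speed.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic hourly wind speeds with a 24-h cycle (stand-in for the
    # Umudike observations).
    rng = np.random.default_rng(0)
    hours = pd.date_range("2010-01-01", periods=24 * 60, freq="h")
    speed = (4 + 1.5 * np.sin(2 * np.pi * hours.hour / 24)
             + rng.normal(0, 0.5, hours.size))
    series = pd.Series(speed, index=hours)

    model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
    fit = model.fit(disp=False)

    # Simulate one synthetic day of hourly wind speeds from the fit.
    simulated = fit.simulate(nsimulations=24, anchor="end")
    print(simulated.head())
    ```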

  6. Average Weekly Alcohol Consumption: Drinking Percentiles for American College Students.

    Science.gov (United States)

    Meilman, Philip W.; And Others

    1997-01-01

    Reports a study that examined the average number of alcoholic drinks that college students (N=44,433) consumed per week. Surveys indicated that most students drank little or no alcohol on an average weekly basis. Only about 10% of the students reported consuming an average of 15 drinks or more per week. (SM)

  7. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  8. Averaging Tesseral Effects: Closed Form Relegation versus Expansions of Elliptic Motion

    Directory of Open Access Journals (Sweden)

    Martin Lara

    2013-01-01

    Full Text Available Longitude-dependent terms of the geopotential cause nonnegligible short-period effects in orbit propagation of artificial satellites. Hence, accurate analytical and semianalytical theories must cope with tesseral harmonics. Modern algorithms for dealing analytically with them allow for closed form relegation. Nevertheless, current procedures for the relegation of tesseral effects from subsynchronous orbits are unavoidably related to orbit eccentricity, a key fact that is not enough emphasized and constrains application of this technique to small and moderate eccentricities. Comparisons with averaging procedures based on classical expansions of elliptic motion are carried out, and the pros and cons of each approach are discussed.

  9. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.

  10. Utilisation of pathology procedures in the South African private ...

    African Journals Online (AJOL)

    The average cost per active beneficiary per month varied between R494 and R611 in 2005. Relatively few common test procedures (30) contributed disproportionately to the total number of procedures (67.8%) and cost (56.9%) of laboratory testing. The utilisation of individual procedures varied between laboratories with ...

  11. New procedure for departure formalities

    CERN Multimedia

    HR & GS Departments

    2011-01-01

    As part of the process of simplifying procedures and rationalising administrative processes, the HR and GS Departments have introduced new personalised departure formalities on EDH. These new formalities have applied to students leaving CERN since last year and from 17 October 2011 this procedure will be extended to the following categories of CERN personnel: Staff members, Fellows and Associates. It is planned to extend this electronic procedure to the users in due course. What purpose do departure formalities serve? The departure formalities are designed to ensure that members of the personnel contact all the relevant services in order to return any necessary items (equipment, cards, keys, dosimeter, electronic equipment, books, etc.) and are aware of all the benefits to which they are entitled on termination of their contract. The new departure formalities on EDH have the advantage of tailoring the list of services that each member of the personnel must visit to suit his individual contractual and p...

  12. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  13. Exponentially Weighted Moving Average Chart as a Suitable Tool for Nuchal Translucency Quality Review

    Czech Academy of Sciences Publication Activity Database

    Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana

    2014-01-01

    Roč. 34, č. 4 (2014), s. 367-376 ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014

  14. Approximative Krieger-Nelkin orientation averaging and anisotropy of water molecules vibrations

    International Nuclear Information System (INIS)

    Markovic, M.I.

    1974-01-01

    A quantum-mechanical treatment of water molecule dynamics must be taken into account for precise theoretical calculation of neutron differential scattering cross sections. Krieger and Nelkin proposed an approximate method for averaging over molecular orientations relative to the directions of the incoming and scattered neutron. This paper shows that their approach can be successfully applied for a general form of the vibration anisotropy of the water molecule.

  15. Actuator disk model of wind farms based on the rotor average wind speed

    DEFF Research Database (Denmark)

    Han, Xing Xing; Xu, Chang; Liu, De You

    2016-01-01

    Due to the difficulty of estimating the reference wind speed for wake modeling in a wind farm, this paper proposes a new method to calculate the momentum source based on the rotor average wind speed. The proposed model applies a volume correction factor to reduce the influence of the mesh recognition...

  16. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
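
    The contrast between spot measurements and boxcar averages is easy to reproduce numerically. A minimal sketch on synthetic 1-min data (all signal parameters invented):

    ```python
    import numpy as np

    # One day of synthetic 1-min "geomagnetic" variation (nT): a slow
    # daily trend plus a fast 7-min oscillation that hourly sampling
    # can alias.
    t = np.arange(24 * 60)  # minutes
    x = 30 * np.sin(2 * np.pi * t / (24 * 60)) + 5 * np.sin(2 * np.pi * t / 7.0)

    minutes = x.reshape(24, 60)      # one row per hour

    spot = minutes[:, 0]             # instantaneous "spot" value each hour
    boxcar = minutes.mean(axis=1)    # simple 1-h "boxcar" average

    # The fast oscillation survives (aliased) in the spot values but is
    # strongly attenuated in the boxcar averages.
    print(spot.std(), boxcar.std())
    ```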

  17. Procedures for analyzing the effectiveness of siren systems for alerting the public

    International Nuclear Information System (INIS)

    Keast, D.N.; Towers, D.A.; Anderson, G.S.; Kenoyer, J.L.; Desrosiers, A.E.

    1982-09-01

    NUREG-0654, Revision 1 (Criteria for Preparation and Evaluation of Radiological Emergency Response Plans and Preparedness in Support of Nuclear Power Plants), Appendix 3, discusses requirements of the licensees to implement a prompt notification system within the 10-mile emergency planning zone (EPZ) surrounding a nuclear facility. Sirens are being installed for use as part of or as the entire notification system by many licensees. This report describes a procedure for predicting siren system effectiveness under defined conditions within the EPZs. The procedure requires a good topographical map and knowledge of the meteorology, demographics, and human activity patterns within the EPZ. The procedure is intended to be applied to systems of sirens and to obtain average results for a large number (30 or more) of listener locations

  18. Subdiffusion in time-averaged, confined random walks.

    Science.gov (United States)

    Neusius, Thomas; Sokolov, Igor M; Smith, Jeremy C

    2009-07-01

    Certain techniques characterizing diffusive processes, such as single-particle tracking or molecular dynamics simulation, provide time averages rather than ensemble averages. Whereas the ensemble-averaged mean-squared displacement (MSD) of an unbounded continuous time random walk (CTRW) with a broad distribution of waiting times exhibits subdiffusion, the time-averaged MSD, δ², does not. We demonstrate that, in contrast to the unbounded CTRW, in which δ² is linear in the lag time Δ, the time-averaged MSD of the CTRW of a walker confined to a finite volume is sublinear in Δ, i.e., for long lag times δ² ~ Δ^(1-α). The present results permit the application of CTRW to interpret time-averaged experimental quantities.
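
    A minimal sketch of the time-averaged MSD estimator δ²(Δ) on a single trajectory. The walk below is ordinary Brownian motion folded into a box by reflection, not a heavy-tailed CTRW, so it only illustrates the estimator itself, not the subdiffusive waiting-time statistics.

    ```python
    import numpy as np

    def time_averaged_msd(x, lags):
        """delta^2(Delta) = <(x(t+Delta) - x(t))^2> averaged over t."""
        x = np.asarray(x, dtype=float)
        return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

    rng = np.random.default_rng(1)
    steps = rng.normal(0.0, 1.0, 100_000).cumsum()
    # Reflect the unbounded walk into the interval [-50, 50].
    confined = np.abs((steps + 50) % 200 - 100) - 50

    lags = np.unique(np.logspace(0, 4, 20).astype(int))
    msd = time_averaged_msd(confined, lags)
    print(np.column_stack([lags, msd]))
    ```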

  19. Vygotsky in applied neuropsychology

    Directory of Open Access Journals (Sweden)

    Glozman J. M.

    2016-12-01

    Full Text Available The aims of this paper are: 1) to show the role of clinical experience in the theoretical contributions of L.S. Vygotsky, and 2) to analyze the development of these theories in contemporary applied neuropsychology. An analysis of disturbances of mental functioning is impossible without a systemic approach to the evidence observed. Therefore, medical psychology is fundamental for forming a systemic approach to psychology. The assessment of neurological patients at the neurological hospital of Moscow University permitted L.S. Vygotsky to create, in collaboration with A.R. Luria, the theory of systemic dynamic localization of higher mental functions and their relationship to cultural conditions. In his studies of patients with Parkinson's disease, Vygotsky also set out 3 steps of systemic development: interpsychological, then extrapsychological, then intrapsychological. L.S. Vygotsky and A.R. Luria in the late 1920s created a program to compensate for the motor subcortical disturbances in Parkinson's disease (PD) through a cortical (visual) mediation of movements. We propose to distinguish objective mediating factors, like teaching techniques and modalities, from subjective mediating factors, like the individual's internal representation of his/her own disease. The cultural-historical approach in contemporary neuropsychology forces neuropsychologists to re-analyze and re-interpret the classic neuropsychological syndromes; to develop new assessment procedures more in accordance with the patient's conditions of life; and to reconsider the concept of the social brain as a social and cultural determinant and regulator of brain functioning. L.S. Vygotsky and A.R. Luria proved that a defect interferes with a child's appropriation of his/her culture, but cultural means can help the child overcome the defect. In this way, the cultural-historical approach became, and still is, a methodological basis for remedial education.

  20. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.

  1. 5 CFR 870.1103 - Election procedures.

    Science.gov (United States)

    2010-01-01

    ... (CONTINUED) FEDERAL EMPLOYEES' GROUP LIFE INSURANCE PROGRAM Living Benefits § 870.1103 Election procedures. (a) The insured individual must request information on Living Benefits and an application form directly from OFEGLI. (b)(1) Only the insured individual can apply for a Living Benefit; no one can apply...

  2. Applied and Professional Ethics

    OpenAIRE

    Collste, Göran

    2012-01-01

    The development of applied ethics in recent decades has had great significance for philosophy and society. In this article, I try to characterise this field of philosophical inquiry. I also discuss the relation of applied ethics to social policy and to professional ethics. In the first part, I address the following questions: What is applied ethics? When and why did applied ethics appear? How do we engage in applied ethics? What are the methods? In the second part of the article, I introduce...

  3. Estimation of average bioburden values on flexible gastrointestinal ...

    African Journals Online (AJOL)

    Infections related to flexible endoscopic procedures are caused by either endogenous flora or exogenous microbes. The first major challenge of reprocessing is infection control; most episodes of infection can be traced to procedural errors in cleaning and disinfecting. The second major challenge is to protect personnel and ...

  4. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    http://www.ias.ac.in/article/fulltext/pram/079/03/0493-0499. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. Abstract. Random geographical networks are realistic models for wireless sensor networks which are used in many applications. Achieving average ...
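
    For orientation, here is a minimal sketch of classical linear average consensus, the basic building block behind such algorithms; the network and step size are invented, and the paper's specific algorithm for random geographical networks is not reproduced.

    ```python
    import numpy as np

    # Toy undirected network (adjacency matrix) and initial readings.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    x = np.array([10.0, 4.0, 6.0, 8.0])   # true average = 7.0

    # Each node nudges its value toward its neighbors'. The step size
    # must satisfy eps < 1/max_degree for stability (here 0.3 < 1/3).
    eps = 0.3
    for _ in range(200):
        x = x + eps * (A @ x - A.sum(axis=1) * x)

    print(x)   # every entry converges to the initial average, 7.0
    ```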

  5. 20 CFR 404.221 - Computing your average monthly wage.

    Science.gov (United States)

    2010-04-01

    ... is rounded down to $502. (e) “Deemed” average monthly wage for certain deceased veterans of World War II. Certain deceased veterans of World War II are “deemed” to have an average monthly wage of $160... years; or (iii) 1974, we count the years beginning with 1951 and ending with the year before you reached...

  6. Averaged EMG profiles in jogging and running at different speeds

    NARCIS (Netherlands)

    Gazendam, Marnix G. J.; Hof, At L.

    EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s⁻¹ and running 1.25-4.5 m s⁻¹. The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed

  7. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video...

  8. The Average Covering Tree Value for Directed Graph Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.

    2012-01-01

    Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all

  9. Interpreting Bivariate Regression Coefficients: Going beyond the Average

    Science.gov (United States)

    Halcoussis, Dennis; Phillips, G. Michael

    2010-01-01

    Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
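
    A sketch of the intercept-only regression trick that recovers three of these averages (a standard construction; whether it matches the note's exact framework is an assumption). Weighted averages follow the same pattern with weighted least squares.

    ```python
    import numpy as np
    import statsmodels.api as sm

    y = np.array([2.0, 4.0, 8.0])
    X = np.ones_like(y)               # intercept-only regression

    arith = sm.OLS(y, X).fit().params[0]                   # 4.667
    geom = np.exp(sm.OLS(np.log(y), X).fit().params[0])    # 4.0
    harm = 1 / sm.OLS(1 / y, X).fit().params[0]            # 3.429

    print(arith, geom, harm)
    ```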

  10. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Science.gov (United States)

    2010-04-01

    ... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost...

  11. Transgenesis procedures in Xenopus

    Science.gov (United States)

    Chesneau, Albert; Sachs, Laurent M.; Chai, Norin; Chen, Yonglong; Pasquier, Louis Du; Loeber, Jana; Pollet, Nicolas; Reilly, Michael; Weeks, Daniel L.; Bronchain, Odile J.

    2010-01-01

    Stable integration of foreign DNA into the frog genome has been the purpose of several studies aimed at generating transgenic animals or producing mutations of endogenous genes. Inserting DNA into a host genome can be achieved in a number of ways. In Xenopus, different strategies have been developed which exhibit specific molecular and technical features. Although several of these technologies were also applied in various model organisms, the attributes of each method have rarely been experimentally compared. Investigators are thus confronted with a difficult choice in discriminating which method would be best suited for their applications. To gain better understanding, a transgenesis workshop was organized by the X-omics consortium. Three procedures were assessed side-by-side, and the results obtained are used to illustrate this review. In addition, a number of reagents and tools have been set up for the purpose of gene expression and functional gene analyses. This not only improves the status of Xenopus as a powerful model for developmental studies, but also renders it suitable for sophisticated genetic approaches. Twenty years after the first reported transgenic Xenopus, we review the state of the art of transgenic research, focusing on the new perspectives in performing genetic studies in this species. PMID:18699776

  12. Some series of intuitionistic fuzzy interactive averaging aggregation operators.

    Science.gov (United States)

    Garg, Harish

    2016-01-01

    In this paper, a series of new intuitionistic fuzzy averaging aggregation operators is presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and then new operational laws, which take the hesitation degree between the membership functions into account, are proposed to overcome them. Based on these new operational laws, new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, are proposed. Furthermore, desirable properties such as idempotency, boundedness and homogeneity are studied. Finally, a multi-criteria decision-making method based on the proposed operators is presented for selecting the best alternative. A detailed comparison between the proposed operators and the existing ones is carried out.
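
    For orientation, a minimal sketch of the classical (non-interactive) intuitionistic fuzzy weighted averaging operator from the literature; the IFHIWA-family operators proposed in the paper modify these operational laws using the hesitation degree and are not reproduced here.

    ```python
    import numpy as np

    def ifwa(alphas, weights):
        """Classical IFWA: aggregate (mu, nu) pairs with weights w,
        IFWA = (1 - prod((1 - mu_i)^w_i), prod(nu_i^w_i))."""
        mu = np.array([a[0] for a in alphas])
        nu = np.array([a[1] for a in alphas])
        w = np.asarray(weights, dtype=float)
        return 1 - np.prod((1 - mu) ** w), np.prod(nu ** w)

    # Three intuitionistic fuzzy assessments and their weights.
    print(ifwa([(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)], [0.3, 0.3, 0.4]))
    ```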

  13. The procedural egalitarian solution

    NARCIS (Netherlands)

    Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud

    2017-01-01

    In this paper we introduce and analyze the procedural egalitarian solution for transferable utility games. This new concept is based on the result of a coalitional bargaining procedure in which egalitarian considerations play a central role. The procedural egalitarian solution is the first

  14. The Procedural Egalitarian Solution

    NARCIS (Netherlands)

    Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud

    2016-01-01

    In this paper we introduce and analyze the procedural egalitarian solution for transferable utility games. This new concept is based on the result of a coalitional bargaining procedure in which egalitarian considerations play a central role. The procedural egalitarian solution is the first

  15. Applied survival analysis using R

    CERN Document Server

    Moore, Dirk F

    2016-01-01

    Applied Survival Analysis Using R covers the main principles of survival analysis, gives examples of how it is applied, and teaches how to put those principles to use to analyze data using R as a vehicle. Survival data, where the primary outcome is time to a specific event, arise in many areas of biomedical research, including clinical trials, epidemiological studies, and studies of animals. Many survival methods are extensions of techniques used in linear regression and categorical data, while other aspects of this field are unique to survival data. This text employs numerous actual examples to illustrate survival curve estimation, comparison of survivals of different groups, proper accounting for censoring and truncation, model variable selection, and residual analysis. Because explaining survival analysis requires more advanced mathematics than many other statistical topics, this book is organized with basic concepts and most frequently used procedures covered in earlier chapters, with more advanced topics...

  16. A Derivation of the Nonlocal Volume-Averaged Equations for Two-Phase Flow Transport

    Directory of Open Access Journals (Sweden)

    Gilberto Espinosa-Paredes

    2012-01-01

    Full Text Available In this paper a detailed derivation of the general transport equations for two-phase systems using a method based on nonlocal volume averaging is presented. The local volume averaging equations are commonly applied in nuclear reactor systems for optimal design and safe operation. Unfortunately, these equations are subject to length-scale restrictions and, according to the theory of the volume averaging method, they fail at transitions between flow patterns and at boundaries between a two-phase flow and a solid, which produce rapid changes in the physical properties and void fraction. The nonlocal volume averaging equations derived in this work contain new terms related to nonlocal transport effects due to accumulation, convection, diffusion and transport properties for two-phase flow; for instance, they can be applied at the boundary between a two-phase flow and a solid phase, or at the boundary of the transition region of two-phase flows where the local volume averaging equations fail.

  17. The average concentrations of 226Ra and 210Pb in foodstuff cultivated in the Pocos de Caldas plateau

    International Nuclear Information System (INIS)

    Hollanda Vasconcellos, L.M. de.

    1984-01-01

    The average concentrations of 226Ra and 210Pb in vegetables cultivated in the Pocos de Caldas plateau, mainly potatoes, carrots, beans and corn, were determined, and the average soil-to-foodstuff transfer factors for both radionuclides were estimated. The total 226Ra and 210Pb content in the soil was determined by gamma spectrometry. The exchangeable fraction was obtained by the classical radon emanation procedure, and the 210Pb was isolated by a radiochemical procedure and determined by radiometry of the beta emissions of its daughter 210Bi with a Geiger-Muller counter. (M.A.C.) [pt

  18. A Study of Applying Virtual Team to the Communication Strategy and Procedure for Collaboration in a Digital Media Design Project

    OpenAIRE

    Wei-Ru Chen; Han-yun Yang; Chaoyun Chaucer Liang

    2003-01-01

    Through the application of information and communication technology, digital media design can be carried out by virtual teams that integrate professionals from different fields and locations to complete design tasks jointly. This study conducted case interviews with digital media design teams in industry that currently collaborate virtually, examined how these teams carry out design activities through virtual collaboration, summarized the communication strategies and collaboration procedures of virtual teams, and analyzed the strengths and weaknesses of virtual collaboration for digital media design teams. The results show that there is a genuine need for the virtual team mode of collaboration, but its necessity and benefits should be weighed along three main dimensions: (1) the goals and member structure of team building, (2) the tools and communication information used to connect the team, and (3) the team's design tasks and virtual collaboration procedures. The development of information communication technology enables a digital media design project to apply virtual team to its communication strategy and procedure for collaboration. This study discussed the needs for building a virtual team of digital ...

  19. Developing policies and procedures.

    Science.gov (United States)

    Randolph, Susan A

    2006-11-01

    The development of policies and procedures is an integral part of the occupational health nurse's role. Policies and procedures serve as the foundation for the occupational health service and are based on its vision, mission, culture, and values. The design and layout selected for the policies and procedures should be simple, consistent, and easy to use. The same format should be used for all existing and new policies and procedures. Policies and procedures should be reviewed periodically based on a specified time frame (i.e., annually). However, some policies may require a more frequent review if they involve rapidly changing external standards, ethical issues, or emerging exposures.

  20. Bounds on Average Time Complexity of Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
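
    The entropy lower bound stated above can be written out as follows (notation mine): for a diagnostic problem z over a k-valued information system with probability distribution P = (p_1, ..., p_N) on the rows of its decision table,

    ```latex
    \[
      h^{\min}_{\mathrm{avg}}(z) \;\ge\; \frac{H(P)}{\log_2 k},
      \qquad H(P) = -\sum_{i=1}^{N} p_i \log_2 p_i ,
    \]
    ```

    and, per the text, problems with a complete set of attributes exceed this bound by at most one.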

  1. Alternatives to national average income data as eligibility criteria for international subsidies: a social justice perspective.

    Science.gov (United States)

    Shebaya, Sirine; Sutherland, Andrea; Levine, Orin; Faden, Ruth

    2010-12-01

    Current strategies to address global inequities in access to life-saving vaccines use averaged national income data to determine eligibility. While largely successful in the lowest income countries, we argue that this approach could lead to significant inefficiencies from the standpoint of justice if applied to middle-income countries, where income inequalities are large and lead to national averages that obscure truly needy populations. Instead, we suggest alternative indicators more sensitive to social justice concerns that merit consideration by policy-makers developing new initiatives to redress health inequities in middle-income countries. © 2009 Blackwell Publishing Ltd.

  2. 40 CFR 80.205 - How is the annual refinery or importer average and corporate pool average sulfur level determined?

    Science.gov (United States)

    2010-07-01

    ... volume of gasoline produced or imported in batch i. Si = the sulfur content of batch i determined under § 80.330. n = the number of batches of gasoline produced or imported during the averaging period. i = individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...

  3. Journal of applied mathematics

    National Research Council Canada - National Science Library

    2001-01-01

    "[The] Journal of Applied Mathematics is a refereed journal devoted to the publication of original research papers and review articles in all areas of applied, computational, and industrial mathematics...

  4. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Science.gov (United States)

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m⁻² s⁻¹ estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single

  5. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    Directory of Open Access Journals (Sweden)

    A. Ziemann

    2017-11-01

    Full Text Available An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m⁻² s⁻¹ estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately

  6. Procedure generation and verification

    International Nuclear Information System (INIS)

    Sheely, W.F.

    1986-01-01

    The Department of Energy has used Artificial Intelligence ("AI") concepts to develop two powerful new computer-based techniques to enhance safety in nuclear applications. The Procedure Generation System and the Procedure Verification System can be adapted to other commercial applications, such as a manufacturing plant. The Procedure Generation System can create a procedure to deal with an off-normal condition, so that the operator can take correct actions on the system in minimal time. The Verification System evaluates the logic of the Procedure Generator's conclusions. This evaluation uses logic techniques totally independent of the Procedure Generator. The rapid, accurate generation and verification of corrective procedures can greatly reduce the potential for human error in a complex, high-stress situation

  7. Average action for the N-component φ4 theory

    International Nuclear Information System (INIS)

    Ringwald, A.; Wetterich, C.

    1990-01-01

    The average action is a continuum version of the block spin action in lattice field theories. We compute the one-loop approximation to the average potential for the N-component φ⁴ theory in the spontaneously broken phase. For a finite (linear) block size ∝ k̄⁻¹ this potential is real and nonconvex. For small φ the average potential is quadratic, U_k = -(1/2)k̄²φ², and independent of the original mass parameter and quartic coupling constant. It approaches the convex effective potential as k̄ vanishes. (orig.)

  8. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous...... inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...

  9. Salecker-Wigner-Peres clock and average tunneling times

    Energy Technology Data Exchange (ETDEWEB)

    Lunardi, Jose T., E-mail: jttlunardi@uepg.b [Departamento de Matematica e Estatistica, Universidade Estadual de Ponta Grossa, Av. General Carlos Cavalcanti, 4748. Cep 84030-000, Ponta Grossa, PR (Brazil); Manzoni, Luiz A., E-mail: manzoni@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States); Nystrom, Andrew T., E-mail: atnystro@cord.ed [Department of Physics, Concordia College, 901 8th St. S., Moorhead, MN 56562 (United States)

    2011-01-17

    The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated with the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).

  10. Averaging underwater noise levels for environmental assessment of shipping.

    Science.gov (United States)

    Merchant, Nathan D; Blondel, Philippe; Dakin, D Tom; Dorocicz, John

    2012-10-01

    Rising underwater noise levels from shipping have raised concerns regarding chronic impacts to marine fauna. However, there is a lack of consensus over how to average local shipping noise levels for environmental impact assessment. This paper addresses this issue using 110 days of continuous data recorded in the Strait of Georgia, Canada. Probability densities of ~10⁷ 1-s samples in selected 1/3 octave bands were approximately stationary across one-month subsamples. Median and mode levels varied with averaging time. Mean sound pressure levels averaged in linear space, though susceptible to strong bias from outliers, are most relevant to cumulative impact assessment metrics.
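
    A minimal sketch of mean-in-linear-space averaging of sound pressure levels, showing the outlier sensitivity noted above (numbers invented):

    ```python
    import numpy as np

    # Five 1-s sound pressure levels (dB) with one loud outlier.
    spl = np.array([100.0, 101.0, 99.0, 100.0, 130.0])

    # Mean SPL in linear space: dB -> squared-pressure ratio, average,
    # convert back. The single outlier dominates the result.
    linear_mean = 10 * np.log10(np.mean(10 ** (spl / 10)))

    median = np.median(spl)        # robust alternative
    print(linear_mean, median)     # ~123.0 dB vs 100.0 dB
    ```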

  11. The asymptotic average-shadowing property and transitivity for flows

    International Nuclear Information System (INIS)

    Gu Rongbao

    2009-01-01

    The asymptotic average-shadowing property is introduced for flows, and the relationships between this property and transitivity for flows are investigated. It is shown that a flow on a compact metric space is chain transitive if it has the positively (or negatively) asymptotic average-shadowing property, and that a positively (resp. negatively) Lyapunov stable flow is positively (resp. negatively) topologically transitive provided it has the positively (resp. negatively) asymptotic average-shadowing property. Furthermore, two conditions under which a flow is a minimal flow are obtained.

  12. Readability of Invasive Procedure Consent Forms.

    Science.gov (United States)

    Eltorai, Adam E M; Naqvi, Syed S; Ghanian, Soha; Eberson, Craig P; Weiss, Arnold-Peter C; Born, Christopher T; Daniels, Alan H

    2015-12-01

    Informed consent is a pillar of ethical medicine which requires patients to fully comprehend relevant issues including the risks, benefits, and alternatives of an intervention. Given that the average reading skill of US adults is at the 8th grade level, the American Medical Association (AMA) and the National Institutes of Health (NIH) recommend that patient information materials not exceed a 6th grade reading level. We hypothesized that the text provided in invasive procedure consent forms would exceed recommended readability guidelines for medical information. To test this hypothesis, we gathered procedure consent forms from all surgical inpatient hospitals in the state of Rhode Island. For each consent form, readability was assessed with the following measures: Flesch Reading Ease Formula, Flesch-Kincaid Grade Level, Fog Scale, SMOG Index, Coleman-Liau Index, Automated Readability Index, and Linsear Write Formula. These readability scores were used to calculate a composite Text Readability Consensus Grade Level. Invasive procedure consent forms were found to be written at an average of 15th grade level (i.e., third year of college), which is significantly higher than the average US adult reading level of 8th grade (p < 0.0001) and the AMA/NIH recommended readability guideline for patient materials of 6th grade (p < 0.0001). Invasive procedure consent forms have readability levels that make comprehension difficult or impossible for many patients. Efforts to improve the readability of procedural consent forms should improve patient understanding regarding their healthcare decisions. © 2015 Wiley Periodicals, Inc.
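
    A sketch of such a readability battery, assuming the third-party textstat package (pip install textstat), whose functions implement the formulas named above; the sample sentence is invented.

    ```python
    import textstat

    consent_text = (
        "The physician will perform a laparoscopic cholecystectomy. "
        "Potential complications include hemorrhage, infection, and "
        "injury to adjacent anatomical structures."
    )

    for fn in (textstat.flesch_reading_ease,
               textstat.flesch_kincaid_grade,
               textstat.gunning_fog,
               textstat.smog_index,
               textstat.coleman_liau_index,
               textstat.automated_readability_index,
               textstat.linsear_write_formula):
        print(fn.__name__, fn(consent_text))

    # Consensus grade level across the formulas.
    print(textstat.text_standard(consent_text, float_output=True))
    ```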

  13. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
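
    As a hedged sketch of the weighting step (not the authors' code), posterior model probabilities are commonly approximated from each submodel's BIC, and predictions are then combined with those weights:

        import numpy as np

        def bma_predict(bics, preds, prior=None):
            """Approximate PMPs via PMP_k ∝ exp(-BIC_k/2) * prior_k, then average predictions."""
            bics = np.asarray(bics, dtype=float)
            prior = np.ones_like(bics) if prior is None else np.asarray(prior, dtype=float)
            w = np.exp(-0.5 * (bics - bics.min())) * prior  # shift BICs for numerical stability
            w /= w.sum()
            return w, w @ np.asarray(preds, dtype=float)

        weights, yhat = bma_predict(bics=[102.3, 100.1, 105.9], preds=[1.8, 2.1, 1.5])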

  14. On the performance of Autoregressive Moving Average Polynomial Distributed Lag (ARMAPDL) models

    African Journals Online (AJOL)

    Timothy Ademakinwa

    Using a numerical example, DL, PDL, ARPDL and ARMAPDL models were fitted. The Autoregressive Moving Average Polynomial Distributed Lag (ARMAPDL) model performed better than the other models. Keywords: Distributed Lag Model, Selection Criterion, Parameter Estimation, Residual Variance.

  15. Safety Impact of Average Speed Control in the UK

    DEFF Research Database (Denmark)

    Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert

    2016-01-01

    There is considerable safety potential in ensuring that motorists respect the speed limits. High speeds increase the number and severity of accidents. Technological development over the last 20 years has enabled the development of systems that allow automatic speed control. The first generation...... or section control. This article discusses the different methods for automatic speed control and presents an evaluation of the safety effects of average speed control, documented through changes in speed levels and accidents before and after the implementation of average speed control at selected sites...... in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....

  16. High Average Power Fiber Laser for Satellite Communications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...

  17. Medicare Part B Drug Average Sales Pricing Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...

  18. Averaging of diffusing contaminant concentrations in atmosphere surface layer

    International Nuclear Information System (INIS)

    Ivanov, E.A.; Ramzina, T.V.

    1985-01-01

    Calculations that permit averaging of the concentration fields of diffusing radioactive contaminant released from the NPP exhaust stack into the atmospheric surface layer are given. Formulae for calculating the contaminant concentration field are presented; the field depends on the average wind direction Θ over time T and on the stability of this direction (σ_tgΘ or σ_Θ). The probability of wind direction deviation from the average value over time T is satisfactorily described by the Gauss law. As instability in the atmosphere increases, σ increases; as wind velocity increases, the value of σ decreases for all types of temperature gradients. The non-uniform dependence of σ on averaging time T is emphasized, which requires an accurate choice of the σ_tgΘ and σ_Θ parameters in calculations
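
    One plausible reading of this averaging, sketched under the assumption that wind-direction deviations over time T are Gaussian with spread σ_Θ (the function and parameters are illustrative):

        import numpy as np

        def direction_averaged(conc_at_angle, theta_mean, sigma_theta, n=181):
            """Average a plume concentration over Gaussian wind-direction fluctuations."""
            thetas = np.linspace(theta_mean - 4 * sigma_theta, theta_mean + 4 * sigma_theta, n)
            pdf = np.exp(-0.5 * ((thetas - theta_mean) / sigma_theta) ** 2)
            pdf /= np.trapz(pdf, thetas)  # normalise over the truncated range
            vals = np.array([conc_at_angle(t) for t in thetas])
            return np.trapz(vals * pdf, thetas)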

  19. Time averaging, ageing and delay analysis of financial time series

    Science.gov (United States)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
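
    A minimal sketch of the central observable, the time-averaged MSD of a single trajectory (the toy random walk stands in for a price series; it is not the Dow Jones data used in the paper):

        import numpy as np

        def ta_msd(x, lags):
            """Time-averaged mean squared displacement of one trajectory x(t)."""
            x = np.asarray(x, dtype=float)
            return np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])

        prices = np.cumsum(np.random.default_rng(0).normal(size=10_000))  # toy random walk
        msd = ta_msd(prices, lags=range(1, 100))  # grows ~linearly for Brownian-type motion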

  20. GIS Tools to Estimate Average Annual Daily Traffic

    Science.gov (United States)

    2012-06-01

    This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...

  1. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of one sample every 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
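
    A minimal sketch of the 'stable averaging' idea (the incremental-mean update keeps the displayed average calibrated after every sweep; this is the generic update, not necessarily the instrument's exact algorithm):

        import numpy as np

        def stable_average(sweeps):
            """Running mean updated after each sweep, so the display is always calibrated."""
            avg = np.zeros_like(sweeps[0], dtype=float)
            for n, sweep in enumerate(sweeps, start=1):
                avg += (sweep - avg) / n  # incremental-mean update
            return avg

        # S/N gain from averaging N sweeps of uncorrelated noise is 10*log10(N) dB:
        print(10 * np.log10(2 ** 12))  # ~36.1 dB, matching the quoted 36 dB at 2^12 sweeps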

  2. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  3. Average geodesic distance of skeleton networks of Sierpinski tetrahedron

    Science.gov (United States)

    Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao

    2018-04-01

    The average distance is of interest in research on complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To derive the formula, we develop a technique of finite patterns for the integral of geodesic distance with respect to the self-similar measure on the Sierpinski tetrahedron.

  4. Annual average equivalent dose of workers form health area

    International Nuclear Information System (INIS)

    Daltro, T.F.L.; Campos, L.L.

    1992-01-01

    Personnel monitoring data collected between 1985 and 1991 for workers in the health area were studied, providing a general overview of changes in the annual average equivalent dose. Two different aspects were examined: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)

  5. The average action for scalar fields near phase transitions

    International Nuclear Information System (INIS)

    Wetterich, C.

    1991-08-01

    We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)

  6. Bounds on the Average Sensitivity of Nested Canalizing Functions

    OpenAIRE

    Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen

    2012-01-01

    Nested canalizing Boolean functions (NCFs) play an important role in biologically motivated regulatory networks and in signal processing, in particular describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity for NCFs as a function of the number of relevant input variables...

  7. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    Full Text Available This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
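
    A minimal sketch of the EWMA recursion and a Monte Carlo run-length estimate for exponential data (the smoothing constant and control limit below are illustrative, and the copula-based dependence modelling of the paper is not reproduced):

        import numpy as np

        def ewma_run_length(x, lam, target, limit):
            """Index of the first sample at which the EWMA statistic leaves the limits."""
            z = target
            for i, xi in enumerate(x, start=1):
                z = lam * xi + (1 - lam) * z  # EWMA recursion
                if abs(z - target) > limit:
                    return i
            return len(x)

        rng = np.random.default_rng(1)
        rls = [ewma_run_length(rng.exponential(1.0, 10_000), lam=0.1, target=1.0, limit=0.25)
               for _ in range(500)]
        print(np.mean(rls))  # estimated in-control ARL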

  8. A precise measurement of the average b hadron lifetime

    CERN Document Server

    Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; 
Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G

    1996-01-01

    An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.

  9. Demonstration of a Model Averaging Capability in FRAMES

    Science.gov (United States)

    Meyer, P. D.; Castleton, K. J.

    2009-12-01

    Uncertainty in model structure can be incorporated in risk assessment using multiple alternative models and model averaging. To facilitate application of this approach to regulatory applications based on risk or dose assessment, a model averaging capability was integrated with the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) version 2 software. FRAMES is a software platform that allows the non-parochial communication between disparate models, databases, and other frameworks. Users have the ability to implement and select environmental models for specific risk assessment and management problems. Standards are implemented so that models produce information that is readable by other downstream models and accept information from upstream models. Models can be linked across multiple media and from source terms to quantitative risk/dose estimates. Parameter sensitivity and uncertainty analysis tools are integrated. A model averaging module was implemented to accept output from multiple models and produce average results. These results can be deterministic quantities or probability distributions obtained from an analysis of parameter uncertainty. Output from alternative models is averaged using weights determined from user input and/or model calibration results. A model calibration module based on the PEST code was implemented to provide FRAMES with a general calibration capability. An application illustrates the implementation, user interfaces, execution, and results of the FRAMES model averaging capabilities.

  10. Essays in Applied Microeconomics

    Science.gov (United States)

    Severnini, Edson Roberto

    This dissertation consists of three studies analyzing causes and consequences of location decisions by economic agents in the U.S. In Chapter 1, I address the longstanding question of the extent to which the geographic clustering of economic activity may be attributable to agglomeration spillovers as opposed to natural advantages. I present evidence on this question using data on the long-run effects of large scale hydroelectric dams built in the U.S. over the 20th century, obtained through a unique comparison between counties with or without dams but with similar hydropower potential. Until mid-century, the availability of cheap local power from hydroelectric dams conveyed an important advantage that attracted industry and population. By the 1950s, however, these advantages were attenuated by improvements in the efficiency of thermal power generation and the advent of high tension transmission lines. Using a novel combination of synthetic control methods and event-study techniques, I show that, on average, dams built before 1950 had substantial short run effects on local population and employment growth, whereas those built after 1950 had no such effects. Moreover, the impact of pre-1950 dams persisted and continued to grow after the advantages of cheap local hydroelectricity were attenuated, suggesting the presence of important agglomeration spillovers. Over a 50 year horizon, I estimate that at least one half of the long run effect of pre-1950 dams is due to spillovers. The estimated short and long run effects are highly robust to alternative procedures for selecting synthetic controls, to controls for confounding factors such as proximity to transportation networks, and to alternative sample restrictions, such as dropping dams built by the Tennessee Valley Authority or removing control counties with environmental regulations. I also find small local agglomeration effects from smaller dam projects, and small spillovers to nearby locations from large dams. Lastly

  11. Transferability of hydrological models and ensemble averaging methods between contrasting climatic periods

    Science.gov (United States)

    Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor

    2016-10-01

    Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
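
    As a hedged sketch of the Granger-Ramanathan idea, the member weights solve a least-squares fit of the observations on the member simulations; the exact variant may add an intercept or constrain the weights:

        import numpy as np

        def gra_weights(sim, obs):
            """Least-squares weights for combining member simulations (columns of sim)."""
            w, *_ = np.linalg.lstsq(np.asarray(sim, float), np.asarray(obs, float), rcond=None)
            return w

        # sim: (n_timesteps, n_members) simulated flows; obs: (n_timesteps,) observed flows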

  12. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    Science.gov (United States)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), Granger-Ramanathan averaging variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
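
    For reference, the Nash-Sutcliffe Efficiency used to score the averaged hydrographs is a fixed formula; a minimal implementation:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations (1 is perfect)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)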

  13. 75 FR 56472 - Amendments to Enforceable Consent Agreement Procedural Rules

    Science.gov (United States)

    2010-09-16

    ... Amendments to Enforceable Consent Agreement Procedural Rules AGENCY: Environmental Protection Agency (EPA... provide procedural safeguards equivalent to those that apply where testing is conducted by rule. B. What... comments indicated support for the ECA procedural changes, and had a few specific suggestions: Comment 1...

  14. Civil Procedure In Denmark

    DEFF Research Database (Denmark)

    Werlauff, Erik

    The book contains an up-to-date survey of Danish civil procedure after the profound Danish procedural reforms in 2007. It deals with questions concerning competence and function of Danish courts, commencement and preparation of civil cases, questions of evidence and burden of proof, international...... procedural questions, including relations to the Brussels I Regulation and Denmark's participation in this Regulation via a parallel convention with the EU countries, impact on Danish civil procedure of the convention on human rights, preparation and pronouncement of judgment and verdict, questions of appeal...... scientific activities conducted by the author, partly based on the author's experience as a member, through a number of years, of the Danish Standing Committee on Procedural Law (Retsplejeraadet), which on a continuous basis evaluates the need for civil procedural reforms in Denmark, and finally also based...

  15. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our...... methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply....

  17. An extended car-following model accounting for the average headway effect in intelligent transportation system

    Science.gov (United States)

    Kuang, Hua; Xu, Zhi-Peng; Li, Xing-Li; Lo, Siu-Ming

    2017-04-01

    In this paper, an extended car-following model is proposed to simulate traffic flow by considering the average headway of the preceding vehicle group in an intelligent transportation system environment. The stability condition of this model is obtained by using linear stability analysis. The phase diagram can be divided into three regions, classified as stable, metastable and unstable. The theoretical result shows that the average headway plays an important role in improving the stabilization of the traffic system. The mKdV equation near the critical point is derived to describe the evolution properties of traffic density waves by applying the reductive perturbation method. Furthermore, through simulation of the space-time evolution of the vehicle headway, it is shown that the traffic jam can be suppressed efficiently by taking the average headway effect into account, and the analytical result is consistent with the simulation.
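
    A toy sketch of the mechanism: an optimal-velocity-type update driven by the average headway of the p nearest leaders on a ring road (the functional form and parameters are illustrative, not the paper's exact model):

        import numpy as np

        def ov(h, v_max=2.0, h_c=4.0):
            """A generic optimal-velocity function (tanh form, illustrative parameters)."""
            return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

        def step(x, v, kappa=0.5, p=3, dt=0.1, road=400.0):
            """One update: relax each speed toward the optimal velocity for the
            average headway of the p vehicles ahead (positions on a ring road)."""
            n = len(x)
            h = np.array([np.mean([(x[(i + k) % n] - x[(i + k - 1) % n]) % road
                                   for k in range(1, p + 1)]) for i in range(n)])
            a = kappa * (ov(h) - v)
            return (x + v * dt) % road, v + a * dt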

  18. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
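
    A hedged sketch of the standard DMA weight recursion with a forgetting factor (the dynamic Occam's window step that restricts the model subset is omitted):

        import numpy as np

        def dma_update(weights, pred_densities, alpha=0.99):
            """One DMA step: forgetting flattens the weights toward uniformity, then a
            Bayesian update multiplies by each model's predictive density at the new datum."""
            w = np.asarray(weights, float) ** alpha
            w /= w.sum()
            w *= np.asarray(pred_densities, float)
            return w / w.sum()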

  20. Radon and radon daughters indoors, problems in the determination of the annual average

    International Nuclear Information System (INIS)

    Swedjemark, G.A.

    1984-01-01

    The annual average concentration of radon and radon daughters in indoor air is required both in studies such as determining the collective dose to a population and for comparison with limits. For practical reasons, measurements are often carried out during a period shorter than a year. Methods are presented for estimating the uncertainties, due to temporal variations, in an annual average calculated from measurements carried out over sampling periods of various lengths. These methods have been applied to the results from long-term measurements of radon-222 in a few houses. The possibility of using correction factors to obtain a more adequate annual average has also been studied, and some examples are given. (orig.)

  1. Application of structured illumination to gas phase thermometry using thermographic phosphor particles: a study for averaged imaging

    Science.gov (United States)

    Zentgraf, Florian; Stephan, Michael; Berrocal, Edouard; Albert, Barbara; Böhm, Benjamin; Dreizler, Andreas

    2017-07-01

    Structured laser illumination planar imaging (SLIPI) is combined with gas phase thermometry measurements using thermographic phosphor (TGP) particles. The technique is applied to a heated jet surrounded by a coflow which is operated at ambient temperature. The respective air flows are seeded with a powder of BaMgAl10O17:Eu2+ (BAM) which is used as temperature-sensitive gas phase tracer. Upon pulsed excitation in the ultraviolet spectral range, the temperature is extracted based on the two-color ratio method combined with SLIPI. The main advantage of applying the SLIPI approach to phosphor thermometry is the reduction of particle-to-particle multiple light scattering and diffuse wall reflections, yielding a more robust calibration procedure as well as improving the measurement accuracy, precision, and sensitivity. For demonstration, this paper focuses on sample-averaged measurements of temperature fields in a jet-in-coflow configuration. Using the conventional approach, which in contrast to SLIPI is based on imaging with an unmodulated laser light sheet, we show that for the present setup typically 40% of the recorded signal is affected by the contribution of multiply scattered photons. At locations close to walls even up to 75% of the apparent signal is due to diffuse reflection and wall luminescence of BAM sticking at the surface. Those contributions lead to erroneous temperature fields. Using SLIPI, an unbiased two-color ratio field is recovered allowing for two-dimensional mean temperature reconstructions which exhibit a more realistic physical behavior. This is in contrast to results deduced by the conventional approach. Furthermore, using the SLIPI approach it is shown that the temperature sensitivity is enhanced by a factor of up to 2 at 270 °C. Finally, an outlook towards instantaneous SLIPI phosphorescence thermometry is provided.

  2. 40 CFR 98.345 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills § 98.345 Procedures... data period. (b) For missing gas flow rates, the substitute data value shall be the arithmetic average...

  3. Magnetic fusion energy. Disaster operation procedures

    International Nuclear Information System (INIS)

    1986-06-01

    In a major disaster such as an earthquake, toxic chemical release, or fire, these Disaster Operations Procedures can be used, in combination with good judgment, to minimize the risk of injury to personnel and of property damage in our laboratory, shop, and office areas. These emergency procedures apply to all personnel working within MFE/Zone-11 area including visitors, program contract personnel, and construction contract personnel

  4. Averaged model to study long-term dynamics of a probe about Mercury

    Science.gov (United States)

    Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena

    2018-02-01

    This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average; thus, at the same point in each orbit, the satellite always passes at the same altitude. This is very interesting for scientific missions that require close inspection of a celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body, including the zonal terms of Mercury's gravity field from J2 up to J6 and the tesseral harmonic C22, which is of the same magnitude as J2. For science missions about Mercury it is also important to consider third-body perturbation (Sun); the circular restricted three-body problem cannot be applied to the Mercury-Sun system due to its non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the Sun's gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed by applying a double-averaging technique. This algorithm is a two-fold process which first averages over the period of the satellite and then averages with respect to the period of the third body. The simplified Hamiltonian model is introduced into the Lagrange planetary equations. Frozen orbits are thus characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for average altitudes of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper delves into the orbital stability of frozen...

  5. Advances in Applied Mechanics

    OpenAIRE

    2013-01-01

    Advances in Applied Mechanics draws together recent significant advances in various topics in applied mechanics. Published since 1948, Advances in Applied Mechanics aims to provide authoritative review articles on topics in the mechanical sciences, primarily of interest to scientists and engineers working in the various branches of mechanics, but also of interest to the many who use the results of investigations in mechanics in various application areas, such as aerospace, chemical, civil, en...

  6. Perspectives on Applied Ethics

    OpenAIRE

    2007-01-01

    Applied ethics is a growing, interdisciplinary field dealing with ethical problems in different areas of society. It includes for instance social and political ethics, computer ethics, medical ethics, bioethics, environmental ethics, business ethics, and it also relates to different forms of professional ethics. From the perspective of ethics, applied ethics is a specialisation in one area of ethics. From the perspective of social practice applying ethics is to focus on ethical aspects and ...

  7. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  8. The state support of small and medium entrepreneurship in Ukraine

    Directory of Open Access Journals (Sweden)

    Т.О. Melnyk

    2015-03-01

    Full Text Available The purposes, principles and basic directions of state policy for the development of small and medium-sized business in Ukraine are defined. Conditions and restrictions on granting state support to small and medium-sized businesses are outlined. The modern infrastructure of business support by region is considered. Different kinds of state support for small and medium-sized business are characterized: financial; informational; consulting; support in the spheres of innovation, science and industrial production; support for businesses that conduct export activity; and support in the sphere of training, retraining and improvement of the professional skill of administrative and business staff. Approaches to reforming state control of small and medium-sized business are summarized, especially regarding the estimation of the risk level of economic activities, the quantity and frequency of inspections, the registration of certificates issued as a result of planned state control actions, and the creation of an effective mechanism for coordinating state control bodies. The most promising directions of state support for small and medium-sized business in Ukraine under modern economic conditions are defined.

  9. A Primer on Disseminating Applied Quantitative Research

    Science.gov (United States)

    Bell, Bethany A.; DiStefano, Christine; Morgan, Grant B.

    2010-01-01

    Transparency and replication are essential features of scientific inquiry, yet scientific communications of applied quantitative research are often lacking in much-needed procedural information. In an effort to promote researchers' dissemination of their quantitative studies in a cohesive, detailed, and informative manner, the authors delineate…

  10. Applied Neuroscience Laboratory Complex

    Data.gov (United States)

    Federal Laboratory Consortium — Located at WPAFB, Ohio, the Applied Neuroscience lab researches and develops technologies to optimize Airmen individual and team performance across all AF domains....

  11. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    International Nuclear Information System (INIS)

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized to solve a relativistic average atom model for high-temperature plasmas. A semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the plasma temperature is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated from slightly different physical models; the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  12. Focused information criterion and model averaging based on weighted composite quantile regression

    KAUST Repository

    Xu, Ganggang

    2013-08-13

    We study the focused information criterion and frequentist model averaging and their application to post-model-selection inference for weighted composite quantile regression (WCQR) in the context of the additive partial linear models. With the non-parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR-estimator of a focused parameter is a non-linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non-normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure. © 2013 Board of the Foundation of the Scandinavian Journal of Statistics.

  13. Object detection by correlation coefficients using azimuthally averaged reference projections.

    Science.gov (United States)

    Nicholson, William V

    2004-11-01

    A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
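
    A minimal sketch of the azimuthal averaging step that produces a 1-D radial reference profile (integer-radius binning; the rotational and local correlation computations are omitted):

        import numpy as np

        def azimuthal_average(img):
            """Average a square 2-D image over angle, returning mean intensity per radius."""
            n = img.shape[0]
            y, x = np.indices(img.shape)
            r = np.hypot(x - n // 2, y - n // 2).astype(int)
            counts = np.bincount(r.ravel())
            sums = np.bincount(r.ravel(), weights=img.ravel())
            return sums / np.maximum(counts, 1)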

  14. The average orbit system upgrade for the Brookhaven AGS

    International Nuclear Information System (INIS)

    Ciardullo, D.J.; Brennan, J.M.

    1995-01-01

    The flexibility of the AGS to accelerate protons, polarized protons and heavy ions requires average orbit instrumentation capable of performing over a wide range of beam intensity (10^9 to 6 x 10^13 charges) and accelerating frequency (1.7 MHz to 4.5 MHz). In addition, the system must be tolerant of dramatic changes in bunch shape, such as those occurring near transition. Reliability and maintenance issues preclude the use of active electronics within the high-radiation environment of the AGS tunnel, prompting the use of remote bunch signal processing. The upgrade for the AGS Average Orbit system is divided into three areas: (1) a new Pick Up Electrode (PUE) signal delivery system; (2) new average orbit processing electronics; and (3) centralized peripheral and data acquisition hardware. A distributed processing architecture was chosen to minimize the PUE signal cable lengths, the group of four from each detector location being phase matched to within ±5 degrees

  15. The Health Effects of Income Inequality: Averages and Disparities.

    Science.gov (United States)

    Truesdale, Beth C; Jencks, Christopher

    2016-01-01

    Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.

  16. Bounds on the average sensitivity of nested canalizing functions.

    Science.gov (United States)

    Klotz, Johannes Georg; Heckel, Reinhard; Schober, Steffen

    2013-01-01

    Nested canalizing Boolean functions (NCFs) play an important role in biologically motivated regulatory networks and in signal processing, in particular in describing stack filters. It has been conjectured that NCFs have a stabilizing effect on the network dynamics. It is well known that the average sensitivity plays a central role for the stability of (random) Boolean networks. Here we provide a tight upper bound on the average sensitivity of NCFs as a function of the number of relevant input variables. As conjectured in the literature, this bound is smaller than 4/3. This shows that a large number of functions appearing in biological networks belong to a class that has low average sensitivity, which is even close to a tight lower bound.
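
    For small n the average sensitivity in question can be computed by brute force; a minimal sketch with an illustrative nested canalizing function:

        import itertools

        def average_sensitivity(f, n):
            """Average over all inputs of the number of single-bit flips that change f."""
            total = 0
            for x in itertools.product((0, 1), repeat=n):
                fx = f(x)
                for i in range(n):
                    y = list(x)
                    y[i] ^= 1
                    total += fx != f(tuple(y))
            return total / 2 ** n

        # f = x1 AND (x2 OR x3), a nested canalizing function
        print(average_sensitivity(lambda x: x[0] & (x[1] | x[2]), 3))  # 1.25, below 4/3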

  18. Specification of optical components for a high average-power laser environment

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.; Chow, R.; Rinmdahl, K.A.; Willis, J.B.; Wong, J.N.

    1997-06-25

    Optical component specifications for the high-average-power lasers and transport system used in the Atomic Vapor Laser Isotope Separation (AVLIS) plant must address demanding system performance requirements. The need for high performance optics has to be balanced against the practical desire to reduce the supply risks of cost and schedule. This is addressed in optical system design, careful planning with the optical industry, demonstration of plant quality parts, qualification of optical suppliers and processes, comprehensive procedures for evaluation and test, and a plan for corrective action.

  19. On time-dependent perturbation theory in matrix mechanics and time averaging

    International Nuclear Information System (INIS)

    Casas, Fernando

    2015-01-01

    The time-dependent quantum perturbation theory developed by Born, Heisenberg and Jordan in 1926 is revisited. We show that it not only reproduces the standard theory formulated in the interaction picture, but also allows one to construct more accurate approximations if time averaging techniques are employed. The theory can be rendered unitary even if the expansion is truncated by using a transformation previously suggested by Heisenberg. We illustrate the main features of the procedure on a simple example which clearly shows its advantages in comparison with the standard perturbation theory. (paper)

  20. Assignment Procedure Biases in Randomised Policy Experiments

    DEFF Research Database (Denmark)

    Aldashev, Gani; Kirchsteiger, Georg; Sebald, Alexander

    2017-01-01

    's propensity to act reciprocally. When people are motivated by reciprocity, the choice of assignment procedure influences the RCTs’ findings. We show that even credible and explicit randomisation procedures do not guarantee an unbiased prediction of the impact of policy interventions; however, they minimise......Randomised controlled trials (RCT) have gained ground as the dominant tool for studying policy interventions in many fields of applied economics. We analyse theoretically encouragement and resentful demoralisation in RCTs and show that these might be rooted in the same behavioural trait – people...... any bias relative to other less transparent assignment procedures....

  1. The Role of the Harmonic Vector Average in Motion Integration

    Directory of Open Access Journals (Sweden)

    Alan Johnston

    2013-10-01

    Full Text Available The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The harmonic vector average, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the intersection of constraints direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the harmonic vector average.
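
    A hedged sketch, assuming the HVA is formed by inverting each local velocity (v -> v/|v|^2), averaging the inverted vectors, and inverting the mean back; for the normal components of a single global motion this recovers the true velocity:

        import numpy as np

        def harmonic_vector_average(vectors):
            """Invert each vector (v -> v/|v|^2), average, and invert the result back."""
            v = np.asarray(vectors, dtype=float)
            inv = v / np.sum(v ** 2, axis=1, keepdims=True)
            m = inv.mean(axis=0)
            return m / np.sum(m ** 2)

        theta = np.deg2rad(np.linspace(-60, 60, 5))
        normals = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        local = (normals @ np.array([1.0, 0.0]))[:, None] * normals  # normal parts of V=(1,0)
        print(harmonic_vector_average(local))  # ~[1, 0]: the true global velocity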

  2. Procedure for the measurement and processing of data in transverse computer tomography apparatus, and computer tomography apparatus with means for carrying out the procedure

    International Nuclear Information System (INIS)

    Lindquist, T.R.

    1978-01-01

    The radiation source and detectors in a computed tomography system translate and/or rotate with varying velocity profiles. Radiation transmission data is measured and sampled at a high rate, the sample points being equally spaced in the time domain. The data samples are then smoothed and interpolated using a high order polynomial fit, to provide input signals for an image reconstruction algorithm which are representative of transmission values at points which are evenly distributed in space. (Auth.)
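
    A minimal sketch of the resampling idea: fit a polynomial to samples taken uniformly in time and evaluate it at positions spaced uniformly in space (the fit order and any windowing used in the actual apparatus are not specified here):

        import numpy as np

        def resample_evenly_in_space(pos_t, val_t, n_out, deg=7):
            """Polynomial fit to (position, value) samples, evaluated on a uniform grid."""
            pos_t = np.asarray(pos_t, dtype=float)
            coeffs = np.polyfit(pos_t, np.asarray(val_t, dtype=float), deg)
            pos_out = np.linspace(pos_t.min(), pos_t.max(), n_out)
            return pos_out, np.polyval(coeffs, pos_out)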

  3. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.

  4. Bounce-averaged Fokker-Planck code for stellarator transport

    International Nuclear Information System (INIS)

    Mynick, H.E.; Hitchon, W.N.G.

    1985-07-01

    A computer code for solving the bounce-averaged Fokker-Planck equation appropriate to stellarator transport has been developed, and its first applications made. The code is much faster than the bounce-averaged Monte-Carlo codes, which up to now have provided the most efficient numerical means for studying stellarator transport. Moreover, because the connection to analytic kinetic theory of the Fokker-Planck approach is more direct than for the Monte-Carlo approach, a comparison of theory and numerical experiment is now possible at a considerably more detailed level than previously

  5. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords : Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  6. Non-self-averaging nucleation rate due to quenched disorder

    International Nuclear Information System (INIS)

    Sear, Richard P

    2012-01-01

    We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)

  7. What is 'applied' in 'applied' psychoanalysis?

    Science.gov (United States)

    Esman, A H

    1998-08-01

    The 'application' of psychoanalytic concepts and methods to the products of culture has occupied a somewhat ambiguous position, seen by some as a secondary, derivative, even dubious procedure, by others as a valuable and legitimate extrapolation of the basic principles. This paper argues that such 'applications' were integral to the early development of the field and that, indeed, many of Freud's basic ideas were derived from non-clinical (i.e. cultural) sources. The continuing impact of cultural forces on clinical concepts can be seen in the recent reformulations of our views on the psychology of women. Psychoanalysis is to be seen, therefore, as a constantly evolving system of propositions and hypotheses that are capable of 'application' and study in both clinical and extra-clinical settings. It is further argued that the continued development--even survival--of psychoanalysis requires the integration of its institutions and training facilities into the university system, permitting the free exchange of ideas across disciplines and a flexible educational structure that will encourage much-needed training in research as well as clinical methods. A brief illustration of the value of a psychoanalytic approach to the understanding of a specific work of art (Man Ray's painting 'Les Amoureux') is provided.

  8. Bariatric Surgery Procedures

    Science.gov (United States)


  9. The metric geometric mean transference and the problem of the average eye

    Directory of Open Access Journals (Sweden)

    W. F. Harris

    2008-12-01

    Full Text Available An average refractive error is readily obtained as an arithmetic average of refractive errors. But how does one characterize the first-order optical character of an average eye? Solutions have been offered, including via the exponential-mean-log transference. The exponential-mean-log transference appears to work well in practice but there is the niggling problem that the method does not work with all optical systems. Ideally one would like to be able to calculate an average for eyes in exactly the same way for all optical systems. This paper examines the potential of a relatively newly described mean, the metric geometric mean of positive definite (and, therefore, symmetric) matrices. We extend the definition of the metric geometric mean to matrices that are not symmetric and then apply it to ray transferences of optical systems. The metric geometric mean of two transferences is shown to satisfy the requirement that symplecticity be preserved. Numerical examples show that the mean seems to give a reasonable average for two eyes. Unfortunately, however, what seem reasonable generalizations to the mean of more than two eyes turn out not to be satisfactory in general. These generalizations do work well for thin systems. One concludes that, unless other generalizations can be found, the metric geometric mean suffers from more disadvantages than the exponential-mean-logarithm and has no advantages over it.
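
    For the standard case of two positive definite matrices the metric geometric mean has the closed form A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). The sketch below (example matrices made up) computes it and checks the identity det(A # B) = sqrt(det A · det B); the paper's extension to non-symmetric transferences is not attempted here.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def metric_geometric_mean(A, B):
    """A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    As = sqrtm(A)
    Asi = inv(As)
    # For positive definite inputs the result is real and symmetric.
    return np.real(As @ sqrtm(Asi @ B @ Asi) @ As)

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 0.8]])
M = metric_geometric_mean(A, B)
print(np.linalg.det(M))                              # matches ...
print(np.sqrt(np.linalg.det(A) * np.linalg.det(B)))  # ... this value
```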

  10. Reforming Russian Civil Procedure

    Directory of Open Access Journals (Sweden)

    Dmitry Maleshin

    2016-01-01

    Full Text Available The II Annual Symposium of the journal Herald of Civil Procedure ‘2015: The Civil Procedure 2.0: Reform and Current State’ took place on October 9, 2015, at the Faculty of Law of Kazan (Volga region) Federal University. The Symposium is now an established tradition for the University. In 2015 it brought together in Kazan eminent scholars of civil procedure from cities across the whole of Russia: Moscow, St. Petersburg, Saratov, Ekaterinburg, Omsk, Samara, Nizhnekamsk and others. This large-scale event attracted the attention not only of Russian scholars, but also of legal scholars from abroad: Elisabetta Silvestri (Professor, University of Pavia, Italy), William B. Simons (Professor, University of Tartu, Estonia), Jaroslav Turlukovsky (Professor, Warsaw University, Poland), Stuart H. Schultz (Practising Attorney, USA), and Irina Izarova (Associate Professor, Taras Shevchenko National University of Kyiv, Ukraine). The opening ceremony of the Symposium began with greetings to all participants and best wishes for productive discussions. Participants were welcomed with remarks by Marat Khairullin, Deputy Chair of the Supreme Court of the Republic of Tatarstan, Radik Ilyasov, Head of the Federal Bailiff Service of the Republic of Tatarstan, and Ildar Tarkhanov, Academic Supervisor at the Faculty of Law. They expressed their appreciation for the great value of the journal Herald of Civil Procedure in the growth of the science of civil procedure and enforcement procedure, and for its contributions to the development of the judicial system of the Russian Federation. In addition to hearing prepared reports and discussing viewpoints on current issues of civil and arbitration procedure, participants attended presentations by representatives of procedural law periodicals within the framework of the Symposium. The Editor-in-Chief of Herald of Civil Procedure, Damir Valeev, and the Commercial Director of the Statut Publishing House (Moscow), Kirill Samoilov, presented new

  11. Aromatherapy for reducing colonoscopy related procedural anxiety and physiological parameters: a randomized controlled study.

    Science.gov (United States)

    Hu, Pei-Hsin; Peng, Yen-Chun; Lin, Yu-Ting; Chang, Chi-Sen; Ou, Ming-Chiu

    2010-01-01

    Colonoscopy is generally well tolerated, but some patients regard the procedure as unpleasant and painful, and it is generally performed with the patient sedated and receiving analgesics. The effect of sedation and analgesia for colonoscopy is limited. Aromatherapy is also applied in gastrointestinal endoscopy to reduce procedural anxiety, but there is a lack of information about aromatherapy specific to colonoscopy. In this study, we aimed to perform a randomized controlled study to investigate the effect of aromatherapy on anxiety, stress and physiological parameters during colonoscopy. A randomized controlled trial was carried out, with data collected in 2009 and 2010. The participants were randomized into two groups. Aromatherapy was then carried out by inhalation of sunflower oil (control group) or neroli oil (experimental group). The anxiety index was evaluated by the State Trait Anxiety Inventory-state (STAI-S) score before aromatherapy and after colonoscopy, as well as the post-procedural pain index by visual analogue scale (VAS). Physiological indicators, such as blood pressure (systolic and diastolic), heart rate and respiratory rate, were evaluated before and after aromatherapy. Participants in this study were 27 subjects, 13 in the control group and 14 in the neroli group, with average age 52.26 +/- 17.79 years. There was no significant difference in procedural anxiety by STAI-S score or in procedural pain by VAS. The physiological parameters showed significantly lower pre- and post-procedural systolic blood pressure in the neroli group than in the control group. Aromatic care for colonoscopy, although with no significant effect on procedural anxiety, is an inexpensive, effective and safe pre-procedural technique that can decrease systolic blood pressure.

  12. 49 CFR 1111.9 - Procedural schedule in cases using simplified standards.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Procedural schedule in cases using simplified... PROCEDURES § 1111.9 Procedural schedule in cases using simplified standards. (a) Procedural schedule. Absent a specific order by the Board, the following general procedural schedules will apply in cases using...

  13. Play vs. Procedures

    DEFF Research Database (Denmark)

    Hammar, Emil

    Through the theories of play by Gadamer (2004) and Henricks (2006), I will show how the relationship between play and game can be understood as dialectic and disruptive, thus challenging understandings of how the procedures of games determine player activity and vice versa. As such, I posit some analytical consequences for understandings of digital games as procedurally fixed (Bogost, 2006; Flanagan, 2009; Brathwaite & Sharp, 2010). That is, if digital games are argued to be procedurally fixed and if play is an appropriative and dialectic activity, then it could be argued that the latter affects and alters the former, and vice versa. Consequently, if the appointed procedures of a game are no longer fixed and rigid in their conveyance of meaning, qua the appropriative and dissolving nature of play, then understandings of games as conveying a fixed meaning through their procedures are inadequate...

  14. Estimation of average bioburden values on flexible gastrointestinal ...

    African Journals Online (AJOL)

    Medhat Mohammed Anwar Hamed

    2014-06-21

    Jun 21, 2014 ... between models from the same manufacturer. However, all flexible endoscopes have the same basic components. Infections related to flexible endoscopic procedures are caused by either endogenous flora or exogenous microbes. The first major challenge of reprocessing is infection control, most epi-.

  15. Effect of temporal averaging of meteorological data on predictions of groundwater recharge

    Directory of Open Access Journals (Sweden)

    Batalha Marcia S.

    2018-06-01

    Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the time of averaging of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.
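
    The core effect can be sketched without HYDRUS-1D (all numbers hypothetical): block-averaging a strongly seasonal daily rainfall series preserves the annual total while removing the high-intensity days that drive deep infiltration.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 360
wet_season = np.arange(days) < 150                    # a 150-day wet season
daily = rng.gamma(0.3, 30.0, size=days) * wet_season  # hypothetical mm/day

def block_average(series, window):
    """Replace each block of `window` days by its mean value."""
    blocks = series.reshape(-1, window).mean(axis=1)
    return np.repeat(blocks, window)

for w in (1, 6, 30):                                  # daily ... ~monthly
    s = block_average(daily, w)
    # Total rainfall is unchanged, but intense (>20 mm/day) days vanish.
    print(w, round(s.sum()), int((s > 20.0).sum()))
```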

  16. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    Science.gov (United States)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  17. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
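
    For reference, the averaging weights in question are conventionally computed from the information-criterion differences, w_k ∝ exp(-ΔIC_k/2); a small sketch with made-up IC values shows how even a modest spread pushes one model's weight toward 100%.

```python
import numpy as np

def model_averaging_weights(ic):
    """Averaging weights from information criteria (AIC/AICc/BIC/KIC):
    w_k proportional to exp(-0.5 * (IC_k - IC_min))."""
    delta = np.asarray(ic, dtype=float) - np.min(ic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

print(model_averaging_weights([412.0, 447.0, 459.0]))  # ~[1, 0, 0]
print(model_averaging_weights([412.0, 414.5, 416.0]))  # weights spread out
```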

  18. 26 CFR 1.1301-1 - Averaging of farm income.

    Science.gov (United States)

    2010-04-01

    ... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...

  19. Error estimates in horocycle averages asymptotics: challenges from string theory

    NARCIS (Netherlands)

    Cardella, M.A.

    2010-01-01

    For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotic to the Riemann hypothesis. We study similar asymptotics, for modular functions with not that mild growing conditions, such as of polynomial growth and of exponential growth

  20. Average weighted receiving time in recursive weighted Koch networks

    Indian Academy of Sciences (India)

    https://www.ias.ac.in/article/fulltext/pram/086/06/1173-1182. Keywords. Weighted Koch network; recursive division method; average weighted receiving time. Abstract. Motivated by the empirical observation in airport networks and metabolic networks, we introduce the model of the recursive weighted Koch networks created ...

  1. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.

  2. Average Distance Travelled To School by Primary and Secondary ...

    African Journals Online (AJOL)

    This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...

  3. Determination of the average lifetime of bottom hadrons

    International Nuclear Information System (INIS)

    Althoff, M.; Braunschweig, W.; Kirschfink, F.J.; Martyn, H.U.; Rosskamp, P.; Schmitz, D.; Siebke, H.; Wallraff, W.; Hilger, E.; Kracht, T.; Krasemann, H.L.; Leu, P.; Lohrmann, E.; Pandoulas, D.; Poelz, G.; Poesnecker, K.U.; Duchovni, E.; Eisenberg, Y.; Karshon, U.; Mikenberg, G.; Mir, R.; Revel, D.; Shapira, A.; Baranko, G.; Caldwell, A.; Cherney, M.; Izen, J.M.; Mermikides, M.; Ritz, S.; Rudolph, G.; Strom, D.; Takashima, M.; Venkataramania, H.; Wicklund, E.; Wu, S.L.; Zobernig, G.

    1984-01-01

    We have determined the average lifetime of hadrons containing b quarks produced in e+e− annihilation to be τ_B = 1.83×10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes. (orig./HSI)

  4. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  5. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  6. Trend of Average Wages as Indicator of Hypothetical Money Illusion

    Directory of Open Access Journals (Sweden)

    Julian Daszkowski

    2010-06-01

    Full Text Available Before 1998, the definition of wages in Poland did not include the value of social security contributions. The changed definition creates a higher level of reported wages, but was expected not to influence take-home pay. Nevertheless, the trend of average wages, after a short period, has returned to its previous line. This effect is explained in terms of money illusion.

  7. Implications of Methodist clergies' average lifespan and missional ...

    African Journals Online (AJOL)

    We are born, we touch the lives of others, we die – and then we are remembered. For the purpose of this article, I have assessed from obituaries the average lifespan of the clergy (ministers) in the Methodist Church of South Africa (MCSA), who died between 2003 and 2014. These obituaries were published in the ...

  8. High Average Power UV Free Electron Laser Experiments At JLAB

    International Nuclear Information System (INIS)

    Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn

    2012-01-01

    Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.

  9. Investigation of average daily water consumption and its impact on ...

    African Journals Online (AJOL)

    Investigation of average daily water consumption and its impact on weight gain in captive common buzzards ( Buteo buteo ) in Greece. ... At the end of 24 hours, the left over water was carefully brought out and re-measured to determine the quantity the birds have consumed. A control was set with a ceramic bowl with same ...

  10. proposed average values of some engineering properties of palm ...

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... Coefficient of sliding friction of palm kernels. Gbadamosi [2] determined the coefficient of sliding friction of palm kernels using a bottomless four-sided container on an adjustable tilting surface of plywood, galvanized steel, and glass. The average values were 0.38, 0.45, and 0.44 for dura, tenera, and pisifera ...

  11. Speckle averaging system for laser raster-scan image projection

    Science.gov (United States)

    Tiszauer, Detlev H.; Hackel, Lloyd A.

    1998-03-17

    The viewers' perception of laser speckle in a laser-scanned image projection system is modified or eliminated by the addition of an optical deflection system that effectively presents a new speckle realization at each point on the viewing screen to each viewer for every scan across the field. The speckle averaging is accomplished without introduction of spurious imaging artifacts.

  12. Moving average rules as a source of market instability

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
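
    As a generic illustration (not the specific model of this paper), a moving-average rule converts the gap between the current price and its trailing mean into a chartist demand signal:

```python
import numpy as np

def ma_demand(prices, window):
    """Stylized chartist rule: +1 (buy) when the price is above its
    trailing moving average, -1 (sell) when it is below."""
    p = np.asarray(prices, dtype=float)
    ma = np.convolve(p, np.ones(window) / window, mode="valid")
    return np.sign(p[window - 1:] - ma)

# Example: a noisy upward drift mostly yields +1 signals.
rng = np.random.default_rng(3)
prices = 100.0 + np.cumsum(0.1 + rng.normal(0.0, 1.0, 250))
print(ma_demand(prices, 50)[:10])
```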

  13. On the average-case complexity of Shellsort

    NARCIS (Netherlands)

    Vitányi, P.

    We prove a lower bound expressed in the increment sequence on the average-case complexity of the number of inversions of Shellsort. This lower bound is sharp in every case where it could be checked. A special case of this lower bound yields the general Jiang-Li-Vitányi lower bound. We obtain new

  14. Environmental stresses can alleviate the average deleterious effect of mutations

    Directory of Open Access Journals (Sweden)

    Leibler Stanislas

    2003-05-01

    Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
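
    The growth-rate measurement behind such experiments reduces to a log-linear fit, assuming the bioluminescence signal is proportional to cell number (a hedged sketch with made-up readings):

```python
import numpy as np

def growth_rate(t_hours, signal):
    """Exponential growth rate (per hour) from a log-linear fit."""
    slope, _intercept = np.polyfit(t_hours, np.log(signal), 1)
    return slope

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
wild_type = 100.0 * np.exp(0.69 * t)   # hypothetical luminescence readings
mutant = 100.0 * np.exp(0.60 * t)

# Fitness effect of the mutation: negative when deleterious, and its
# magnitude may shrink (alleviation) or grow (aggravation) under stress.
s = growth_rate(t, mutant) / growth_rate(t, wild_type) - 1.0
print(s)  # ~ -0.13
```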

  15. Crystallographic extraction and averaging of data from small image areas

    NARCIS (Netherlands)

    Perkins, GA; Downing, KH; Glaeser, RM

    The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that

  16. Establishment of Average Body Measurement and the Development ...

    African Journals Online (AJOL)


    there is a change in their shape as well as in their size. This growth according to Aldrich ... Establishment of Average Body Measurement and the Development of Block Patterns for Pre-School Children. Igbo, C. A. (Ph.D). 62 ..... Poverty according to Igbo (2002) is one of the reasons for food insecurity. Inaccessibility and ...

  17. Maximum and average field strength in enclosed environments

    NARCIS (Netherlands)

    Leferink, Frank Bernardus Johannes

    2010-01-01

    Electromagnetic fields in large enclosed environments are reflected many times and cannot be predicted anymore using conventional models. The common approach is to compare such environments with highly reflecting reverberation chambers. The average field strength can easily be predicted using the

  18. arXiv Averaged Energy Conditions and Bouncing Universes

    CERN Document Server

    Giovannini, Massimo

    2017-11-16

    The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.

  19. Comparing averaging limits for social cues over space and time.

    Science.gov (United States)

    Florey, Joseph; Dakin, Steven C; Mareschal, Isabelle

    2017-08-01

    Observers are able to extract summary statistics from groups of faces, such as their mean emotion or identity. This can be done for faces presented simultaneously and also from sequences of faces presented at a fixed location. Equivalent noise analysis, which estimates an observer's internal noise (the uncertainty in judging a single element) and effective sample size (ESS; the effective number of elements being used to judge the average), reveals what limits an observer's averaging performance. It has recently been shown that observers have lower ESSs and higher internal noise for judging the mean gaze direction of a group of spatially distributed faces compared to the mean head direction of the same faces. In this study, we use the equivalent noise technique to compare limits on these two cues to social attention under two presentation conditions: spatially distributed and sequentially presented. We find that the differences in ESS are replicated in spatial arrays but disappear when both cue types are averaged over time, suggesting that limited peripheral gaze perception prevents accurate averaging performance. Correlation analysis across participants revealed generic limits for internal noise that may act across stimulus and presentation types, but no clear shared limits for ESS. This result supports the idea of some shared neural mechanisms in early stages of visual processing.
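
    The equivalent-noise model referred to here is commonly written threshold² = (σ_int² + σ_ext²)/N_eff; fitting it to thresholds measured at several external-noise levels recovers the internal noise and the effective sample size (data below hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise(sigma_ext, sigma_int, n_eff):
    """Predicted threshold when averaging ~n_eff noisy elements."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_eff)

sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])   # external noise (deg)
thresholds = np.array([1.1, 1.3, 2.0, 3.6, 7.1])   # hypothetical thresholds
(sigma_int, n_eff), _ = curve_fit(equivalent_noise, sigma_ext, thresholds,
                                  p0=(2.0, 4.0))
print(sigma_int, n_eff)
```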

  20. Modeling of Sokoto Daily Average Temperature: A Fractional ...

    African Journals Online (AJOL)

    Modeling of Sokoto Daily Average Temperature: A Fractional Integration Approach. ... extension of the class of ARIMA processes stemming from the Box and Jenkins methodology. One of their originalities is the explicit modeling of the long-term correlation structure (Diebolt and Guiraud, 2000). Autoregressive fractionally.

  1. Accuracy of averaged auditory brainstem response amplitude and latency estimates

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; M. Harte, James; Elberling, Claus

    2017-01-01

    Objective: The aims were to 1) establish which of the four algorithms for estimating residual noise level and signal-to-noise ratio (SNR) in auditory brainstem responses (ABRs) perform better in terms of post-average wave-V peak latency and amplitude errors and 2) determine whether SNR or noise...

  2. Domain-averaged Fermi-hole Analysis for Solids

    Czech Academy of Sciences Publication Activity Database

    Baranov, A.; Ponec, Robert; Kohout, M.

    2012-01-01

    Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012

  3. Significance of power average of sinusoidal and non-sinusoidal ...

    Indian Academy of Sciences (India)

    2016-06-08

    Jun 8, 2016 ... Corresponding author. E-mail: venkatesh.sprv@gmail.com ... of the total power average technique, one can say whether the chaos in that nonlinear system is to be suppressed or not. Keywords. Chaos; controlling .... the instantaneous values of power taken during one complete cycle T and is given as.

  4. 94 GHz High-Average-Power Broadband Amplifier

    National Research Council Canada - National Science Library

    Luhmann, Neville

    2003-01-01

    A state-of-the-art gyro-TWT amplifier operating in the low-loss TE01 mode has been developed with the objective of producing an average power of 140 kW in the W-Band with a predicted efficiency of 28%, 50 dB gain, and 5% bandwidth...

  5. Climate Prediction Center (CPC) Zonally Average 500 MB Temperature Anomalies

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is one of the CPC's Monthly Atmospheric and SST Indices. It is the 500-hPa temperature anomalies averaged over the latitude band 20°N–20°S. The anomalies are...

  6. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly; however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  7. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    guaranteed convergence with this simple algorithm. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. PACS Nos 89.75.Hc; 89.75.Fb; 89.20.Ff. 1. Introduction. Wireless sensor networks are increasingly used in many applications ranging from envi- ronmental to ...

  8. 40 CFR 63.150 - Emissions averaging provisions.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Organic Hazardous Air Pollutants From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.150 Emissions averaging...

  9. Calculation of average landslide frequency using climatic records

    Science.gov (United States)

    L. M. Reid

    1998-01-01

    Abstract - Aerial photographs are used to develop a relationship between the number of debris slides generated during a hydrologic event and the size of the event, and the long-term average debris-slide frequency is calculated from climate records using this relationship.

  10. Grade Point Average: What's Wrong and What's the Alternative?

    Science.gov (United States)

    Soh, Kay Cheng

    2011-01-01

    Grade point average (GPA) has been around for more than two centuries. However, it has created a lot of confusion, frustration, and anxiety to GPA-producers and users alike, especially when used across-nation for different purposes. This paper looks into the reasons for such a state of affairs from the perspective of educational measurement. It…

  11. The Effect of Honors Courses on Grade Point Averages

    Science.gov (United States)

    Spisak, Art L.; Squires, Suzanne Carter

    2016-01-01

    High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges; they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…

  12. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    Full Text Available The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both used by official state statistics of the Russian Federation and offered by modern researchers.The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index.The information base of the research is laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», as well as materials of scientific papers, describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of research. In the process of conducting the research, the following methods were used: analytical, statistical, calculated-mathematical and graphical.The main result of the research is an option of supplementing the method of calculating average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions of a wide range of organizations, when an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, the situation is frequent when the average salary at the enterprise is difficult to assess objectively because it consists of calculating multiple rates per staff member. In other words, the average salary of

  13. 75 FR 5170 - Railroad Cost Recovery Procedures-Productivity Adjustment

    Science.gov (United States)

    2010-02-01

    ...)] Railroad Cost Recovery Procedures--Productivity Adjustment AGENCY: Surface Transportation Board, DOT. ACTION: Proposed Railroad Cost Recovery Procedures Productivity Adjustment. SUMMARY: In a decision served... railroad productivity for the 2004-2008 (5-year) averaging period. This is a decline of 0.5 of a percentage...

  14. High average power diode pumped solid state lasers for CALIOPE

    International Nuclear Information System (INIS)

    Comaskey, B.; Halpin, J.; Moran, B.

    1994-07-01

    Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers

  15. Numerical simulation in applied geophysics

    CERN Document Server

    Santos, Juan Enrique

    2016-01-01

    This book presents the theory of wave propagation in a fluid-saturated porous medium (a Biot medium) and its application in Applied Geophysics. In particular, a derivation of absorbing boundary conditions in viscoelastic and poroelastic media is presented, which later is employed in the applications. The partial differential equations describing the propagation of waves in Biot media are solved using the Finite Element Method (FEM). Waves propagating in a Biot medium suffer attenuation and dispersion effects. In particular, the fast compressional and shear waves are converted to slow diffusion-type waves at mesoscopic-scale heterogeneities (on the order of centimeters), an effect usually occurring in the seismic range of frequencies. In some cases, a Biot medium presents a dense set of fractures oriented in preferred directions. When the average distance between fractures is much smaller than the wavelengths of the travelling fast compressional and shear waves, the medium behaves as an effective viscoelastic an...

  16. Reforming Russian civil Procedure

    OpenAIRE

    MALESHIN DMITRY; SILVESTRI ELISABETTA; SITDIKOV RUSLAN; VALEEV DAMIR

    2016-01-01

    The II Annual Symposium of the journal Herald of Civil Procedure ‘2015: The Civil Procedure 2.0: Reform and Current State’ took place on October 9, 2015, at the Faculty of Law of Kazan (Volga region) Federal University. The Symposium is now an established tradition for the University. In 2015 it brought together in Kazan eminent scholars of civil procedure from cities across the whole of Russia: Moscow, St. Petersburg, Saratov, Ekaterinburg, Omsk, Samara, Nizhnekamsk and others. This large-sca...

  17. Reforming Russian Civil Procedure

    OpenAIRE

    Dmitry Maleshin; Elisabetta Silvestri; Ruslan Sitdikov; Damir Valeev

    2016-01-01

    The II Annual Symposium of the journal Herald of Civil Procedure ‘2015: The Civil Procedure 2.0: Reform and Current State’ took place on October 9, 2015, at the Faculty of Law of Kazan (Volga region) Federal University.The Symposium is now an established tradition for the University. In 2015 it brought together in Kazan eminent scholars of civil procedure from cities across the whole of Russia: Moscow, St. Petersburg, Saratov, Ekaterinburg, Omsk, Samara, Nizhnekamsk and others. This large-sca...

  18. Law of procedure

    International Nuclear Information System (INIS)

    Witt, S. de.

    1984-01-01

    The real protection of fundamental rights of the population does not only depend on the substantive concretization in the atomic energy law but also on its procedural shaping. The more the citizens are burdened by governmental decisions the more decidedly it is requested not only by the principle of democracy, but also by the principle of law, that the parties concerned participate intensively in the procedure. In this second contribution De Witt describes the atomic energy licensing procedure and compares it with this claim. (orig./HSCH) [de

  19. Applied eye tracking research

    NARCIS (Netherlands)

    Jarodzka, Halszka

    2011-01-01

    Jarodzka, H. (2010, 12 November). Applied eye tracking research. Presentation and Labtour for Vereniging Gewone Leden in oprichting (VGL i.o.), Heerlen, The Netherlands: Open University of the Netherlands.

  20. Applied Mathematics Seminar 1982

    International Nuclear Information System (INIS)

    1983-01-01

    This report contains the abstracts of the lectures delivered at the 1982 Applied Mathematics Seminar of the DPD/LCC/CNPq and the Colloquy on Applied Mathematics of the LCC/CNPq. The Seminar comprised 36 conferences. Among these, 30 were presented by researchers affiliated with Brazilian institutions, 9 of them with the LCC/CNPq, and the other 6 were given by visiting lecturers, with the following distribution: 4 from the USA, 1 from England and 1 from Venezuela. The 1981 Applied Mathematics Seminar was organized by Leon R. Sinay and Nelson do Valle Silva. The Colloquy on Applied Mathematics was held from October 1982 on, being organized by Ricardo S. Kubrusly and Leon R. Sinay. (Author) [pt

  1. Applied Learning Networks (ALN)

    National Research Council Canada - National Science Library

    Bannister, Joseph; Shen, Wei-Min; Touch, Joseph; Hou, Feili; Pingali, Venkata

    2007-01-01

    Applied Learning Networks (ALN) demonstrates that a network protocol can learn to improve its performance over time, showing how to incorporate learning methods into a general class of network protocols...

  2. Mesothelioma Applied Research Foundation

    Science.gov (United States)


  3. Vibrational Averaging of the Isotropic Hyperfine Coupling Constants for the Methyl Radical

    Science.gov (United States)

    Adam, Ahmad; Jensen, Per; Yachmenev, Andrey; Yurchenko, Sergei N.

    2014-06-01

    Electronic contributions to molecular properties are often considered as the major factor and usually reported in the literature without ro-vibrational corrections. However, there are many cases where the nuclear motion contributions are significant and even larger than the electronic contribution. In order to obtain accurate theoretical predictions, nuclear motion effects on molecular properties need to be taken into account. The computed isotropic hyperfine coupling constants for the nonvibrating methyl radical CH_3 are far from the experimental values. For CH_3, we have calculated the vibrational-state-dependence of the isotropic hyperfine coupling constant in the electronic ground state. The vibrational wavefunctions used in the averaging procedure were obtained variationally with the TROVE program. Analytical representations for the potential energy surfaces and the hyperfine coupling constant surfaces are obtained in least-squares fitting procedures. Thermal averaging has been carried out for molecules in thermal equilibrium, i.e., with Boltzmann-distributed populations. The calculation methods and the results will be discussed in detail.
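
    The thermal averaging step mentioned at the end is a Boltzmann-weighted sum over vibrational states; the sketch below uses purely illustrative term values and state-dependent constants, not the computed CH3 results.

```python
import numpy as np

K_B = 0.695035  # Boltzmann constant in cm^-1 per kelvin

def thermal_average(values, term_values_cm1, T):
    """Boltzmann-weighted average over vibrational states
    (degeneracy factors omitted for brevity)."""
    e = np.asarray(term_values_cm1, dtype=float)
    w = np.exp(-(e - e.min()) / (K_B * T))
    return float(np.dot(values, w) / w.sum())

a_iso = np.array([64.0, 68.5, 70.2])  # illustrative state-dependent values (MHz)
nu = np.array([0.0, 600.0, 1400.0])   # illustrative term values (cm^-1)
print(thermal_average(a_iso, nu, 300.0))
```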

  4. Hybrid Large-Eddy/Reynolds-Averaged Simulation of a Supersonic Cavity Using VULCAN

    Science.gov (United States)

    Quinlan, Jesse; McDaniel, James; Baurle, Robert A.

    2013-01-01

    Simulations of a supersonic recessed-cavity flow are performed using a hybrid large-eddy/Reynolds-averaged simulation approach utilizing an inflow turbulence recycling procedure and hybridized inviscid flux scheme. Calorically perfect air enters a three-dimensional domain at a free stream Mach number of 2.92. Simulations are performed to assess grid sensitivity of the solution, efficacy of the turbulence recycling, and the effect of the shock sensor used with the hybridized inviscid flux scheme. Analysis of the turbulent boundary layer upstream of the rearward-facing step for each case indicates excellent agreement with theoretical predictions. Mean velocity and pressure results are compared to Reynolds-averaged simulations and experimental data for each case and indicate good agreement on the finest grid. Simulations are repeated on a coarsened grid, and results indicate strong grid density sensitivity. Simulations are performed with and without inflow turbulence recycling on the coarse grid to isolate the effect of the recycling procedure, which is demonstrably critical to capturing the relevant shear layer dynamics. Shock sensor formulations of Ducros and Larsson are found to predict mean flow statistics equally well.

  5. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible.* Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods* Covers the latest developments on multiple comparisons * Includes recent advanc

  6. Computer and Applied Ethics

    OpenAIRE

    越智, 貢

    2014-01-01

    With this essay I treat some problems raised by new developments in science and technology, namely those concerning Computer Ethics, to show how and how far Applied Ethics differs from traditional ethics. I take up the backgrounds on which Computer Ethics rests, particularly the historical conditions of morality. Differences of conditions in time and space explain how Computer Ethics and Applied Ethics are unlike any traditional ethics in concrete cases. But I also investigate the normative rea...

  7. Is average daily travel time expenditure constant? In search of explanations for an increase in average travel time.

    NARCIS (Netherlands)

    van Wee, B.; Rietveld, P.; Meurs, H.

    2006-01-01

    Recent research suggests that the average time spent travelling by the Dutch population has increased over the past decades. However, different data sources show different levels of increase. This paper explores possible causes for this increase. They include a rise in incomes, which has probably

  8. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... section; or (ii) For alcohol-fueled model types, the fuel economy value calculated for that model type in...) For alcohol dual fuel model types, for model years 1993 through 2019, the harmonic average of the... combined model type fuel economy value for operation on alcohol fuel as determined in § 600.208-12(b)(5)(ii...
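
    The harmonic averaging prescribed here amounts to a production-weighted harmonic mean (total vehicles divided by total fuel use per unit distance); a minimal sketch with hypothetical volumes and fuel economy values:

```python
def harmonic_average_mpg(volumes, mpg):
    """Production-weighted harmonic mean fuel economy: total vehicles
    divided by the summed gallons-per-mile contributions."""
    gallons_per_mile = sum(n / fe for n, fe in zip(volumes, mpg))
    return sum(volumes) / gallons_per_mile

# 60,000 vehicles at 30 mpg and 40,000 vehicles at 20 mpg:
print(harmonic_average_mpg([60000, 40000], [30.0, 20.0]))  # 25.0 mpg
```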

  9. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than +/-30 microm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  10. Multiple-scale stochastic processes: Decimation, averaging and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)

    2017-02-07

    The recent experimental progresses in handling microscopic systems have allowed to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For an efficient modeling one usually focuses on the slower degrees of freedom and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires to perform two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories such as residence times, counting statistics, fluxes, entropy production, etc. which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we will pedagogically present them here, as natural extensions of the ones employed for the trajectories. We will also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.

  11. Soil Sampling Operating Procedure

    Science.gov (United States)

    EPA Region 4 Science and Ecosystem Support Division (SESD) document that describes general and specific procedures, methods, and considerations when collecting soil samples for field screening or laboratory analysis.

  12. Cardiac Procedures and Surgeries

    Science.gov (United States)

    What the Procedure Does: A stent is a wire mesh tube used to prop open an artery during ...

  13. Cosmetic Procedure Questions

    Science.gov (United States)

    Want to look younger? Start ...

  14. Dynamic alarm response procedures

    International Nuclear Information System (INIS)

    Martin, J.; Gordon, P.; Fitch, K.

    2006-01-01

    The Dynamic Alarm Response Procedure (DARP) system provides a robust, Web-based alternative to existing hard-copy alarm response procedures. This paperless system improves performance by eliminating the time wasted looking up paper procedures by number and reading plant process values and equipment and component status from graphical displays or panels, and it simplifies maintenance of the procedures. Because it is a Web-based system, it is platform independent. DARPs can be served from any Web server that supports CGI scripting, such as Apache®, IIS®, TclHTTPD, and others. DARP pages can be viewed in any Web browser that supports JavaScript and Scalable Vector Graphics (SVG), such as Netscape®, Microsoft Internet Explorer®, Mozilla Firefox®, Opera®, and others. (authors)
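
    The paper does not reproduce the DARP implementation, but the serving pattern it describes (an alarm ID passed in the query string, with the response procedure and live process values rendered server-side) can be sketched as a CGI script. The alarm table and read_point() data source below are hypothetical stand-ins, not part of the DARP system itself:

```python
#!/usr/bin/env python3
# Minimal CGI-style sketch of a dynamic alarm response page: the alarm ID
# arrives in the query string, and the page embeds a live process value.
import os
from urllib.parse import parse_qs

ALARMS = {  # hypothetical alarm database: ID -> (title, response steps, process point)
    "RC-101": ("RCS Pressure High", ["Verify pressurizer spray", "Check PORV status"], "PT-455"),
}

def read_point(point_id: str) -> float:
    """Stand-in for a live plant-data interface."""
    return 15.2  # dummy value, MPa

alarm_id = parse_qs(os.environ.get("QUERY_STRING", "")).get("alarm", [""])[0]
title, steps, point = ALARMS.get(alarm_id, ("Unknown alarm", [], ""))

print("Content-Type: text/html\r\n\r\n", end="")
print(f"<html><body><h1>{alarm_id}: {title}</h1>")
if point:
    print(f"<p>{point} = {read_point(point):.1f} MPa</p>")
print("<ol>" + "".join(f"<li>{s}</li>" for s in steps) + "</ol></body></html>")
```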

  15. Assisted Medical Procedures (AMP)

    Data.gov (United States)

    National Aeronautics and Space Administration — DOCUMENTATION, DEVELOPMENT, AND PROGRESS: The AMP was initially being developed as part of the Advanced Integrated Clinical System (AICS)-Guided Medical Procedure System...

  16. Alternative Refractive Surgery Procedures

    Science.gov (United States)

    ... the epithelial cells. Once the epithelial flap is created and moved aside, the procedure is the same ...

  17. EML procedures manual

    International Nuclear Information System (INIS)

    Volchok, H.L.; de Planque, G.

    1982-01-01

    This manual contains the procedures currently used by the Environmental Measurements Laboratory of the US Department of Energy. In addition, a number of analytical methods from other laboratories have been included; these were tested for reliability at the Battelle Pacific Northwest Laboratory under contract with the Division of Biomedical and Environmental Research of the AEC, and they are clearly distinguished. The manual is prepared in loose-leaf form to facilitate revision of the procedures and inclusion of additional procedures or data sheets. Anyone receiving the manual through EML should receive this additional material automatically. The contents are as follows: (1) general; (2) sampling; (3) field measurements; (4) general analytical chemistry; (5) chemical procedures; (6) data section; (7) specifications.

  18. Canalith Repositioning Procedure

    Science.gov (United States)

    The canalith repositioning procedure can help relieve benign paroxysmal positional vertigo (BPPV), a condition in which you have brief, but ... the inner ear responsible for balance (vestibular labyrinth). BPPV occurs when tiny particles called otoconia in one ...

  19. SEASONAL AVERAGE FLOW IN RÂUL NEGRU HYDROGRAPHIC BASIN

    Directory of Open Access Journals (Sweden)

    VIGH MELINDA

    2015-03-01

    Full Text Available The Râul Negru hydrographic basin is a well-individualised physical-geographical unit inside the Braşov Depression. The flow is monitored by six hydrometric stations placed on the main river and on two important tributaries. The database for the seasonal flow analysis contains the discharges from 1950-2012. The results show significant space-time differences between the multiannual seasonal averages, and comparing high-flow and low-flow periods yields some interesting conclusions. The flow analysis was carried out using seasonal charts Q = f(T). The similarities between stations stem from the basin's relative homogeneity, the differences from the flow's evolution and trend. Flow variation is analysed using the coefficient of variation (Cv); in some cases, significant differences between Cv values appear. Cv value trends are also analysed as a function of the basins' average altitude.
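
    The statistics involved (multiannual seasonal mean discharge and the Cv of the seasonal averages across years) can be sketched as follows; the study's data layout is not specified, so the CSV columns and season mapping below are assumptions (December is assigned to the same calendar year's winter for simplicity):

```python
# Multiannual seasonal mean discharge and coefficient of variation from a
# daily discharge series. Assumed input: discharge.csv with columns date, Q.
import pandas as pd

SEASONS = {12: "winter", 1: "winter", 2: "winter", 3: "spring", 4: "spring",
           5: "spring", 6: "summer", 7: "summer", 8: "summer",
           9: "autumn", 10: "autumn", 11: "autumn"}

df = pd.read_csv("discharge.csv", parse_dates=["date"])   # Q in m^3/s
df["season"] = df["date"].dt.month.map(SEASONS)
df["year"] = df["date"].dt.year

# Seasonal average for each year, then multiannual mean and Cv per season
seasonal = df.groupby(["year", "season"])["Q"].mean().unstack()
stats = pd.DataFrame({"mean_Q": seasonal.mean(),
                      "Cv": seasonal.std() / seasonal.mean()})
print(stats)
```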

  20. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average-power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from the thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both the pump and idler waves is identified as contributing significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed, and mechanical tensile stress of up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating, we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.
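
    The scale of the reported heating can be sanity-checked with the textbook steady-state result for a side-cooled cylinder with uniform volumetric heating, ΔT = P_abs/(4πκL), which is independent of the crystal radius. The inputs below are illustrative, not the Letter's parameters:

```python
# Center-to-edge temperature rise for uniform heating of a side-cooled rod:
# delta_T = P_abs / (4 * pi * k * L). All numbers below are assumed values.
import math

k_bbo = 1.2      # W/(m K), approximate thermal conductivity of BBO
length = 2e-3    # m, crystal length (assumed)
p_abs = 4.0      # W, total absorbed pump + idler power (assumed)

delta_t = p_abs / (4 * math.pi * k_bbo * length)
print(f"center-to-edge temperature rise ~ {delta_t:.0f} K")   # ~130 K here
```

    With a few watts absorbed in a millimeter-scale BBO crystal, rises well above 100 K come out of this estimate, consistent in magnitude with the 148 K reported above.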