WorldWideScience

Sample records for stride estimation chiraz

  1. Mobile Stride Length Estimation With Deep Convolutional Neural Networks.

    Science.gov (United States)

    Hannink, Julius; Kautz, Thomas; Pasluosta, Cristian F; Barth, Jens; Schülein, Samuel; Gaßmann, Karl-Günter; Klucken, Jochen; Eskofier, Bjoern M

    2018-03-01

    Accurate estimation of spatial gait characteristics is critical to assess motor impairments resulting from neurological or musculoskeletal disease. Currently, however, methodological constraints limit the clinical applicability of state-of-the-art double-integration approaches to gait patterns with a clear zero-velocity phase. We describe a novel approach to stride length estimation that uses deep convolutional neural networks to map stride-specific inertial sensor data to the resulting stride length. The model is trained on a publicly available and clinically relevant benchmark dataset consisting of 1220 strides from 101 geriatric patients. Evaluation is done in a tenfold cross-validation and for three different stride definitions. Even though the best results are achieved with strides defined from midstance to midstance, with average accuracy and precision of …, performance does not strongly depend on stride definition. The achieved precision outperforms state-of-the-art methods evaluated on the same benchmark dataset by …. Because the method is independent of stride definition, it is not subject to the methodological constraints that limit the applicability of state-of-the-art double-integration methods. Furthermore, it was possible to improve precision on the benchmark dataset. With more precise mobile stride length estimation, new insights into the progression of neurological disease or early indications might be gained. Due to this independence of stride definition, diseases previously uncharted in terms of mobile gait analysis can now be investigated by retraining and applying the proposed method.
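The double-integration baseline that this abstract contrasts against can be sketched in a few lines: integrate forward acceleration twice over one stride, using the assumed zero-velocity boundaries to remove integration drift. The segmentation, axis alignment, and linear de-drifting scheme below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def stride_length_double_integration(accel_forward, fs):
    """Estimate stride length by double-integrating forward acceleration.

    Assumes the stride is segmented so that velocity is zero at both
    boundaries (the 'clear zero-velocity phase' the abstract mentions).
    accel_forward: 1-D array of forward acceleration (m/s^2) for one stride.
    fs: sampling frequency in Hz.
    """
    dt = 1.0 / fs
    # First integration: acceleration -> velocity.
    velocity = np.cumsum(accel_forward) * dt
    # Linear de-drift: force velocity back to zero at the stride end,
    # the standard correction when zero-velocity phases bound the stride.
    velocity -= np.linspace(0.0, velocity[-1], len(velocity))
    # Second integration: velocity -> displacement; stride length is the total.
    return float(np.sum(velocity) * dt)

# Synthetic stride: a sinusoidal acceleration burst over one second.
fs = 100
t = np.linspace(0.0, 1.0, fs, endpoint=False)
accel = np.sin(2 * np.pi * t)  # integrates to a bell-shaped velocity profile
length = stride_length_double_integration(accel, fs)
```

The CNN approach in the paper replaces this whole pipeline with a learned mapping, which is why it is not tied to the zero-velocity assumption above.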

  2. A comparative analysis of spectral exponent estimation techniques for 1/f(β) processes with applications to the analysis of stride interval time series.

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal parameter outcomes that exhibit self-similarity over long time scales, following a power law in the frequency spectrum S(f)=1/f(β). The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
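As a rough illustration of what such estimators do, β can be read off as the negative slope of the periodogram in log-log coordinates. This naive spectral fit is only a stand-in for the averaged wavelet coefficient and detrended fluctuation methods compared in the paper; the function and tolerances are illustrative:

```python
import numpy as np

def spectral_exponent(x):
    """Estimate the scaling exponent beta of a 1/f^beta process from the
    slope of its periodogram in log-log coordinates."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    # Skip the DC bin; fit log(PSD) = -beta * log(f) + c.
    slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
    return -slope

rng = np.random.default_rng(0)
# White noise has a flat spectrum, so beta should be near 0.
beta_white = spectral_exponent(rng.standard_normal(2**14))
# A random walk (integrated white noise) has beta near 2.
beta_walk = spectral_exponent(np.cumsum(rng.standard_normal(2**14)))
```

The large scatter of individual periodogram ordinates is one reason the paper compares more robust estimators for short, biased physiological series.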

  3. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    Science.gov (United States)

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal parameter outcomes that exhibit self-similarity over long time scales, following a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques, with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509

  4. Examination of the gait pattern based on adjusting and resulting components of the stride-to-stride variability

    DEFF Research Database (Denmark)

    Laessoe, Uffe; Jensen, Niels Martin Brix; Madeleine, Pascal

    2017-01-01

    Stride-to-stride variability may be used as an indicator in the assessment of gait performance, but the evaluation of this parameter is not trivial. In the gait pattern, a deviation in one stride must be corrected within the next strides (elemental variables) to ensure a steady gait (performance variable). ... 0.5 to 2 strides with 0.5 stride increments. The time lag values corresponded to the following contralateral stride, the following ipsilateral stride, the second following contralateral stride and the second following ipsilateral stride. ...

  5. Stride length: measuring its instantaneous value

    International Nuclear Information System (INIS)

    Campiglio, G C; Mazzeo, J R

    2007-01-01

    Human gait has been studied from different viewpoints: kinematics, dynamics, sensibility and others. Many of its characteristics still remain open to research, both for normal and for pathological gait. Objective measures of some of its most significant spatiotemporal parameters are important in this context. Stride length, one of these parameters, is defined as the distance between two consecutive contacts of one foot with the ground. In this work we present a device designed to provide automatic measures of stride length. Its features make it particularly appropriate for the evaluation of pathological gait.

  6. Optimal stride frequencies in running at different speeds.

    Directory of Open Access Journals (Sweden)

    Ben T van Oeveren

    Full Text Available During running at a constant speed, the optimal stride frequency (SF) can be derived from the U-shaped relationship between SF and heart rate (HR). Changing SF towards the optimum of this relationship is beneficial for energy expenditure and may positively change the biomechanics of running. In the current study, the effects of speed on the optimal SF and the nature of the U-shaped relation were empirically tested using Generalized Estimating Equations. To this end, HR was recorded from twelve healthy, inexperienced runners (4 males, 8 females), who completed runs at three speeds. The three speeds were 90%, 100% and 110% of self-selected speed. A self-selected SF (SFself) was determined for each of the speeds prior to the speed series. The speed series started with a free-chosen SF condition, followed by five imposed SF conditions (SFself, 70, 80, 90, 100 strides·min-1) assigned in random order. The conditions lasted 3 minutes with 2.5 minutes of walking in between. SFself increased significantly (p<0.05) with speed, with averages of 77, 79 and 80 strides·min-1 at 2.4, 2.6 and 2.9 m·s-1, respectively. As expected, the relation between SF and HR could be described by a parabolic curve for all speeds. Speed did not significantly affect the curvature, nor did it affect optimal SF. We conclude that, over the speed range tested, inexperienced runners may not need to adapt their SF to running speed. However, since SFself was lower than the SFopt of 83 strides·min-1, the runners could reduce HR by increasing their SFself.
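The U-shaped SF-HR relation described above suggests a simple way to locate the optimum: fit a parabola to HR as a function of SF and take its vertex. The sketch below uses hypothetical data and plain least squares, not the study's Generalized Estimating Equations:

```python
import numpy as np

def optimal_stride_frequency(sf, hr):
    """Fit the U-shaped HR-vs-SF relation with a parabola hr = a*sf^2 + b*sf + c
    and return the stride frequency at its minimum (vertex at -b / 2a)."""
    a, b, c = np.polyfit(sf, hr, 2)
    return -b / (2.0 * a)

# Hypothetical measurements with a minimum heart rate at 83 strides/min.
sf = np.array([70.0, 75.0, 80.0, 85.0, 90.0, 95.0, 100.0])
hr = 140.0 + 0.05 * (sf - 83.0) ** 2
sf_opt = optimal_stride_frequency(sf, hr)
```

With real HR data the measurements are noisy and repeated within subjects, which is why the study needed a model accounting for within-subject correlation rather than a single curve fit.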

  7. Stride rate and walking intensity in healthy older adults.

    Science.gov (United States)

    Peacock, Leslie; Hewitt, Allan; Rowe, David A; Sutherland, Rona

    2014-04-01

    The study investigated (a) walking intensity (stride rate and energy expenditure) under three speed instructions; (b) associations between stride rate, age, height, and walking intensity; and (c) synchronization between stride rate and music tempo during overground walking in a population of healthy older adults. Twenty-nine participants completed 3 treadmill-walking trials and 3 overground-walking trials at 3 self-selected speeds. Treadmill VO2 was measured using indirect calorimetry. Stride rate and music tempo were recorded during overground-walking trials. Mean stride rate exceeded minimum thresholds for moderate to vigorous physical activity (MVPA) under slow (111.41 ± 11.93), medium (118.17 ± 11.43), and fast (123.79 ± 11.61) instructions. A multilevel model showed that stride rate, age, and height have a significant effect (p …). … Music can be a useful way to guide walking cadence.

  8. Persistent fluctuations in stride intervals under fractal auditory stimulation

    NARCIS (Netherlands)

    Marmelat, V.C.M.; Torre, K.; Beek, P.J.; Daffertshofer, A.

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. ...

  9. Stride time synergy in relation to walking during dual task

    DEFF Research Database (Denmark)

    Læssøe, Uffe; Madeleine, Pascal

    2012-01-01

    ... point of view, elemental and performance variables may represent good and bad components of variability [2]. In this study we propose that the gait pattern can be seen as an ongoing movement synergy in which each stride is corrected by the next stride (elemental variables) to ensure a steady gait (performance variable). AIM: The aim of this study was to evaluate stride time synergy and to identify good and bad stride variability in relation to walking during dual task. METHODS: Thirteen healthy young participants walked along a 2x5 meter figure-of-eight track at a self-selected comfortable speed ... with a positive slope going through the mean of the strides, and bad variance with respect to a similar line with a negative slope. The general variance coefficient (CV%) was also computed. The effect of introducing a concurrent cognitive task (dual task: counting backwards in sequences of 7) was evaluated ...

  10. STRIDE: Species Tree Root Inference from Gene Duplication Events.

    Science.gov (United States)

    Emms, David M; Kelly, Steven

    2017-12-01

    The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. A Comparative Analysis of Selected Mechanical Aspects of the Ice Skating Stride.

    Science.gov (United States)

    Marino, G. Wayne

    This study quantitatively analyzes selected aspects of the skating strides of above-average and below-average ability skaters. Subproblems were to determine how stride length and stride rate are affected by changes in skating velocity, to ascertain whether the basic assumption that stride length accurately approximates horizontal movement of the…

  12. Knee Angle and Stride Length in Association with Ball Speed in Youth Baseball Pitchers

    Directory of Open Access Journals (Sweden)

    Bart van Trigt

    2018-05-01

    Full Text Available The purpose of this study was to determine whether stride length and knee angle of the leading leg at foot contact, at the instant of maximal external rotation of the shoulder, and at ball release are associated with ball speed in elite youth baseball pitchers. In this study, fifty-two elite youth baseball pitchers (mean age 15.2, standard deviation 1.7 years) pitched ten fastballs. Data were collected with three high-speed video cameras at a frequency of 240 Hz. Stride length and knee angle of the leading leg were calculated at foot contact, maximal external rotation, and ball release. The associations between these kinematic variables and ball speed were separately determined using generalized estimating equations. Stride length as a percentage of body height and knee angle at foot contact were not significantly associated with ball speed. However, knee angles at maximal external rotation and ball release were significantly associated with ball speed. Ball speed increased by 0.45 m/s (1 mph) with an increase in knee extension of 18 degrees at maximal external rotation and 19.5 degrees at ball release. In conclusion, more knee extension of the leading leg at maximal external rotation and ball release is associated with higher ball speeds in elite youth baseball pitchers.

  13. Optimal stride frequencies in running at different speeds

    NARCIS (Netherlands)

    Van Oeveren, Ben T.; De Ruiter, Cornelis J.; Beek, Peter J.; Van Dieën, Jaap H.

    2017-01-01

    During running at a constant speed, the optimal stride frequency (SF) can be derived from the u-shaped relationship between SF and heart rate (HR). Changing SF towards the optimum of this relationship is beneficial for energy expenditure and may positively change biomechanics of running. In the current study, the effects of speed on the optimal SF and the nature of the u-shaped relation were empirically tested using Generalized Estimating Equations. ...

  14. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Directory of Open Access Journals (Sweden)

    Vivien Marmelat

    Full Text Available Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.

  15. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
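A fractal metronome like the one described above needs inter-beat intervals with a prescribed scaling exponent. One standard way to generate such a series is spectral synthesis: shape random phases with a 1/f^β amplitude spectrum, then scale to the desired mean interval and coefficient of variation. All parameter values below are illustrative, not those used in the study:

```python
import numpy as np

def fractal_intervals(n, beta, mean_ibi=1.0, cv=0.02, seed=0):
    """Generate n inter-beat intervals whose fluctuations follow a
    1/f^beta spectrum, via inverse-FFT spectral synthesis."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)       # amplitude ~ f^(-beta/2), PSD ~ 1/f^beta
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    x = (x - x.mean()) / x.std()               # zero mean, unit variance
    return mean_ibi * (1.0 + cv * x)           # scale to target mean and CV

# 1024 intervals around 1 s with pink (beta = 1) fluctuations.
ibi = fractal_intervals(1024, beta=1.0)
```

Fixing the coefficient of variation while varying β, as in Experiment 2, corresponds here to holding `cv` constant and changing only the spectral shaping.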

  16. Impact of stride-coupled gaze shifts of walking blowflies on the neuronal representation of visual targets

    Directory of Open Access Journals (Sweden)

    Daniel Kress

    2014-09-01

    Full Text Available During locomotion animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride-cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion sensitive wide-field neuron in the blowfly visual system. This cell has been concluded to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell’s response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is much impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride-coupled image shifts …

  17. Stride length: the impact on propulsion and bracing ground reaction force in overhand throwing.

    Science.gov (United States)

    Ramsey, Dan K; Crotin, Ryan L

    2018-03-26

    Propulsion and bracing ground reaction force (GRF) in overhand throwing are integral in propagating joint reaction kinetics and ball velocity, yet how stride length affects drive (hind) and stride (lead) leg GRF profiles remains unknown. Using a randomised crossover design, 19 pitchers (15 collegiate and 4 high school) were assigned to throw 2 simulated 80-pitch games at ±25% of their desired stride length. An integrated motion capture system with two force plates and a radar gun tracked each throw. Vertical and anterior-posterior GRF was normalised, then impulse was derived. Paired t-tests identified whether differences between conditions were significant. Late in single leg support, peak propulsion GRF was statistically greater for the drive leg with increased stride. Stride leg peak vertical GRF in braking occurred before acceleration with longer strides, but near ball release with shorter strides. Greater posterior shear GRF involving both legs demonstrated increased braking with longer strides. Conversely, decreased drive leg propulsion reduced both legs' braking effects with shorter strides. Results suggest an interconnection between normalised stride length and GRF application in propulsion and bracing. This work has shown stride length to be an important kinematic factor affecting the magnitude and timing of external forces acting upon the body.
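Deriving impulse from a normalised GRF curve, as the abstract describes, amounts to integrating force over time. A minimal sketch on a synthetic half-sine propulsion pulse (the pulse shape, peak, and duration are invented for illustration):

```python
import numpy as np

def grf_impulse(force_bw, fs):
    """Impulse of a ground-reaction-force curve normalised to body weight:
    trapezoidal time integral of force, in units of BW*s."""
    force_bw = np.asarray(force_bw, dtype=float)
    return float(np.sum((force_bw[:-1] + force_bw[1:]) / 2.0) / fs)

# Synthetic propulsion pulse: half-sine peaking at 1.5 BW over 0.2 s.
fs = 1000
t = np.linspace(0.0, 0.2, int(0.2 * fs), endpoint=False)
impulse = grf_impulse(1.5 * np.sin(np.pi * t / 0.2), fs)
```

For a half-sine of amplitude A and duration T the analytic impulse is 2AT/π, about 0.19 BW·s here, which the discrete integral closely reproduces.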

  18. Selecting Therapeutic Targets in Inflammatory Bowel Disease (STRIDE)

    DEFF Research Database (Denmark)

    Peyrin-Biroulet, L; Sandborn, W; Sands, B E

    2015-01-01

    OBJECTIVES: The Selecting Therapeutic Targets in Inflammatory Bowel Disease (STRIDE) program was initiated by the International Organization for the Study of Inflammatory Bowel Diseases (IOIBD). It examined potential treatment targets for inflammatory bowel disease (IBD) to be used for a "treat-to-target" ... target. CONCLUSIONS: Evidence- and consensus-based recommendations for selecting the goals for treat-to-target strategies in patients with IBD are made available. Prospective studies are needed to determine how these targets will change disease course and patients' quality of life. ...

  19. Preliminary evaluation of STRIDE programme in primary schools of Malaysia.

    Science.gov (United States)

    Hanjeet, K; Wan Rozita, W M; How, T B; Santhana Raj, L; Baharudin, Omar

    2007-12-01

    The Students' Resilience and Interpersonal Skills Development Education (STRIDE) programme is a preventive drug education programme. The rationale of this programme is that preventive drug education has to begin early in age, before students' social attitudes and behaviour develop. Pre- and post-intervention surveys were performed to evaluate the impact of this programme. Nine schools from three states were identified to participate in the intervention. These schools were selected based on their locations in high-drug-use areas (where the prevalence of drug use exceeds 0.5% of the student population). The new intervention curriculum was put into practice for three months in the nine schools. The overall scores obtained by each respondent, assessing their knowledge of drugs and its implications, were analysed. The Wilcoxon Signed Rank Test showed that the programme made a positive impact from pre- to post-intervention (p < 0.05). A high percentage of the questions showed significant evidence, through the McNemar matched-pair Chi-Squared test with Bonferroni correction, of positive shifts in the answers between the pre- and post-intervention results (p < 0.05). Recommendations have been discussed with the Ministry of Education to integrate this programme into the national primary school curriculum.

  20. Investigating the correlation between paediatric stride interval persistence and gross energy expenditure

    Directory of Open Access Journals (Sweden)

    Sejdić Ervin

    2010-02-01

    Full Text Available Abstract Background: Stride interval persistence, a term used to describe the correlation structure of stride interval time series, is thought to provide insight into neuromotor control, though its exact clinical meaning has not yet been realized. Since human locomotion is shaped by energy-efficient movements, it has been hypothesized that stride interval dynamics and energy expenditure may be inherently tied, both having demonstrated similar sensitivities to age, disease, and pace-constrained walking. Findings: This study tested for correlations between stride interval persistence and measures of energy expenditure, including mass-specific gross oxygen consumption per minute, mass-specific gross oxygen cost per meter (VO2), and heart rate (HR). Metabolic and stride interval data were collected from 30 asymptomatic children who completed one 10-minute walking trial under each of the following conditions: (i) overground walking, (ii) hands-free treadmill walking, and (iii) handrail-supported treadmill walking. Stride interval persistence was not significantly correlated with oxygen consumption per minute (p > 0.32), VO2 (p > 0.18) or HR (p > 0.56). Conclusions: No simple linear dependence exists between stride interval persistence and measures of gross energy expenditure in asymptomatic children when walking overground and on a treadmill.

  1. Interaction effects of stride angle and strike pattern on running economy.

    Science.gov (United States)

    Santos-Concejero, J; Tam, N; Granados, C; Irazusta, J; Bidaurrazaga-Letona, I; Zabala-Lili, J; Gil, S M

    2014-12-01

    This study aimed to investigate the relationship between stride angle and running economy (RE) in athletes with different foot strike patterns. 30 male runners completed 4-min running stages on a treadmill at different velocities. During the test, biomechanical variables such as stride angle, swing time, contact time, stride length and frequency were recorded using an optical measurement system. Their foot strike pattern was determined, and VO2 at velocities below the lactate threshold was measured to calculate RE. Midfoot/forefoot strikers had better RE than rearfoot strikers (201.5±5.6 ml · kg(-1) · km(-1) vs. 213.5±4.2 ml · kg(-1) · km(-1), respectively; p=0.019). Additionally, midfoot/forefoot strikers presented higher stride angles than rearfoot strikers (p=0.043). Linear modelling analysis showed that stride angle is closely related to RE (r=0.62, p … strike pattern is likely to be more economical, whereas at any lower degree, the midfoot/forefoot strike pattern appears to be more desirable. A biomechanical running technique characterised by high stride angles and a midfoot/forefoot strike pattern is advantageous for a better RE. Athletes may find stride angle useful for improving RE. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Recommended number of strides for automatic assessment of gait symmetry and regularity in above-knee amputees by means of accelerometry and autocorrelation analysis

    Directory of Open Access Journals (Sweden)

    Tura Andrea

    2012-02-01

    Full Text Available Abstract Background: Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm that automatically computes symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods: Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference, MDD, of the index. If that difference was less than MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results: All Ad1 and Ad2 indices were lower in AMP than in CTRL (P … Conclusions: Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees.
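The step and stride regularity indices described above come from the normalized autocorrelation of trunk acceleration at lags of one step and one stride. A minimal sketch on a synthetic periodic signal follows; the names Ad1/Ad2 and lag choices echo the abstract, but the implementation details (biased autocorrelation, known step duration) are assumptions:

```python
import numpy as np

def regularity_indices(accel, samples_per_step):
    """Step (Ad1) and stride (Ad2) regularity: normalized autocorrelation
    of the acceleration signal at lags of one step and one stride."""
    x = np.asarray(accel, dtype=float) - np.mean(accel)
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0 .. n-1
    ac /= ac[0]                      # normalize so the zero-lag value is 1
    ad1 = ac[samples_per_step]       # one step ahead  -> step regularity
    ad2 = ac[2 * samples_per_step]   # one stride (two steps) -> stride regularity
    return float(ad1), float(ad2)

# Perfectly periodic synthetic 'gait' at 100 Hz, 0.5 s per step:
fs, step = 100, 50
n = np.arange(20 * step)
accel = np.sin(2 * np.pi * n / step)
ad1, ad2 = regularity_indices(accel, step)
```

For a perfectly periodic signal both indices approach 1 (they fall slightly below 1 here only because the biased autocorrelation sums over fewer samples at larger lags); irregular or asymmetric gait drives them toward 0.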

  3. Effects of changing the random number stride in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Hendricks, J.S.

    1991-01-01

    A common practice in Monte Carlo radiation transport codes is to start each random walk a specified number of steps up the random number sequence from the previous one. This offset is called the stride in the random number sequence between source particles. It is used for correlated sampling or to provide tree-structured random numbers. A new random number generator algorithm for the major Monte Carlo code MCNP has been written to allow adjustment of the random number stride. This random number generator is machine-portable. The effects of varying the stride for several sample problems are examined.
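The stride idea can be illustrated with a multiplicative linear congruential generator, where jumping n steps ahead is a single modular exponentiation rather than n sequential draws. The 2^48 modulus and 5^19 multiplier echo the classic MCNP-style generator, but treat the constants and the stride value here as illustrative:

```python
# Multiplicative LCG: s_{k+1} = A * s_k mod M (no additive constant).
M = 2 ** 48
A = 5 ** 19

def lcg_next(s):
    """One step of the generator."""
    return (A * s) % M

def skip_ahead(s, n):
    """Advance the state by n steps in O(log n): s -> A^n * s mod M.
    This is how a fixed stride between source particles is applied cheaply."""
    return (pow(A, n, M) * s) % M

seed = 1
stride = 152917  # illustrative stride between source-particle histories

# Particle k starts at position k * stride in the sequence; check the
# skip-ahead for k = 2 against stepping one draw at a time.
jumped = skip_ahead(seed, 2 * stride)
stepped = seed
for _ in range(2 * stride):
    stepped = lcg_next(stepped)
```

Because each history starts at a fixed offset, rerunning a single particle reproduces exactly the same random numbers, which is what makes correlated sampling and debugging of individual histories possible.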

  4. Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015)

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1995-01-01

    Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
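The phenomenon can be reproduced with a toy model of a set-indexed cache: count how many distinct cache sets a constant-stride access pattern touches. The cache geometry below (64-byte lines, 512 sets, 8-byte elements) is hypothetical, chosen only to show the power-of-two pathology the article analyzes:

```python
def distinct_sets(stride_elems, n_accesses, elem=8, line=64, sets=512):
    """Count how many distinct cache sets n strided accesses touch in a
    set-indexed cache: set = (byte_address // line_size) mod num_sets."""
    return len({(i * stride_elems * elem // line) % sets
                for i in range(n_accesses)})

# Unit stride spreads accesses over all sets; an 'innocent-looking'
# power-of-two stride equal to the cache's set span (512 sets * 64 B
# / 8 B = 4096 elements) maps every access to a single set.
good = distinct_sets(1, 4096)
bad = distinct_sets(4096, 4096)
```

With only one set in play, every access in the bad case evicts the previous line, so cache efficiency collapses even though the stride looks harmless; this is the sharply reduced efficiency the article's formula predicts for such strides.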

  5. Select injury-related variables are affected by stride length and foot strike style during running.

    Science.gov (United States)

    Boyer, Elizabeth R; Derrick, Timothy R

    2015-09-01

    Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both a RFS and a FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. Shortening one's stride length may decrease or at least not increase propensity for running injuries based on the variables

  6. Effects of footwear and stride length on metatarsal strains and failure in running.

    Science.gov (United States)

    Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent

    2017-11-01

    The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal and finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes significantly increased strains for all metatarsals by 28.7% (SD 6.4%). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p≤0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Gait variability and basal ganglia disorders: stride-to-stride variations of gait cycle timing in Parkinson's disease and Huntington's disease

    Science.gov (United States)

    Hausdorff, J. M.; Cudkowicz, M. E.; Firtion, R.; Wei, J. Y.; Goldberger, A. L.

    1998-01-01

    The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.

  8. Walking speed-related changes in stride time variability: effects of decreased speed

    Directory of Open Access Journals (Sweden)

    Dubost Veronique

    2009-08-01

    Full Text Available Abstract Background Conflicting results have been reported regarding the relationship between stride time variability (STV) and walking speed. While some studies failed to establish any relationship, others reported either a linear or a non-linear relationship. We therefore sought to determine the extent to which a decrease in self-selected walking speed influenced STV among healthy young adults. Methods The mean value, the standard deviation (SD) and the coefficient of variation (CoV) of stride time, as well as the mean value of stride velocity, were recorded during steady-state walking using the GAITRite® system in 29 healthy young adults who walked consecutively at 88%, 79%, 71%, 64%, 58%, 53%, 46% and 39% of their preferred walking speed. Results The decrease in stride velocity significantly increased the mean value, SD and CoV of stride time. Conclusion The results support the assumption that gait variability increases as walking speed decreases; thus, gait might be more unstable when healthy subjects walk slower than their preferred walking speed. Furthermore, these results highlight that a decrease in walking speed can be a potential confounder when evaluating STV.
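
    For reference, the stride time variability measures named above (SD and coefficient of variation) are computed directly from the stride time series; a minimal sketch (function name is ours):

    ```python
    import numpy as np

    def stride_time_stats(stride_times_s):
        """Mean, sample SD, and coefficient of variation (CoV, %) of a
        stride time series given in seconds."""
        st = np.asarray(stride_times_s, dtype=float)
        mean = st.mean()
        sd = st.std(ddof=1)          # sample standard deviation
        cov = 100.0 * sd / mean      # CoV expressed as a percentage
        return mean, sd, cov

    mean, sd, cov = stride_time_stats([1.00, 1.10, 0.90, 1.00])
    ```

    The CoV is the speed-sensitive quantity at issue here: because it divides by the mean stride time, it changes whenever mean stride time changes, even if the raw SD does not.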

  9. Project Stride: An Equine-Assisted Intervention to Reduce Symptoms of Social Anxiety in Young Women.

    Science.gov (United States)

    Alfonso, Sarah V; Alfonso, Lauren A; Llabre, Maria M; Fernandez, M Isabel

    2015-01-01

    Although there is evidence supporting the use of equine-assisted activities to treat mental disorders, its efficacy in reducing signs and symptoms of social anxiety in young women has not been examined. We developed and pilot tested Project Stride, a brief, six-session intervention combining equine-assisted activities and cognitive-behavioral strategies to reduce symptoms of social anxiety. A total of 12 women, 18-29 years of age, were randomly assigned to Project Stride or a no-treatment control. Participants completed the Liebowitz Social Anxiety Scale at baseline, immediate-post, and 6 weeks after treatment. Project Stride was highly acceptable and feasible. Compared to control participants, those in Project Stride had significantly greater reductions in social anxiety scores from baseline to immediate-post [decrease of 24.8 points; t(9) = 3.40, P = .008] and from baseline to follow-up [decrease of 31.8 points; t(9) = 4.12, P = .003]. These findings support conducting a full-scale efficacy trial of Project Stride. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Possible biomechanical origins of the long-range correlations in stride intervals of walking

    Science.gov (United States)

    Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.

    2007-07-01

    When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α < 0.5) as well as statistically persistent long-range correlations (0.5 < α ≤ 1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
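
    Detrended fluctuation analysis, the method used here and in several of the other studies in this list, can be sketched in a few lines (a textbook implementation with linear detrending, not the authors' code):

    ```python
    import numpy as np

    def dfa_alpha(x, scales):
        """Detrended fluctuation analysis. Integrate the mean-centred
        series, split it into non-overlapping windows of each scale,
        remove a linear trend per window, and return the log-log slope
        of fluctuation size versus window size (the exponent alpha)."""
        y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
        fluct = []
        for n in scales:
            windows = len(y) // n
            t = np.arange(n)
            f2 = []
            for i in range(windows):
                seg = y[i * n:(i + 1) * n]
                trend = np.polyval(np.polyfit(t, seg, 1), t)
                f2.append(np.mean((seg - trend) ** 2))
            fluct.append(np.sqrt(np.mean(f2)))
        slope = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
        return slope

    # white noise has no memory, so alpha should come out near 0.5;
    # persistent long-range correlated stride series give alpha near 1
    rng = np.random.default_rng(0)
    alpha = dfa_alpha(rng.standard_normal(5000), [4, 8, 16, 32, 64])
    ```

    The scale list here is illustrative; published gait analyses choose window sizes relative to the number of recorded strides.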

  11. Association between stride time fractality and gait adaptability during unperturbed and asymmetric walking.

    Science.gov (United States)

    Ducharme, Scott W; Liddy, Joshua J; Haddad, Jeffrey M; Busa, Michael A; Claxton, Laura J; van Emmerik, Richard E A

    2018-04-01

    Human locomotion is an inherently complex activity that requires the coordination and control of neurophysiological and biomechanical degrees of freedom across various spatiotemporal scales. Locomotor patterns must constantly be altered in the face of changing environmental or task demands, such as heterogeneous terrains or obstacles. Variability in stride times occurring at short time scales (e.g., 5-10 strides) is statistically correlated to larger fluctuations occurring over longer time scales (e.g., 50-100 strides). This relationship, known as fractal dynamics, is thought to represent the adaptive capacity of the locomotor system. However, this has not been tested empirically. Thus, the purpose of this study was to determine if stride time fractality during steady state walking was associated with the ability of individuals to adapt their gait patterns when locomotor speed and symmetry are altered. Fifteen healthy adults walked on a split-belt treadmill at preferred speed, half of preferred speed, and with one leg at preferred speed and the other at half speed (2:1 ratio asymmetric walking). The asymmetric belt speed condition induced gait asymmetries that required adaptation of locomotor patterns. The slow speed manipulation was chosen in order to determine the impact of gait speed on stride time fractal dynamics. Detrended fluctuation analysis was used to quantify the correlation structure, i.e., fractality, of stride times. Cross-correlation analysis was used to measure the deviation from intended anti-phasing between legs as a measure of gait adaptation. Results revealed no association between unperturbed walking fractal dynamics and gait adaptability performance. However, there was a quadratic relationship between perturbed, asymmetric walking fractal dynamics and adaptive performance during split-belt walking, whereby individuals who exhibited fractal scaling exponents that deviated from 1/f performed the poorest. Compared to steady state preferred walking

  12. Cicero's de legibus and Martin Luther King, Jr.’s Stride toward freedom

    Directory of Open Access Journals (Sweden)

    Boleslav s. Povšič

    1979-12-01

    Full Text Available He who reads carefully Cicero's De Legibus and Martin Luther King, Jr.'s Stride Toward Freedom is surprised to find, mutatis mutandis, on how many points these two great men agree. The historical circumstances are different, but the essential ideas are very similar. The purpose of this paper is to show on what precisely they agree and on what they differ.

  13. Stride-related rein tension patterns in walk and trot in the ridden horse.

    Science.gov (United States)

    Egenvall, Agneta; Roepstorff, Lars; Eisersiö, Marie; Rhodin, Marie; van Weeren, René

    2015-12-30

    The use of tack (equipment such as saddles and reins) and especially of bits because of rein tension resulting in pressure in the mouth is questioned because of welfare concerns. We hypothesised that rein tension patterns in walk and trot reflect general gait kinematics, but are also determined by individual horse and rider effects. Six professional riders rode three familiar horses in walk and trot. Horses were equipped with rein tension meters logged by inertial measurement unit technique. Left and right rein tension data were synchronized with the gait. Stride split data (0-100 %) were analysed using mixed models technique to elucidate the left/right rein and stride percentage interaction, in relation to the exercises performed. In walk, rein tension was highest at hindlimb stance. Rein tension was highest in the suspension phase at trot, and lowest during the stance phase. In rising trot there was a significant difference between the two midstance phases, but not in sitting trot. When turning in trot there was a significant statistical association with the gait pattern with the tension being highest in the inside rein when the horse was on the outer fore-inner hindlimb diagonal. Substantial between-rider variation was demonstrated in walk and trot and between-horse variation in walk. Biphasic rein tensions patterns during the stride were found mainly in trot.

  14. Stride dynamics, gait variability and prospective falls risk in active community dwelling older women.

    Science.gov (United States)

    Paterson, Kade; Hill, Keith; Lythgo, Noel

    2011-02-01

    Measures of walking instability such as stride dynamics and gait variability have been shown to identify future fallers in older adult populations with gait limitations or mobility disorders. This study investigated whether measures of walking instability can predict future fallers (over a prospective 12 month period) in a group of healthy and active older women. Ninety-seven healthy active women aged between 55 and 90 years walked for 7 min around a continuous walking circuit. Gait data recorded by a GAITRite(®) walkway and foot-mounted accelerometers were used to calculate measures of stride dynamics and gait variability. The participant's physical function and balance were assessed. Fall incidence was monitored over the following 12 months. Inter-limb differences (p≤0.04) in stride dynamics were found for fallers (one or more falls) aged over 70 years, and multiple fallers (two or more falls) aged over 55 years, but not in non-fallers or a combined group of single and non-fallers. No group differences were found in the measures of physical function, balance or gait, including variability. Additionally, no gait variable predicted falls. Reduced coordination of inter-limb dynamics was found in active healthy older fallers and multiple fallers despite no difference in other measures of intrinsic falls risk. Evaluating inter-limb dynamics may be a clinically sensitive technique to detect early gait instability and falls risk in high functioning older adults, prior to change in other measures of physical function, balance and gait. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. INS/EKF-based stride length, height and direction intent detection for walking assistance robots.

    Science.gov (United States)

    Brescianini, Dario; Jung, Jun-Young; Jang, In-Hun; Park, Hyun Sub; Riener, Robert

    2011-01-01

    We propose an algorithm to obtain information on stride length, height difference, and direction based on the user's intent during walking. For exoskeleton robots used to assist paraplegic patients' walking, this information is used to generate gait patterns online. To obtain this information, we attach an inertial measurement unit (IMU) to crutches and apply an extended Kalman filter-based error correction method to reduce drift due to the bias of the IMU. The proposed method is verified in real walking scenarios with normal subjects, including walking, climbing stairs, and changing the direction of walking. © 2011 IEEE

  16. Biomechanical characteristics of adults walking forward and backward in water at different stride frequencies.

    Science.gov (United States)

    Cadenas-Sánchez, Cristina; Arellano, Raúl; Taladriz, Sonia; López-Contreras, Gracia

    2016-01-01

    The aim of this study was to examine spatiotemporal characteristics and joint angles during forward and backward walking in water at low and high stride frequency. Eight healthy adults (22.1 ± 1.1 years) walked forward and backward underwater at low (50 pulses) and high (80 pulses) frequency, immersed to the level of the xiphoid process, with arms crossed at the chest. The main differences observed were that the participants presented a greater speed (0.58 vs. 0.85 m/s) and more asymmetry of the step length (1.24 vs. 1.48) at high frequency, whilst the stride and step length (0.84 vs. 0.7 m and 0.43 vs. 0.35 m, respectively) were significantly lower compared to low frequency. During forward walking, the ankle and hip presented more flexion than during backward walking (ankle: 84.0 vs. 91.8°; hip: 22.8 vs. 8.0°), and the knee and hip were more flexed at low frequency than at high frequency (knee: 150.0 vs. 157.0°; hip: -12.2 vs. -14.5°). The characteristics of walking forward and backward in water at different frequencies therefore differ, and these findings contribute to a better understanding of this activity in training and rehabilitation.

  17. Manipulating the stride length/stride velocity relationship of walking using a treadmill and rhythmic auditory cueing in non-disabled older individuals. A short-term feasibility study.

    Science.gov (United States)

    Eikema, D J A; Forrester, L W; Whitall, J

    2014-09-01

    One target for rehabilitating locomotor disorders in older adults is to increase mobility by improving walking velocity. Combining rhythmic auditory cueing (RAC) and treadmill training permits the study of the stride length/stride velocity ratio (SL/SV), often reduced in those with mobility deficits. We investigated the use of RAC to increase velocity by manipulating the SL/SV ratio in older adults. Nine participants (6 female; age: 61.1 ± 8.8 years) walked overground on a gait mat at preferred and fast speeds. After acclimatization to comfortable speed on a treadmill, participants adjusted their cadence to match the cue for 3 min at 115% of preferred speed by either (a) increasing stride length only or (b) increasing stride frequency only. Following training, participants walked across the gait mat at preferred velocity without, and then with, RAC. Group analysis determined no immediate overground velocity increase, but reintroducing RAC did produce an increase in velocity after both conditions. Group and single subject analysis determined that the SL/SV ratio changed in the intended direction only in the stride length condition. We conclude that RAC is a powerful organizer of gait parameters, evidenced by its induced after-effects following short duration training. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. A single hydrotherapy session increases range of motion and stride length in Labrador retrievers diagnosed with elbow dysplasia.

    Science.gov (United States)

    Preston, T; Wills, A P

    2018-04-01

    Canine elbow dysplasia is a debilitating condition of unknown aetiology and a common cause of forelimb lameness in dogs. Canine hydrotherapy is a therapeutic approach rapidly increasing in popularity for the treatment of a range of musculoskeletal pathologies. In this study, kinematic analysis was used to assess the effect of a customised hydrotherapy session on the range of motion, stride length and stride frequency of healthy Labrador retrievers (n=6) and Labrador retrievers diagnosed with bilateral elbow dysplasia (n=6). Reflective kinematic markers were attached to bony anatomical landmarks, and dogs were recorded walking at their preferred speed on a treadmill before and 10 min after a single hydrotherapy session. Range of motion, stride length and stride frequency were calculated for both forelimbs. Data were analysed via a robust mixed ANOVA to assess the effect of hydrotherapy on the kinematic parameters of both groups. Range of motion was significantly greater in the healthy dogs at baseline. Hydrotherapy significantly increased the range of motion of the forelimbs of both groups and increased stride length, while stride frequency changed significantly following hydrotherapy only in the left limb. These results support hydrotherapy as a therapeutic tool for the rehabilitation and treatment of Labradors with elbow dysplasia. Furthermore, results indicate that hydrotherapy might improve the gait and movement of healthy dogs. However, whether these results are transient or sustained remains undetermined. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Effect of treadmill versus overground running on the structure of variability of stride timing.

    Science.gov (United States)

    Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J

    2014-04-01

    Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD), and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited a higher DFA and PSD scaling exponent, as well as lower entropy at non-preferred speeds. This indicates a more ordered control for treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself brings about greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.
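
    Of the three analyses applied to the stride series, multiscale entropy is the least standard: it computes sample entropy on progressively coarse-grained copies of the series, one per scale. The coarse-graining step alone is simple to sketch (a generic illustration, not the authors' pipeline):

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Multiscale entropy, step one: replace the series by the means
        of consecutive non-overlapping windows of length `scale`. An
        entropy measure is then computed on each coarse-grained series."""
        x = np.asarray(x, dtype=float)
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    grained = coarse_grain([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0], 2)
    ```

    Plotting entropy against scale then distinguishes genuinely complex series (entropy stays high across scales) from merely noisy ones (entropy collapses as the noise averages out).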

  20. Effects of different frequencies of rhythmic auditory cueing on the stride length, cadence, and gait speed in healthy young females.

    Science.gov (United States)

    Yu, Lili; Zhang, Qi; Hu, Chunying; Huang, Qiuchen; Ye, Miao; Li, Desheng

    2015-02-01

    [Purpose] The aim of this study was to explore the effects of different frequencies of rhythmic auditory cueing (RAC) on stride length, cadence, and gait speed in healthy young females. The findings of this study might be used as clinical guidance in physical therapy for choosing a suitable frequency of RAC. [Subjects] Thirteen healthy young females were recruited for this study. [Methods] Ten-meter walking tests were performed by all subjects under 4 conditions, each repeated 3 times with a 3-min seated rest period between repetitions. Subjects first walked as usual and then were asked to listen carefully to the rhythm of a metronome and walk with 3 kinds of RAC (90%, 100%, and 110% of the mean cadence). The three frequencies (90%, 100%, and 110%) of RAC were randomly assigned. Gait speed, stride length, and cadence were calculated, and statistical analysis was performed using the SPSS (version 17.0) computer package. [Results] The gait speed and cadence of 90% RAC walking showed significant decreases compared with normal walking and with 100% and 110% RAC walking. The stride length, cadence, and gait speed of 110% RAC walking showed significant increases compared with normal walking and with 90% and 100% RAC walking. [Conclusion] Our results showed that 110% RAC was the best of the 3 cueing frequencies for improving stride length, cadence, and gait speed in healthy young females.

  1. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2016-01-01

    Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease with age (Mann-Whitney U test, p<0.01) in children 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two of the groups of young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children’s gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
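
    Sample entropy, the regularity measure used above, can be sketched as follows (a common textbook variant with m = 2 and tolerance r = 0.2 × SD; the parameter choices are illustrative, not necessarily the authors'):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn: -log of the conditional probability that template
        vectors matching for m points (Chebyshev distance <= r * SD)
        also match for m + 1 points. Self-matches are excluded; lower
        values mean a more regular (predictable) series."""
        x = np.asarray(x, dtype=float)
        tol = r * np.std(x)
        n = len(x)

        def match_count(mm):
            templates = np.array([x[i:i + mm] for i in range(n - mm)])
            count = 0
            for i in range(len(templates) - 1):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += int(np.sum(dist <= tol))
            return count

        return -np.log(match_count(m + 1) / match_count(m))

    rng = np.random.default_rng(1)
    regular = sample_entropy(np.tile([0.0, 1.0], 150))   # strictly alternating
    noisy = sample_entropy(rng.uniform(size=300))        # white noise
    ```

    A perfectly repetitive series yields SampEn near zero, while noise yields a much larger value, which is why the children's decreasing SampEn with age reads as increasingly regular gait.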

  2. Repeated sprint ability and stride kinematics are altered following an official match in national-level basketball players.

    Science.gov (United States)

    Delextrat, A; Baliqi, F; Clarke, N

    2013-04-01

    The aim of the study was to investigate the effects of playing an official national-level basketball match on repeated sprint ability (RSA) and stride kinematics. Nine male starting basketball players (22.8±2.2 years old, 191.3±5.8 cm, 88±10.3 kg, 12.3±4.6% body fat) volunteered to take part. Six repetitions of maximal 4-s sprints were performed on a non-motorised treadmill, separated by 21 s of passive recovery, before and immediately after playing an official match. Fluid loss, playing time, and the frequencies of the main match activities were recorded. The peak, mean, and performance decrement for average and maximal speed, acceleration, power, vertical and horizontal forces, and stride parameters were calculated over the six sprints. Differences between pre- and post-match were assessed by Student's t-tests. Significant differences between pre- and post-tests were observed in mean speed (-3.3%), peak and mean horizontal forces (-4.3% and -17.4%), peak and mean vertical forces (-3.4% and -3.7%), contact time (+7.3%), stride duration (+4.6%) and stride frequency (-4.0%). Decrements in vertical force were significantly correlated with fluid loss and with sprint, jump and shuffle frequencies (P<0.05). These results highlight that the impairment in repeated sprint ability depends on the specific activities performed, and that replacing fluid lost through sweating during a match is crucial.

  3. Stimulant Reduction Intervention using Dosed Exercise (STRIDE - CTN 0037: Study protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Morris David W

    2011-09-01

    Full Text Available Abstract Background There is a need for novel approaches to the treatment of stimulant abuse and dependence. Clinical data examining the use of exercise as a treatment for the abuse of nicotine, alcohol, and other substances suggest that exercise may be a beneficial treatment for stimulant abuse, with direct effects on decreased use and craving. In addition, exercise has the potential to improve other health domains that may be adversely affected by stimulant use or its treatment, such as sleep disturbance, cognitive function, mood, weight gain, quality of life, and anhedonia, since it has been shown to improve many of these domains in a number of other clinical disorders. Furthermore, neurobiological evidence provides plausible mechanisms by which exercise could positively affect treatment outcomes. The current manuscript presents the rationale, design considerations, and study design of the National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) CTN-0037 Stimulant Reduction Intervention using Dosed Exercise (STRIDE) study. Methods/Design STRIDE is a multisite randomized clinical trial that compares exercise to health education as potential treatments for stimulant abuse or dependence. This study will evaluate individuals diagnosed with stimulant abuse or dependence who are receiving treatment in a residential setting. Three hundred and thirty eligible and interested participants who provide informed consent will be randomized to one of two treatment arms: Vigorous Intensity High Dose Exercise Augmentation (DEI) or Health Education Intervention Augmentation (HEI). Both groups will receive TAU (i.e., usual care). The treatment arms are structured such that the quantity of visits is similar to allow for equivalent contact between groups. In both arms, participants will begin with supervised sessions 3 times per week during the 12-week acute phase of the study.
Supervised sessions will be conducted as one-on-one (i.e., individual) sessions

  4. Selective Breeding and Short-Term Access to a Running Wheel Alter Stride Characteristics in House Mice.

    Science.gov (United States)

    Claghorn, Gerald C; Thompson, Zoe; Kay, Jarren C; Ordonez, Genesis; Hampton, Thomas G; Garland, Theodore

    Postural and kinematic aspects of running may have evolved to support the ability of high runner (HR) mice to run approximately threefold farther than control mice. Mice from four replicate HR lines selectively bred for high levels of voluntary wheel running show many differences in locomotor behavior and morphology as compared with four nonselected control (C) lines. We hypothesized that HR mice would show stride alterations that have coadapted with locomotor behavior, morphology, and physiology. More specifically, we predicted that HR mice would have stride characteristics that differed from those of C mice in ways that parallel some of the adaptations seen in highly cursorial animals. For example, we predicted that the limbs of HR mice would swing closer to the parasagittal plane, resulting in a two-dimensional measurement of narrowed stance width. We also expected that some differences between HR and C mice might be amplified by 6 d of wheel access, as is used to select breeders each generation. We used the DigiGait Imaging System (Mouse Specifics) to capture high-speed videos in ventral view as mice ran on a motorized treadmill across a range of speeds, and then to automatically calculate several aspects of strides. Young adults of both sexes were tested both before and after 6 d of wheel access. Stride length, stride frequency, stance width, stance time, brake time, propel time, swing time, duty factor, and paw contact area were analyzed using a nested analysis of covariance, with body mass as a covariate. As expected, body mass and treadmill speed affected nearly every analyzed metric. Six days of wheel access also affected nearly every measure, indicating pervasive training effects in both HR and C mice. As predicted, stance width was significantly narrower in HR than in C mice. Paw contact area and duty factor were significantly greater in mini-muscle individuals (a subset of HR mice with 50%-reduced hindlimb muscle mass) than in normal-muscled HR or C mice. We conclude that

  5. STRIDE II: A Water Strider-inspired Miniature Robot with Circular Footpads

    Directory of Open Access Journals (Sweden)

    Onur Ozcan

    2014-06-01

    Water strider insects have attracted the attention of many researchers due to their power-efficient and agile water surface locomotion. This study proposes a new water strider insect-inspired robot, called STRIDE II, which uses new circular footpads for high lift, stability and payload capability, and a new elliptical leg rotation mechanism for more efficient water surface propulsion. Taking advantage of scaling effects on surface tension versus buoyancy, similar to water strider insects, this robot uses the repulsive surface tension force on its footpads as the dominant lift principle instead of buoyancy, by using very skinny (1 mm diameter) circular footpads coated with a superhydrophobic material. The robot and the insect propel quickly and power-efficiently on the water surface by the sculling motion of their two side-legs, which never break the water surface completely. This paper proposes models for the lift, drag and propulsion forces and the energy efficiency of the proposed legged robot, and experiments are conducted to verify these models. After optimizing the robot design using the lift models, a maximum lift capacity of 55 grams is achieved using 12 footpads with a 4.2 cm outer diameter, while the robot itself weighs 21.75 grams. For this robot, a propulsion efficiency of 22.3% was measured. The maximum forward and turning speeds of the robot were measured as 71.5 mm/s and 0.21 rad/s, respectively. These water strider robots could be used in water surface monitoring, cleaning and analysis in lakes, dams, rivers and the sea.

  6. The effect of rider weight and additional weight in Icelandic horses in tölt: part II. Stride parameters responses.

    Science.gov (United States)

    Gunnarsson, V; Stefánsdóttir, G J; Jansson, A; Roepstorff, L

    2017-09-01

    This study investigated the effects of rider weight, over the body weight ratio (BWR) range common for Icelandic horses (20% to 35%), on stride parameters in tölt. The kinematics of eight experienced Icelandic school horses were measured during an incremental exercise test using a high-speed camera (300 frames/s). Each horse performed five phases (642 m each) in tölt at a BWR between rider (including saddle) and horse starting at 20% (BWR20) and increasing to 25% (BWR25), 30% (BWR30) and 35% (BWR35); finally, 20% (BWR20b) was repeated. One professional rider rode all horses, and weight (lead) was added to the saddle and rider as needed. For each phase, eight strides at a speed of 5.5 m/s were analyzed for stride duration, stride frequency, stride length, duty factor (DF), lateral advanced placement, lateral advanced liftoff, unipedal support (UPS), bipedal support (BPS) and height of front leg action. Stride length became shorter with increasing BWR (Y=2.73-0.004x; P<0.05). In conclusion, increased BWR decreased stride length and increased DF proportionally to the same extent in all limbs, whereas BPS increased at the expense of decreased UPS. These changes can be expected to decrease tölt quality when subjectively evaluated according to the breeding goals for the Icelandic horse. However, beat, symmetry and height of front leg lifting were not affected by BWR.
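
The reported regression for stride length can be evaluated directly; a minimal sketch, assuming Y is stride length in metres and x is the rider:horse body weight ratio in percent (as the 20-35% test range suggests):

```python
def stride_length_m(bwr_percent):
    """Stride length predicted by the reported regression Y = 2.73 - 0.004x.

    Assumes Y is stride length in metres and x is the rider:horse body
    weight ratio (BWR) in percent, valid over the tested 20-35% range.
    """
    return 2.73 - 0.004 * bwr_percent

# Raising BWR from 20% to 35% shortens the predicted stride by 0.06 m.
delta = stride_length_m(20) - stride_length_m(35)
```

Under these assumptions, the slope implies roughly 4 mm of stride length lost per percentage point of added rider load.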

  7. Gender-based differences in stride and limb dimensions between healthy red-wing tinamou (Rhynchotus rufescens) Temminck, 1815

    OpenAIRE

    QUEIROZ, Sandra Aidar de; COOPER, Ross Gordon

    2014-01-01

    The red-wing tinamou (Rhynchotus rufescens) is economically important as food. The current study investigated the limb and trunk characteristics in age-matched [year-of-hatch (yoh) 2004 and 2005], gender segregated birds, and determined differences in stride between cocks and hens. The locomotion trial was completed in a corridor of 0.6 × 2.36 m dimension. The girth was significantly higher in cocks than in hens, while body weight was slightly higher in hens. Cocks had a greater height than h...

  9. Test-retest reliability of stride time variability while dual tasking in healthy and demented adults with frontotemporal degeneration

    Directory of Open Access Journals (Sweden)

    Herrmann Francois R

    2011-07-01

    Background: Although test-retest reliability of mean values of spatio-temporal gait parameters has been assessed while walking alone (i.e., single tasking), little is known about the test-retest reliability of stride time variability (STV) while performing an attention-demanding task (i.e., dual tasking). The objective of this study was to examine immediate test-retest reliability of STV while single and dual tasking in cognitively healthy older individuals (CHI) and in demented patients with frontotemporal degeneration (FTD). Methods: Based on a cross-sectional design, 69 community-dwelling CHI (mean age 75.5 ± 4.3 years; 43.5% women) and 14 demented patients with FTD (mean age 65.7 ± 9.8 years; 6.7% women) walked alone (without performing an additional task; i.e., single tasking) and while counting backward (CB) aloud starting from 50 (i.e., dual tasking). Each subject completed two trials for all the testing conditions. The mean value and the coefficient of variation (CoV) of stride time while walking alone and while CB at self-selected walking speed were measured using GAITRite® and SMTEC® footswitch systems. Results: ICCs of the mean value in CHI under both walking conditions were higher than those of demented patients with FTD and indicated perfect reliability (ICC > 0.80). Reliability of the mean value was better while single tasking than dual tasking in CHI (ICC = 0.96 under single-task and ICC = 0.86 under dual-task conditions), whereas the opposite held in demented patients (ICC = 0.65 under single-task and ICC = 0.81 under dual-task conditions). ICC of the CoV was slight to poor whatever the group of participants and the walking condition. Conclusions: The immediate test-retest reliability of the mean value of stride time in single and dual tasking was good in older CHI as well as in demented patients with FTD. In contrast, the reliability of stride time variability was low in both groups of participants.
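
The ICC values quoted in this record can be reproduced in principle with a few lines; the sketch below uses a one-way random-effects ICC(1,1) over an (n subjects × k trials) score matrix, which is an assumption, since the record does not state the exact ICC model used:

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_trials) array.

    Computed from the one-way ANOVA decomposition:
        ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)              # between-subjects mean square
    msw = ((scores - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subjects mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Two nearly identical stride-time trials per subject give an ICC close to 1.
trials = np.array([[1.00, 1.02], [1.10, 1.09], [1.25, 1.24], [0.95, 0.97]])
```

With two trials per subject, values above 0.80 would fall in the "perfect reliability" band used in this record.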

  10. Effect of Different Training Methods on Stride Parameters in Speed Maintenance Phase of 100-m Sprint Running.

    Science.gov (United States)

    Cetin, Emel; Hindistan, I Ethem; Ozkaya, Y Gul

    2018-05-01

    Cetin, E, Hindistan, IE, Ozkaya, YG. Effect of different training methods on stride parameters in speed maintenance phase of 100-m sprint running. J Strength Cond Res 32(5): 1263-1272, 2018-This study examined the effects of 2 different training methods relevant to sloping surface on stride parameters in the speed maintenance phase of 100-m sprint running. Twenty recreationally active students were assigned to one of 3 groups: combined training (Com), horizontal training (H), and control (C). The Com group performed uphill and downhill training on a sloping surface with an angle of 4°, whereas the H group trained on a horizontal surface, 3 days a week for 8 weeks. The speed maintenance and deceleration phases were divided into 10-m intervals, and running time (t), running velocity (RV), step frequency (SF), and step length (SL) were measured before and after the training period. After the 8-week training program, t was shortened by 3.97% in the Com group and by 2.37% in the H group. Running velocity also increased over the total 100-m distance, by 4.13% and 2.35% in the Com and H groups, respectively. At the speed maintenance phase, although t and maximal RV (RVmax) were statistically unaltered over the phase as a whole, t decreased, and RVmax was reached 10 m earlier in distance in both training groups. Step length increased at 60-70 m, and SF decreased at 70-80 m in the H group. Step length increased with a concomitant decrease in SF at 80-90 m in the Com group. Both training groups largely maintained RVmax during the speed maintenance phase. In conclusion, although both training methods resulted in a decrease in running time and an increase in RV, the Com training method was the more effective method for improving RV, and this improvement originated from the positive changes in SL during the speed maintenance phase.

  11. Randomized Controlled Trial Comparing Exercise to Health Education for Stimulant Use Disorder: Results From the CTN-0037 STimulant Reduction Intervention Using Dosed Exercise (STRIDE) Study.

    Science.gov (United States)

    Trivedi, Madhukar H; Greer, Tracy L; Rethorst, Chad D; Carmody, Thomas; Grannemann, Bruce D; Walker, Robrina; Warden, Diane; Shores-Wilson, Kathy; Stoutenberg, Mark; Oden, Neal; Silverstein, Meredith; Hodgkins, Candace; Love, Lee; Seamans, Cindy; Stotts, Angela; Causey, Trey; Szucs-Reed, Regina P; Rinaldi, Paul; Myrick, Hugh; Straus, Michele; Liu, David; Lindblad, Robert; Church, Timothy; Blair, Steven N; Nunes, Edward V

    To evaluate exercise as a treatment for stimulant use disorders. The STimulant Reduction Intervention using Dosed Exercise (STRIDE) study was a randomized clinical trial conducted in 9 residential addiction treatment programs across the United States from July 2010 to February 2013. Of 497 adults referred to the study, 302 met all eligibility criteria, including DSM-IV criteria for stimulant abuse and/or dependence, and were randomized to either a dosed exercise intervention (Exercise) or a health education intervention (Health Education) control, both augmenting treatment as usual and conducted thrice weekly for 12 weeks. The primary outcome of percent stimulant abstinent days during study weeks 4 to 12 was estimated using a novel algorithm adjustment incorporating self-reported Timeline Followback (TLFB) stimulant use and urine drug screen (UDS) data. Mean percent of abstinent days based on TLFB was 90.8% (SD = 16.4%) for Exercise and 91.6% (SD = 14.7%) for Health Education participants. Percent of abstinent days using the eliminate contradiction (ELCON) algorithm was 75.6% (SD = 27.4%) for Exercise and 77.3% (SD = 25.1%) for Health Education. The primary intent-to-treat analysis, using a mixed model controlling for site and the ELCON algorithm, produced no treatment effect (P = .60). In post hoc analyses controlling for treatment adherence and baseline stimulant use, Exercise participants had a 4.8% higher abstinence rate (78.7%) compared to Health Education participants (73.9%) (P = .03, number needed to treat = 7.2). The primary analysis indicated no significant difference between exercise and health education. Adjustment for intervention adherence showed modestly but significantly higher percent of abstinent days in the exercise group, suggesting that exercise may improve outcomes for stimulant users who have better adherence to an exercise dose. ClinicalTrials.gov identifier: NCT01141608. © Copyright 2017 Physicians Postgraduate Press, Inc.

  12. Walking training with cueing of cadence improves walking speed and stride length after stroke more than walking training alone: a systematic review.

    Science.gov (United States)

    Nascimento, Lucas R; de Oliveira, Camila Quel; Ada, Louise; Michaelsen, Stella M; Teixeira-Salmela, Luci F

    2015-01-01

    After stroke, is walking training with cueing of cadence superior to walking training alone in improving walking speed, stride length, cadence and symmetry? Systematic review with meta-analysis of randomised or controlled trials. Adults who have had a stroke. Walking training with cueing of cadence. Four walking outcomes were of interest: walking speed, stride length, cadence and symmetry. This review included seven trials involving 211 participants. Because one trial caused substantial statistical heterogeneity, meta-analyses were conducted with and without this trial. Walking training with cueing of cadence improved walking speed by 0.23 m/s (95% CI 0.18 to 0.27, I(2)=0%), stride length by 0.21 m (95% CI 0.14 to 0.28, I(2)=18%), cadence by 19 steps/minute (95% CI 14 to 23, I(2)=40%), and symmetry by 15% (95% CI 3 to 26, random effects) more than walking training alone. This review provides evidence that walking training with cueing of cadence improves walking speed and stride length more than walking training alone. It may also produce benefits in terms of cadence and symmetry of walking. The evidence appears strong enough to recommend the addition of 30 minutes of cueing of cadence to walking training, four times a week for 4 weeks, in order to improve walking in moderately disabled individuals with stroke. PROSPERO (CRD42013005873). Copyright © 2014 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.

  13. Steps toward improving diet and exercise for cancer survivors (STRIDE): a quasi-randomised controlled trial protocol.

    Science.gov (United States)

    Frensham, Lauren J; Zarnowiecki, Dorota M; Parfitt, Gaynor; Stanley, Rebecca M; Dollman, James

    2014-06-13

    Cancer survivorship rates have increased in developed countries largely due to population ageing and improvements in cancer care. Survivorship is a neglected phase of cancer treatment and is often associated with adverse physical and psychological effects. There is a need for broadly accessible, non-pharmacological measures that may prolong disease-free survival, reduce or alleviate co-morbidities and enhance quality of life. The aim of the Steps TowaRd Improving Diet and Exercise (STRIDE) study is to evaluate the effectiveness of an online-delivered physical activity intervention for increasing walking in cancer survivors living in metropolitan and rural areas of South Australia. This is a quasi-randomised controlled trial. The intervention period is 12-weeks with 3-month follow-up. The trial will be conducted at a university setting and community health services in South Australia. Participants will be insufficiently active and aged 18 years or older. Participants will be randomly assigned to either the intervention or control group. All participants will receive a pedometer but only the intervention group will have access to the STRIDE website where they will report steps, affect and ratings of perceived exertion (RPE) during exercise daily. Researchers will use these variables to individualise weekly step goals to increase walking.The primary outcome measure is steps per day. The secondary outcomes are a) health measures (anthropometric and physiological), b) dietary habits (consumption of core foods and non-core foods) and c) quality of life (QOL) including physical, psychological and social wellbeing. Measures will be collected at baseline, post-intervention and 3-month follow-up. This protocol describes the implementation of a trial using an online resource to assist cancer survivors to become more physically active. It is an innovative tool that uses ratings of perceived exertion and daily affect to create individualised step goals for cancer survivors. 

  14. Giant strides radurisation

    International Nuclear Information System (INIS)

    Basson, R.

    1986-01-01

    High Energy Processing (HEPRO) plans to establish a new commercial irradiation plant at Cape Town. HEPRO experienced some problems when first established in 1982 at Tzaneen. These included scepticism on the part of farmers as to whether radurised produce would command a sufficient mark-up to recover the treatment cost, and the difficulty of motivating the retailers and wholesalers who purchase on the national fresh produce markets. A few of the large supermarket chains were eventually convinced to buy radurised products. After periods of up to two years, the chains are largely convinced of the advantages of radurisation. On 5 June 1985, HEPRO and the Atomic Energy Corporation of South Africa (AEC) signed an agreement according to which the AEC would design and manufacture all the equipment required for the new irradiation facility at Cape Town

  15. Lower extremity joint loads in habitual rearfoot and mid/forefoot strike runners with normal and shortened stride lengths.

    Science.gov (United States)

    Boyer, Elizabeth R; Derrick, Timothy R

    2018-03-01

    Our purpose was to compare joint loads between habitual rearfoot (hRF) and habitual mid/forefoot strikers (hFF), rearfoot (RFS) and mid/forefoot strike (FFS) patterns, and shorter stride lengths (SLs). Thirty-eight hRF and hFF ran at their normal SL, 5% and 10% shorter, as well as with the opposite foot strike. Three-dimensional ankle, knee, patellofemoral (PF) and hip contact forces were calculated. Nearly all contact forces decreased with a shorter SL (1.2-14.9% relative to preferred SL). In general, hRF had higher PF (hRF-RFS: 10.8 ± 1.4, hFF-FFS: 9.9 ± 2.0 BWs) and hip loads (axial hRF-RFS: -9.9 ± 0.9, hFF-FFS: -9.6 ± 1.0 BWs) than hFF. Many loads were similar between foot strike styles for the two groups, including axial and lateral hip, PF, posterior knee and shear ankle contact forces. Lateral knee and posterior hip contact forces were greater for RFS, and axial ankle and knee contact forces were greater for FFS. The tibia may be under greater loading with a FFS because of these greater axial forces. Summarising, a particular foot strike style does not universally decrease joint contact forces. However, shortening one's SL 10% decreased nearly all lower extremity contact forces, so it may hold potential to decrease overuse injuries associated with excessive joint loads.

  16. Partial body weight support treadmill training speed influences paretic and non-paretic leg muscle activation, stride characteristics, and ratings of perceived exertion during acute stroke rehabilitation.

    Science.gov (United States)

    Burnfield, Judith M; Buster, Thad W; Goldman, Amy J; Corbridge, Laura M; Harper-Hanigan, Kellee

    2016-06-01

    Intensive task-specific training is promoted as one approach for facilitating neural plastic brain changes and associated motor behavior gains following neurologic injury. Partial body weight support treadmill training (PBWSTT) is one task-specific approach frequently used to improve walking during the acute period of stroke recovery; however, little is known about training parameters and physiologic demands during this early recovery phase. This study examined the impact of four walking speeds on stride characteristics, lower extremity muscle demands (both paretic and non-paretic), Borg ratings of perceived exertion (RPE), and blood pressure. A prospective, repeated measures design was used. Ten inpatients post unilateral stroke participated. Following three familiarization sessions, participants engaged in PBWSTT at four predetermined speeds (0.5, 1.0, 1.5 and 2.0 mph) while bilateral electromyographic and stride characteristic data were recorded. RPE was evaluated immediately following each trial. Stride length, cadence, and paretic single limb support increased with faster walking speeds (p⩽0.001), while non-paretic single limb support remained nearly constant. Faster walking resulted in greater peak and mean muscle activation in the paretic medial hamstrings, vastus lateralis and medial gastrocnemius, and non-paretic medial gastrocnemius (p⩽0.001). RPE also was greatest at the fastest compared to the two slowest speeds. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Together We STRIDE: A quasi-experimental trial testing the effectiveness of a multi-level obesity intervention for Hispanic children in rural communities.

    Science.gov (United States)

    Ko, Linda K; Rillamas-Sun, Eileen; Bishop, Sonia; Cisneros, Oralia; Holte, Sarah; Thompson, Beti

    2018-04-01

    Hispanic children are disproportionally overweight and obese compared to their non-Hispanic white counterparts in the US. Community-wide, multi-level interventions have been successful in promoting healthier nutrition, increased physical activity (PA), and weight loss. Using a community-based participatory research (CBPR) approach that engages community members in rural Hispanic communities is a promising way to promote behavior change, and ultimately weight loss, among Hispanic children. Led by a community-academic partnership, the Together We STRIDE (Strategizing Together Relevant Interventions for Diet and Exercise) study aims to test the effectiveness of a community-wide, multi-level intervention to promote healthier diets, increased PA, and weight loss among Hispanic children. Together We STRIDE is a parallel quasi-experimental trial with a goal of recruiting 900 children aged 8-12 years nested within two communities (one intervention and one comparison). Children will be recruited from their respective elementary schools. Components of the 2-year multi-level intervention include comic books (individual level), multi-generational nutrition and PA classes (family level), teacher-led PA breaks and media literacy education (school level), and family nights, a farmers' market and a community PA event (known as a ciclovía) at the community level. Children from the comparison community will receive two newsletters. Height and weight measures will be collected from children in both communities at three time points (baseline, 6 months, and 18 months). The Together We STRIDE study aims to promote healthier diets and increased PA to produce healthy weight among Hispanic children. The use of a CBPR approach and the engagement of the community will springboard strategies for intervention sustainability. Clinical Trials Registration Number: NCT02982759 (retrospectively registered). Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Estimating Stair Running Performance Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Lauro V. Ojeda

    2017-11-01

    Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying the biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs) as a solution, as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed, as shown by its high correlation with stride time.
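
The stationary (stance) periods that this kind of method leverages are typically detected by thresholding the gyroscope magnitude; a minimal sketch, with threshold and duration values that are illustrative rather than taken from the paper:

```python
import numpy as np

def detect_stance(gyro_mag, fs, thresh=1.0, min_dur=0.1):
    """Flag stance (foot-stationary) samples from gyroscope magnitude (rad/s).

    A sample counts as stance when |omega| stays below `thresh` for at
    least `min_dur` seconds; both values here are illustrative, not the
    paper's. Returns a boolean array of per-sample stance flags.
    """
    below = np.asarray(gyro_mag) < thresh
    stance = np.zeros_like(below)          # bool array, all False
    min_samples = int(min_dur * fs)
    i, n = 0, len(below)
    while i < n:
        if below[i]:
            j = i
            while j < n and below[j]:      # extend the low-rotation run
                j += 1
            if j - i >= min_samples:       # long enough to be stance
                stance[i:j] = True
            i = j
        else:
            i += 1
    return stance

# Synthetic example: 0.2 s swing, 0.2 s stance, 0.2 s swing at 100 Hz.
gyro = np.concatenate([np.full(20, 3.0), np.full(20, 0.1), np.full(20, 3.0)])
stance = detect_stance(gyro, fs=100.0)
```

Each detected stance interval then anchors a zero-velocity update before the orientation and trajectory estimation described in the abstract.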

  19. Research staff training in a multisite randomized clinical trial: Methods and recommendations from the Stimulant Reduction Intervention using Dosed Exercise (STRIDE) trial.

    Science.gov (United States)

    Walker, Robrina; Morris, David W; Greer, Tracy L; Trivedi, Madhukar H

    2014-01-01

    Descriptions of and recommendations for meeting the challenges of training research staff for multisite studies are limited despite the recognized importance of training on trial outcomes. The STRIDE (STimulant Reduction Intervention using Dosed Exercise) study is a multisite randomized clinical trial that was conducted at nine addiction treatment programs across the United States within the National Drug Abuse Treatment Clinical Trials Network (CTN) and evaluated the addition of exercise to addiction treatment as usual (TAU), compared to health education added to TAU, for individuals with stimulant abuse or dependence. Research staff administered a variety of measures that required a range of interviewing, technical, and clinical skills. In order to address the absence of information on how research staff are trained for multisite clinical studies, the current manuscript describes the conceptual process of training and certifying research assistants for STRIDE. Training was conducted using a three-stage process to allow staff sufficient time for distributive learning, practice, and calibration leading up to implementation of this complex study. Training was successfully implemented with staff across nine sites. Staff demonstrated evidence of study and procedural knowledge via quizzes and skill demonstration on six measures requiring certification. Overall, while the majority of staff had little to no experience in the six measures, all research assistants demonstrated ability to correctly and reliably administer the measures throughout the study. Practical recommendations are provided for training research staff and are particularly applicable to the challenges encountered with large, multisite trials.

  20. Can Tai Chi training impact fractal stride time dynamics, an index of gait health, in older adults? Cross-sectional and randomized trial studies.

    Directory of Open Access Journals (Sweden)

    Brian J Gow

    To determine if Tai Chi (TC) has an impact on long-range correlations and fractal-like scaling in gait stride time dynamics, previously shown to be associated with aging, neurodegenerative disease, and fall risk. Using Detrended Fluctuation Analysis (DFA), this study evaluated the impact of TC mind-body exercise training on stride time dynamics assessed during 10-minute bouts of overground walking. A hybrid study design investigated long-term effects of TC via a cross-sectional comparison of 27 TC experts (24.5 ± 11.8 yrs experience) and 60 age- and gender-matched TC-naïve older adults (50-70 yrs). Shorter-term effects of TC were assessed by randomly allocating TC-naïve participants to either 6 months of TC training or to a waitlist control. The alpha (α) long-range scaling coefficient derived from DFA and gait speed were evaluated as outcomes. Cross-sectional comparisons using confounder-adjusted linear models suggest that TC experts exhibited significantly greater long-range scaling of gait stride time dynamics compared with TC-naïve adults. Longitudinal random-slopes with shared baseline models accounting for multiple confounders suggest that the effects of shorter-term TC training on gait dynamics were not statistically significant, but trended in the same direction as longer-term effects, although effect sizes were very small. In contrast, gait speed was unaffected in both cross-sectional and longitudinal comparisons. These preliminary findings suggest that fractal-like measures of gait health may be sufficiently precise to capture the positive effects of exercise in the form of Tai Chi, thus warranting further investigation. These results motivate larger and longer-duration trials, in both healthy and health-challenged populations, to further evaluate the potential of Tai Chi to restore age-related declines in gait dynamics. The randomized trial component of this study was registered at ClinicalTrials.gov (NCT01340365).
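
The DFA scaling exponent α used as the outcome above can be computed with a short routine; the sketch below is standard first-order DFA with illustrative window sizes, not the study's exact configuration:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis (DFA-1) scaling exponent.

    Integrates the mean-centered series, splits it into non-overlapping
    windows at each scale, removes a linear trend per window, and fits
    log F(n) vs log n. alpha ~ 0.5 for white noise, ~ 1.0 for 1/f noise.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())               # integrated profile
    flucts = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)      # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))   # fluctuation F(n)
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# White-noise stride times should give alpha near 0.5.
rng = np.random.default_rng(0)
alpha = dfa_alpha(rng.standard_normal(4096))
```

Healthy gait typically yields α somewhere between roughly 0.75 and 1.0; drift toward 0.5 (uncorrelated noise) is the kind of age-related decline this trial is probing.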

  1. Effects of a wearable exoskeleton stride management assist system (SMA®) on spatiotemporal gait characteristics in individuals after stroke: a randomized controlled trial.

    Science.gov (United States)

    Buesing, Carolyn; Fisch, Gabriela; O'Donnell, Megan; Shahidi, Ida; Thomas, Lauren; Mummidisetty, Chaithanya K; Williams, Kenton J; Takahashi, Hideaki; Rymer, William Zev; Jayaraman, Arun

    2015-08-20

    Robots offer an alternative, potentially advantageous method of providing repetitive, high-dosage, and high-intensity training to address the gait impairments caused by stroke. In this study, we compared the effects of the Stride Management Assist (SMA®) System, a new wearable robotic device developed by Honda R&D Corporation, Japan, with functional task-specific training (FTST) on spatiotemporal gait parameters in stroke survivors. A single-blinded randomized controlled trial was performed to assess the effect of FTST and of task-specific walking training with the SMA® device on spatiotemporal gait parameters. Participants (n=50) were randomly assigned to FTST or SMA. Subjects in both groups received training 3 times per week for 6-8 weeks, for a maximum of 18 training sessions. The GAITRite® system was used to collect data on subjects' spatiotemporal gait characteristics before training (baseline), at mid-training, post-training, and at a 3-month follow-up. After training, significant improvements in gait parameters were observed in both training groups compared to baseline, including an increase in velocity and cadence, a decrease in swing time on the impaired side, a decrease in double support time, an increase in stride length on impaired and non-impaired sides, and an increase in step length on impaired and non-impaired sides. No significant differences were observed between training groups, except that in the SMA group, step length on the impaired side increased significantly during self-selected walking speed trials and spatial asymmetry decreased significantly during fast-velocity walking trials. SMA and FTST interventions provided similar, significant improvements in spatiotemporal gait parameters; however, the SMA group showed additional improvements across more parameters at various time points. These results indicate that the SMA® device could be a useful therapeutic tool to improve spatiotemporal parameters and contribute to improved functional mobility in

  2. Slope Estimation during Normal Walking Using a Shank-Mounted Inertial Sensor

    Directory of Open Access Journals (Sweden)

    Juan C. Álvarez

    2012-08-01

    In this paper we propose an approach for the estimation of the slope of the walking surface during normal walking, using a body-worn sensor composed of a biaxial accelerometer and a uniaxial gyroscope attached to the shank. It builds upon a state-of-the-art technique that was successfully used to estimate walking velocity from walking stride data, but did not work when used to estimate the slope of the walking surface. As claimed by the authors, the reason was that it did not take into account the actual inclination of the shank of the stance leg at the beginning of the stride (mid-stance). In this paper, inspired by the biomechanical characteristics of human walking, we propose to solve this issue by using the accelerometer as a tilt sensor, assuming that at mid-stance it is only measuring the gravity acceleration. Results from a set of experiments involving several users walking at different inclinations on a treadmill confirm the feasibility of our approach. A statistical analysis of slope estimations shows, in the first instance, that the technique is capable of distinguishing the different slopes of the walking surface for every subject. It reports a global RMS error (per-unit difference between actual and estimated inclination of the walking surface) of 0.05 over all strides identified in the experiments, and this can be reduced to 0.03 with subject-specific calibration and post-processing procedures by means of averaging techniques.
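
The core idea — treating the accelerometer as a tilt sensor at mid-stance, when it measures only gravity — reduces to a two-argument arctangent; a minimal sketch in which the axis assignment is an assumption, not the paper's convention:

```python
import math

def shank_inclination_deg(a_axial, a_perp):
    """Shank tilt from a biaxial accelerometer at mid-stance.

    Assumes the shank is momentarily stationary so the sensor reads
    gravity only. `a_axial` is the reading along the shank axis and
    `a_perp` the reading perpendicular to it (any consistent units, since
    they cancel). Returns the inclination from vertical in degrees.
    """
    return math.degrees(math.atan2(a_perp, a_axial))

# A reading of pure gravity along the shank axis means a vertical shank.
upright = shank_inclination_deg(9.81, 0.0)
```

Using the ratio of the two axes (rather than a single axis and a fixed gravity constant) makes the estimate insensitive to accelerometer scale.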

  3. Fourier-based integration of quasi-periodic gait accelerations for drift-free displacement estimation using inertial sensors.

    Science.gov (United States)

    Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea

    2015-11-23

    In biomechanical studies Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed the Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken for reconstructing the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects who walked on a treadmill and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait and performing time integration (numerically and analytically). The widths of the limits of agreement were computed between each tested method and the OMCS reference method for each anatomical direction: Medio-Lateral (ML), VerTical (VT) and Antero-Posterior (AP). Using the proposed method, the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviations) of OMCS data and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. Fourier harmonic analysis was applied to model stride-by-stride linear accelerations.
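
    The analytic Fourier integration can be sketched as follows, assuming a single stride of gravity-free acceleration already transposed into the global frame (the quaternion-based orientation step is omitted). Dividing each Fourier coefficient by (iω)² integrates twice; dropping the DC term is what removes integration drift.

    ```python
    import numpy as np

    def displacement_from_periodic_accel(acc, fs):
        """Analytically double-integrate one stride of (quasi-)periodic
        acceleration via its Fourier series. The DC coefficient is set to
        zero, which is what makes the estimate drift-free. A sketch, not
        the authors' implementation."""
        n = len(acc)
        omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)  # rad/s per bin
        A = np.fft.fft(acc)
        D = np.zeros_like(A)
        nz = omega != 0
        D[nz] = A[nz] / (1j * omega[nz]) ** 2              # divide by (i*omega)^2
        return np.fft.ifft(D).real

    # Pure sinusoid a(t) = -w^2 sin(w t) should integrate to sin(w t)
    fs, f = 100.0, 1.0
    t = np.arange(0, 1.0, 1.0 / fs)
    w = 2 * np.pi * f
    acc = -(w ** 2) * np.sin(w * t)
    disp = displacement_from_periodic_accel(acc, fs)
    print(np.allclose(disp, np.sin(w * t), atol=1e-8))  # -> True
    ```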

  4. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length

    DEFF Research Database (Denmark)

    Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano

    2016-01-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s(-1) but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish...

  5. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    Directory of Open Access Journals (Sweden)

    Morten B. S. Svendsen

    2016-10-01

    Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
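
    The muscle-limited speed bound behind this kind of estimate can be sketched as a back-of-the-envelope calculation: a tail-beat period can be no shorter than two muscle twitches (one per side of the body), so maximum speed is bounded by stride length over twice the twitch contraction time. The numbers below are purely illustrative, not measurements from the study.

    ```python
    def max_speed(stride_length_m, twitch_time_s):
        """Upper-bound swimming speed from muscle twitch contraction time:
        U_max = stride length / (2 * twitch time), since the tail must beat
        once per side per cycle. Simplified sketch of the approach."""
        return stride_length_m / (2.0 * twitch_time_s)

    # Illustrative: 1.0 m stride, 60 ms twitch time
    print(round(max_speed(1.0, 0.06), 2))  # -> 8.33 (m/s)
    ```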

  6. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.

  7. Strides in Preservation of Malawi's Natural Stone

    Science.gov (United States)

    Kamanga, Tamara; Chisenga, Chikondi; Katonda, Vincent

    2017-04-01

    The geology of Malawi is broadly grouped into four main lithological units: the Basement Complex, the Karoo Supergroup, Tertiary to Quaternary sedimentary deposits, and the Chilwa Alkaline Province. The Basement Complex rocks cover much of the country and range in age from late Precambrian to early Paleozoic. They have been affected by three major phases of deformation and metamorphism: the Irumide, the Ubendian and the Pan-African. These rocks comprise gneisses, granulites and schists with associated mafic, ultramafic, syenite and granite rocks. The Karoo System sedimentary rocks range in age from Permian to lower Jurassic and are mainly restricted to two areas in the extreme north and extreme south of the country. Rocks of the Chilwa Alkaline Province - late Jurassic to Cretaceous in age, preceded by upper Karoo dolerite dyke swarms and basaltic lavas - have been intruded into the Basement Complex gneisses of southern Malawi. Malawi is endowed with different types of natural stone deposits, most of which remain unexploited and unexplored. Over twenty quarry operators supply quarry stone for road and building construction in Malawi. Hundreds of artisanal workers continue to supply aggregate stones within and on the outskirts of urban areas. Ornamental stones and granitic dimension stones are also quarried, but in insignificant volumes. In Northern Malawi, there are several granite deposits including the Nyika, which is the largest single outcrop occupying approximately 260.5 km2; Mtwalo Amazonite, an opaque to translucent bluish-green variety of microcline feldspar that occurs in alkali granites and pegmatite; and the Ilomba granite (sodalite) occurring in small areas within biotite, apatite, plagioclase and calcite. In the Center, there are the Dzalanyama granites and the Sani granites. In the South, there are the Mangochi granites. Dolerite and gabbroic rocks spread across the country, trading as black granites. Malawi is also endowed with many deposits of marble.
A variety of other igneous, metamorphic and sedimentary rocks are also used as dimension stones. Discovery and preservation of more natural stone deposits through research is essential in the country. Natural stone preservation has the potential not only to generate significant direct and indirect economic benefits for Malawi but also to preserve its heritage.

  8. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
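
    One common double-integration scheme from this literature can be sketched as follows. It is a generic sketch, not any specific pipeline benchmarked in the paper: per-stride acceleration (already gravity-free, in the global frame) is integrated twice, with linear velocity de-drifting that enforces zero velocity at both stride boundaries (a zero-velocity update).

    ```python
    import numpy as np

    def stride_displacement(acc_global, fs):
        """Double-integrate one stride of gravity-free acceleration in the
        global frame (shape: samples x 3). Zero velocity is assumed at both
        stride boundaries; linear drift removal enforces v(end) == 0."""
        dt = 1.0 / fs
        vel = np.cumsum(acc_global, axis=0) * dt
        # subtract a linearly growing share of the terminal velocity error
        drift = np.linspace(0.0, 1.0, len(vel))[:, None] * vel[-1]
        vel -= drift
        pos = np.cumsum(vel, axis=0) * dt
        return pos[-1]  # net 3D displacement over the stride

    # A stride with zero acceleration yields zero displacement
    print(np.allclose(stride_displacement(np.zeros((120, 3)), 120.0), 0.0))  # -> True
    ```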

  9. Estimating Utility

    DEFF Research Database (Denmark)

    Arndt, Channing; Simler, Kenneth R.

    2010-01-01

    A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.

  10. Evaluation of scale invariance in physiological signals by means of balanced estimation of diffusion entropy

    Science.gov (United States)

    Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong

    2012-11-01

    By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (˜0.8). The waking and REM stages have a significantly higher value of the average scaling exponent than that for light sleep stages (˜0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts have been made to keep conditions unchanged. The global behavior of a single physiological signal may lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (˜10²), while the original DE method sometimes may underestimate scale-invariance exponents or even fail in detecting scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
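
    The plain DE estimator (without the balanced correction the paper proposes) can be sketched as follows: at each time scale the diffusion variable is formed from overlapping sums of the series, its differential entropy is estimated from a histogram, and the scaling exponent is the slope of entropy versus log scale.

    ```python
    import numpy as np

    def diffusion_entropy(xi, window_sizes, bins=50):
        """Plain diffusion entropy: for each scale t, form overlapping sums
        of t consecutive increments (the diffusion variable), estimate its
        differential entropy from a histogram, and return S(t). For
        scale-invariant series S(t) = A + delta*ln(t). Sketch of the
        original DE method, without the BEDE bias correction."""
        S = []
        for t in window_sizes:
            x = np.array([xi[i:i + t].sum() for i in range(len(xi) - t + 1)])
            dens, edges = np.histogram(x, bins=bins, density=True)
            dx = np.diff(edges)
            keep = dens > 0
            S.append(-np.sum(dens[keep] * np.log(dens[keep]) * dx[keep]))
        return np.array(S)

    rng = np.random.default_rng(0)
    ts = np.array([4, 8, 16, 32, 64])
    S = diffusion_entropy(rng.standard_normal(20000), ts)
    delta = np.polyfit(np.log(ts), S, 1)[0]
    print(round(delta, 2))  # ~0.5 for uncorrelated Gaussian noise
    ```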

  11. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  12. Toward ambulatory balance assessment: Estimating variability and stability from short bouts of gait

    NARCIS (Netherlands)

    van Schooten, K.S.; Rispens, S.M.; Elders, P.J.M.; van Dieen, J.H.; Pijnappels, M.A.G.M.

    2014-01-01

    Stride-to-stride variability and local dynamic stability of gait kinematics are promising measures to identify individuals at increased risk of falling. This study aimed to explore the feasibility of using these metrics in clinical practice and ambulatory assessment, where only a small number of strides is available.

  13. Hope and major strides for genetic diseases of the eye

    Indian Academy of Sciences (India)

    2009-12-31

    Dec 31, 2009 ... genetic etiology of inherited eye diseases and their underlying pathophysiology in the ... recent advances in the field of ophthalmic genetics. There have been .... sible or unlikely to be developed in the near future. Many of.

  14. Unravelling Copenhagen's stride into the Anthropocene using lake sediments

    Science.gov (United States)

    Schreiber, Norman; Andersen, Thorbjørn J.; Frei, Robert; Ilsøe, Peter; Louchouarn, Patrick; Andersen, Kenneth; Funder, Svend; Rasmussen, Peter; Andresen, Camilla S.; Odgaard, Bent; Kjær, Kurt H.

    2014-05-01

    Industrialization, including the effects of expanding energy consumption and metallurgy production as well as population growth and demographic pressure, has increased heavy-metal pollution loads progressively since the Industrial Revolution. Especially the burning of fossil fuels mobilizes heavy metals like lead and zinc on a large scale. By wet and dry deposition, these loads end up in the aquatic environment, where sediments serve as sinks for these contaminants. In this study, we examine the pollution history of Copenhagen, Denmark. A sediment core was retrieved from the lake in the Botanical Gardens in central Copenhagen using a rod-operated piston corer. The water body used to be part of the old town's defence-wall system and was turned into a lake by terrain levelling in the mid 17th century. After initial X-ray fluorescence core scanning, element concentrations were determined using emission spectroscopy. The onset of gyttja accumulation in the lake is assumed to start immediately after the construction of the fortification in approximately AD 1645. An age model representing the last approximately 135 years for the uppermost 60 cm was established by lead-210 and cesium-137 dating. The older part was dated via recognition of markedly increased levels of levoglucosan, which are interpreted to be linked with recorded fires in Copenhagen. Similarly, two distinct layers interstratify the sediment column and mark pronounced increases of minerogenic material inflow which can be linked to known historical events. Significant pollution load increases are evident from the 1700s along with urban growth and extended combustion of carbon-based fuels such as wood and coal. However, a more pronounced increase in lead and zinc deposition only begins by the mid-19th century. Maxima for the latter two pollutants are reached in the late 1970s, followed by a reduction of emissions in accordance with stricter environmental regulations.
Here, especially the phasing-out of tetraethyl lead from gasoline and increased cleaning of the emissions from local power plants have had an effect. Also a change of fuel from coal to natural gas in the power plants has been very important. The present study shows how a detailed record of past levels of air pollution in large cities may be achieved by analyzing the sediment accumulated in urban lakes provided that a reliable chronology can be established.

  15. The Worker Rights Consortium Makes Strides toward Legitimacy.

    Science.gov (United States)

    Van der Werf, Martin

    2000-01-01

    Discusses the rapid growth of the Workers Rights Consortium, a student-originated group with 44 member institutions which opposes sweatshop labor conditions, especially in the apparel industry. Notes disagreements about the number of administrators on the board of directors and about the role of industry representatives. Compares this group with the…

  16. Striding Toward Social Justice: The Ecologic Milieu of Physical Activity

    Science.gov (United States)

    Lee, Rebecca E.; Cubbin, Catherine

    2009-01-01

    Disparities in physical activity should be investigated in light of social justice principles. This manuscript critically evaluates evidence and trends in disparities research within an ecologic framework, focusing on multi-level factors such as neighborhood and racial discrimination that influence physical activity. Discussion focuses on strategies for integrating social justice into physical activity promotion and intervention programming within an ecologic framework. PMID:19098519

  17. TAKING MULTI-MODE RESEARCH STRIDES DURING THE INNOVATION OF A CRICKET COMPETITIVE INTELLIGENCE FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Liandi van den Berg

    2017-01-01

    Full Text Available This paper describes the multi-mode research methodological steps during the development of a competitive intelligence (CI) framework for cricket coaches. Currently no framework exists to guide coaches to gain a competitive advantage through competitor analysis. A systematic literature review (SLR) ascertained the similarities and differences between the business CI and the sport coaching and performance analysis (PA) domains. The qualitative document analysis performed in ATLAS.TI™ rendered a reputable inter- and intra-document analysis validity, with κ = 0.79 and 0.78 respectively. The document analysis contributed towards the compilation of a semi-structured interview schedule to investigate the business-related CI process occurrence within the sport coaching context. The interview schedule was finalised after interviews with university peers provided input on the proposed schedule. Thereafter, data collection entailed semi-structured interviews with high-level cricket coaches and support staff on CI activities in their coaching practices. The coach interviews were transcribed verbatim and analysed with ATLAS.TI™. A codebook of the codes created in the analysis was compiled. The researcher established the inter- and intra-reliability with a Cohen's kappa of 0.8. A constant comparative method of data analysis guided the analysis, which was performed until data saturation was reached. The 4338 interview code incidences were quantitized, i.e. converted from qualitative to numerical data. A coefficient cluster analysis on all indices, with the linkage distance set at four, was performed, from which five themes emerged. The 71 codes were conceptually concatenated into 28 categories, linked to the five different themes. The multi-method research design rendered a conceptual and applicable CI framework for cricket coaches.

  18. Variable Kernel Density Estimation

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
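
    The first adaptation strategy mentioned above, varying the window width with the point of estimation, can be sketched as a "balloon" estimator in which the Gaussian kernel width at each evaluation point is the distance to its k-th nearest sample. This is a generic illustration of the idea, not the authors' specific estimators.

    ```python
    import numpy as np

    def balloon_kde(points, data, k=100):
        """Variable-kernel ('balloon') density estimate in one dimension:
        the bandwidth at each evaluation point is the distance to its k-th
        nearest sample, so the window shrinks where data are dense and
        widens where they are sparse."""
        points = np.atleast_1d(points).astype(float)
        out = np.empty_like(points)
        for i, x in enumerate(points):
            dist = np.sort(np.abs(data - x))
            h = max(dist[k - 1], 1e-12)                 # k-NN bandwidth
            kern = np.exp(-0.5 * ((data - x) / h) ** 2)
            out[i] = kern.mean() / (h * np.sqrt(2 * np.pi))
        return out

    rng = np.random.default_rng(1)
    sample = rng.standard_normal(2000)
    dens = balloon_kde(0.0, sample)[0]
    print(dens)  # near the N(0,1) mode density, ~0.40
    ```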

  19. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  20. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or Hinfinity suboptimal estimation. By

  1. Categorical working memory representations are used in delayed estimation of continuous colors.

    Science.gov (United States)

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2017-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
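
    The standard two-component model that the paper extends can be sketched as a mixture of a von Mises distribution (items in memory, centered on the studied color) and a uniform distribution (random guesses). The simulation parameters below are illustrative assumptions; the paper's contribution, a third categorical component, is omitted from this sketch.

    ```python
    import numpy as np

    def mixture_loglik(errors, p_mem, kappa):
        """Log-likelihood of response errors (radians in [-pi, pi]) under
        the two-component model: with probability p_mem the error is von
        Mises around 0 with precision kappa; otherwise it is a uniform
        random guess. The categorical component is omitted here."""
        vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
        return np.sum(np.log(p_mem * vm + (1 - p_mem) / (2 * np.pi)))

    rng = np.random.default_rng(2)
    n = 1000
    in_memory = rng.random(n) < 0.7                  # simulate 70% remembered
    errs = np.where(in_memory,
                    rng.vonmises(0.0, 8.0, n),       # precise memory
                    rng.uniform(-np.pi, np.pi, n))   # random guesses
    # coarse grid fit of the memory proportion, precision held fixed
    grid = np.linspace(0.05, 0.95, 19)
    best = max(grid, key=lambda p: mixture_loglik(errs, p, 8.0))
    print(best)  # recovers roughly 0.7
    ```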

  2. Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors

    Science.gov (United States)

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2016-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548

  3. A neural flow estimator

    DEFF Research Database (Denmark)

    Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik

    1995-01-01

    This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...

  4. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.

  5. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standard data; (2) estimate random error variances from data such as replicate measurement data; (3) perform a simple analysis of variances to characterize the measurement error structure when biases vary over time
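
    Point (2) above, estimating a random error variance from replicate measurements, can be sketched for the simplest case of duplicate measurements: in the within-pair difference the item's true value cancels, leaving twice the random error variance. The simulated data below are illustrative.

    ```python
    import numpy as np

    def random_error_variance(replicates):
        """Estimate the random measurement error variance from duplicate
        measurements of the same items (shape: items x 2). In the
        within-pair difference d_i = x_i1 - x_i2 the true value cancels,
        so Var_random ~= mean(d_i^2) / 2."""
        d = replicates[:, 0] - replicates[:, 1]
        return np.mean(d ** 2) / 2.0

    rng = np.random.default_rng(3)
    true_vals = rng.uniform(50.0, 100.0, 5000)[:, None]
    pairs = true_vals + rng.normal(0.0, 2.0, size=(5000, 2))  # sigma^2 = 4
    print(random_error_variance(pairs))  # close to 4.0
    ```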

  6. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  7. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  8. Cost function estimation

    DEFF Research Database (Denmark)

    Andersen, C K; Andersen, K; Kragh-Sørensen, P

    2000-01-01

    on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...
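
    The two-part structure described above combines into a single prediction as a product: the probability of incurring any costs (from the logistic part) times the expected cost given that costs are positive. A toy calculation with purely illustrative numbers:

    ```python
    def expected_cost(p_any_cost, mean_cost_if_positive):
        """Two-part model prediction: part one gives the probability of
        incurring any costs (e.g. from a logistic regression), part two the
        mean cost conditional on costs being positive; the unconditional
        expected cost is their product."""
        return p_any_cost * mean_cost_if_positive

    # e.g. 40% chance of any health care use, averaging 2500 when it occurs
    print(expected_cost(0.4, 2500.0))  # -> 1000.0
    ```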

  9. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  10. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.; Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  11. Coherence in quantum estimation

    Science.gov (United States)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  12. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
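
    The calibration check described above can be sketched as a simple hit-rate computation over interval estimates: well-calibrated X% intervals should contain the truth about X% of the time. The intervals below are made-up examples (the true year of the first hot-air balloon flight, 1783, is used as one ground truth).

    ```python
    def hit_rate(intervals, truths):
        """Fraction of (low, high) interval estimates containing the true
        value. Judges giving X% intervals are overconfident when this
        falls well below X/100."""
        hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
        return hits / len(truths)

    # three 90%-confidence intervals, only the second captures the truth
    rate = hit_rate([(1700, 1750), (1900, 1950), (10, 20)], [1783, 1920, 35])
    print(rate)  # -> 0.3333..., far below the stated 0.90
    ```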

  13. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
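
    The minimum-variance (Capon) idea behind the BPC method can be sketched as follows; the snapshot length, diagonal loading, and single-tone test signal are illustrative assumptions, not the paper's slow-time/depth-averaged configuration.

    ```python
    import numpy as np

    def capon_psd(x, freqs, m=16, fs=1.0):
        """Capon (minimum variance) spectral estimate from a short record:
        P(f) = 1 / (a^H R^-1 a), with a the steering vector and R the
        m x m sample covariance built from overlapping snapshots."""
        n = len(x)
        snaps = np.array([x[i:i + m] for i in range(n - m + 1)])
        R = snaps.T @ snaps.conj() / len(snaps)        # sum of s s^H
        R += 1e-6 * np.trace(R).real / m * np.eye(m)   # diagonal loading
        Rinv = np.linalg.inv(R)
        psd = []
        for f in freqs:
            a = np.exp(2j * np.pi * f / fs * np.arange(m))
            psd.append(1.0 / np.real(a.conj() @ Rinv @ a))
        return np.array(psd)

    fs = 100.0
    t = np.arange(256) / fs
    x = np.exp(2j * np.pi * 20.0 * t)      # single complex tone at 20 Hz
    P = capon_psd(x, np.array([5.0, 20.0, 40.0]), m=16, fs=fs)
    print(np.argmax(P))  # -> 1 (peak at 20 Hz)
    ```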

  14. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  15. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
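    The two-step structure described, first coefficient estimates followed by a sparse decomposition in a dictionary matrix, can be illustrated with a generic sparse solver. The sketch below uses orthogonal matching pursuit over a partial-DFT delay dictionary; the dictionary, dimensions, and signals are illustrative assumptions, not the patented method.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit least squares on the support."""
    residual = y.astype(complex)
    support = []
    s = np.zeros(D.shape[1], dtype=complex)
    for _ in range(n_nonzero):
        k = int(np.argmax(np.abs(D.conj().T @ residual)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    s[support] = sol
    return s

rng = np.random.default_rng(1)
n_pilots, n_atoms = 32, 64
# Hypothetical dictionary of delay atoms (partial DFT matrix).
D = np.exp(-2j * np.pi * np.outer(np.arange(n_pilots), np.arange(n_atoms)) / n_atoms)

# A sparse channel: two active delays.
true_s = np.zeros(n_atoms, dtype=complex)
true_s[3], true_s[17] = 1.0, 0.5j
# Step 1 stand-in: noisy first coefficient estimates at the pilot positions.
h_first = D @ true_s + 0.01 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
# Step 2: sparse decomposition of the first-step estimates.
s_hat = omp(D, h_first, n_nonzero=2)
```

    The second step recovers the two active delay taps from the dense first-step estimates, which is the sense in which the sparse decomposition refines the pilot-based estimate.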

  16. Radiation risk estimation

    International Nuclear Information System (INIS)

    Schull, W.J.; Texas Univ., Houston, TX

    1992-01-01

    Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)

  17. Estimation of Jump Tails

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Todorov, Victor

    We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.

  18. Bridged Race Population Estimates

    Data.gov (United States)

    U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...

  19. Estimation of measurement variances

    International Nuclear Information System (INIS)

    Jaech, J.L.

    1984-01-01

    The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented

  20. APLIKASI SPLINE ESTIMATOR TERBOBOT

    Directory of Open Access Journals (Sweden)

    I Nyoman Budiantara

    2001-01-01

    Full Text Available We consider the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a Weighted Least Square criterion. The solution of this optimization is a Weighted Spline Polynomial. Further, we give an application of the weighted spline estimator in nonparametric regression. Keywords: weighted spline, nonparametric regression, penalized least squares.
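    The weighted least squares step underlying the estimator is easy to make concrete. The sketch below fits a polynomial basis (a simple stand-in for the spline basis in the record) by inverse-variance weighting with known weights bj; the data and basis choice are illustrative assumptions.

```python
import numpy as np

# Model: Z_j = X(t_j) + e_j with Var(e_j) = s^2 / b_j, b_j > 0.
# Weighted least squares: weight each observation by b_j and solve the
# normal equations (B' W B) c = B' W z for the basis coefficients c.
rng = np.random.default_rng(2)
n = 200
t = np.linspace(0.0, 1.0, n)
b = np.where(t < 0.5, 1.0, 4.0)            # known precision weights b_j
z = 1.0 + 2.0 * t - 3.0 * t**2 + 0.1 * rng.standard_normal(n) / np.sqrt(b)

B = np.vander(t, 3, increasing=True)        # basis functions: 1, t, t^2
W = np.diag(b)
coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ z)
```

    Observations with larger bj (smaller error variance) pull the fitted curve more strongly, which is the whole point of the weighting.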

  1. Fractional cointegration rank estimation

    DEFF Research Database (Denmark)

    Lasak, Katarzyna; Velasco, Carlos

    In a first step we estimate the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...

  2. Estimation of spectral kurtosis

    Science.gov (United States)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, since bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it provides information additional to that given by the power spectral density (PSD). This paper aims to explore the estimation of spectral kurtosis using the short-time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the PSD; it falls in the class of model-free, plug-in estimators. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is analytically obtained and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis extends kurtosis to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the Fourier-transform relation between the power spectrum and the autocovariance. The Fourier transform is the main tool for estimation in the frequency domain, and the power spectral density is estimated through the periodogram. In this paper, the short-time Fourier transform of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
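    The spectrogram-based estimator described can be sketched directly: compute short-time spectra, then form the fourth-to-second moment ratio per frequency bin. The simulated "fault" below (a periodic ring-down exciting a resonance near 3 kHz, plus Gaussian background) is an illustrative assumption, not the paper's simulation.

```python
import numpy as np

def spectral_kurtosis(x, nperseg=128):
    """Spectral kurtosis from the short-time Fourier transform:
    SK(f) = E|X(t,f)|^4 / (E|X(t,f)|^2)^2 - 2, approximately 0 for
    stationary Gaussian noise and large where impulsive content dominates."""
    window = np.hanning(nperseg)
    frames = [window * x[i:i + nperseg]
              for i in range(0, len(x) - nperseg + 1, nperseg // 2)]
    X = np.fft.rfft(frames, axis=1)
    p2 = np.mean(np.abs(X)**2, axis=0)
    p4 = np.mean(np.abs(X)**4, axis=0)
    return p4 / p2**2 - 2.0

rng = np.random.default_rng(3)
fs, n = 8192, 65536
noise = rng.standard_normal(n)                 # healthy: Gaussian background
impulses = np.zeros(n)
impulses[::1024] = 1.0                         # periodic impacts (fault period)
k = np.arange(64)
burst = np.exp(-k / 8.0) * np.sin(2 * np.pi * 3000.0 * k / fs)  # ~3 kHz ring-down
x_fault = noise + 20.0 * np.convolve(impulses, burst)[:n]
sk = spectral_kurtosis(x_fault)
freqs = np.fft.rfftfreq(128, d=1.0 / fs)       # bin spacing fs/128 = 64 Hz
```

    SK stays near zero in noise-only bands and rises sharply near the excited resonance, which is how it flags the frequency band carrying the fault signature.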

  3. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  4. Ranking as parameter estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    2009-01-01

    Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf

  5. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  6. Single snapshot DOA estimation

    Science.gov (United States)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
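    For a single source, the deterministic maximum likelihood estimator recommended above reduces to maximizing the beamformer output over angle, which works even from one snapshot. The sketch below assumes a half-wavelength uniform linear array and a grid search; array size, SNR, and grid are illustrative assumptions.

```python
import numpy as np

def dml_single_source(y, n_grid=1801):
    """Deterministic ML DOA for one source and a single snapshot on a
    half-wavelength ULA: maximize |a(theta)^H y|^2 over an angle grid."""
    M = y.size
    thetas = np.linspace(-90.0, 90.0, n_grid)
    best, best_theta = -np.inf, None
    for theta in thetas:
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(theta)))
        score = np.abs(a.conj() @ y)**2
        if score > best:
            best, best_theta = score, theta
    return best_theta

rng = np.random.default_rng(4)
M, true_doa = 16, 20.0
a_true = np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(true_doa)))
y = a_true + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # one snapshot
theta_hat = dml_single_source(y)
```

    With multiple sources the DML criterion becomes a joint search over all angles, which is what makes it expensive but robust in the hard single-snapshot scenarios the record studies.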

  7. Thermodynamic estimation: Ionic materials

    International Nuclear Information System (INIS)

    Glasser, Leslie

    2013-01-01

    Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy

  8. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A. [VTT Energy, Espoo (Finland)

    1996-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  9. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  10. Digital Quantum Estimation

    Science.gov (United States)

    Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo

    2017-11-01

    Quantum metrology calculates the ultimate precision of all estimation strategies, measuring what is their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.

  11. Distribution load estimation - DLE

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A [VTT Energy, Espoo (Finland)

    1997-12-31

    The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems

  12. Limb swinging in elephants and giraffes and implications for the reconstruction of limb movements and speed estimates in large dinosaurs

    Directory of Open Access Journals (Sweden)

    A. Christian

    1999-01-01

    Full Text Available Speeds of walking dinosaurs that left fossil trackways have been estimated using the stride length times the natural pendulum frequency of the limbs. In a detailed analysis of limb movements in walking Asian elephants and giraffes, however, distinct differences between actual limb movements and the limb movements predicted using only gravity as the driving force were observed. Additionally, stride frequency was highly variable. Swing time was fairly constant but, especially at high walking speeds, much shorter than half the natural pendulum period. An analysis of hip and shoulder movements during walking showed that limb swinging was influenced by accelerations of the hip and shoulder joints, especially at high walking speeds. These results suggest an economical fast walking mechanism that could have been utilised by large dinosaurs to increase maximum speeds of locomotion. These findings throw new light on the dynamics of large vertebrates and can be used to improve speed estimates in large dinosaurs.
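    The classic trackway estimate criticized above is a one-line calculation: stride length times the limb's natural pendulum frequency. The sketch below models the limb as a uniform rod swinging about the hip; the rod model and the sauropod-scale numbers are illustrative assumptions, and the record's point is that real animals swing their limbs faster than this gravity-only prediction.

```python
import math

def natural_pendulum_frequency(limb_length_m):
    """Gravity-driven swing frequency of a limb modelled as a uniform rod
    pivoting about one end: T = 2*pi*sqrt(2L / (3g))."""
    g = 9.81
    period = 2.0 * math.pi * math.sqrt(2.0 * limb_length_m / (3.0 * g))
    return 1.0 / period

def walking_speed_estimate(stride_length_m, limb_length_m):
    # Classic trackway formula: speed = stride length x stride frequency,
    # with stride frequency taken as the natural pendulum frequency.
    return stride_length_m * natural_pendulum_frequency(limb_length_m)

# Hypothetical trackway values: 3 m limb length, 4 m stride length.
v = walking_speed_estimate(4.0, 3.0)   # modest walking speed, in m/s
```

    Because driven limb swinging allows stride frequencies above the pendulum frequency, estimates like this one are lower bounds for the fast-walking mechanism the study proposes.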

  13. Estimating Delays In ASIC's

    Science.gov (United States)

    Burke, Gary; Nesheiwat, Jeffrey; Su, Ling

    1994-01-01

    Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). A design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.

  14. Organizational flexibility estimation

    OpenAIRE

    Komarynets, Sofia

    2013-01-01

    With the help of parametric estimation, an evaluation scale for organizational flexibility and its parameters was formed. Definite degrees of organizational flexibility and its parameters were determined for enterprises of the Lviv region. Grouping of the enterprises under the existing scale was carried out. Specific recommendations to correct the enterprises' behaviour were given.

  15. On Functional Calculus Estimates

    NARCIS (Netherlands)

    Schwenninger, F.L.

    2015-01-01

    This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm

  16. Estimation of vector velocity

    DEFF Research Database (Denmark)

    2000-01-01

    Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...

  17. Quantifying IT estimation risks

    NARCIS (Netherlands)

    Kulk, G.P.; Peters, R.J.; Verhoef, C.

    2009-01-01

    A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be

  18. Numerical Estimation in Preschoolers

    Science.gov (United States)

    Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco

    2010-01-01

    Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…

  19. Estimating Gender Wage Gaps

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  20. Fast fundamental frequency estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2017-01-01

    Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
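    The harmonic model described, a weighted sum of sinusoids at integer multiples of the fundamental, suggests a simple estimator: sum the spectrum's magnitude at each candidate fundamental's harmonics and pick the maximizer. This harmonic-summation sketch is a generic illustration of the model, not the fast algorithm of the record.

```python
import numpy as np

def f0_harmonic_summation(x, fs, n_harmonics=3, fmin=50.0, fmax=400.0):
    """Estimate the fundamental frequency by summing the FFT magnitude at
    integer multiples of each candidate f0 (harmonic summation)."""
    nfft = 1 << 16
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)), nfft))
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    candidates = freqs[(freqs >= fmin) & (freqs <= fmax)]
    bin_hz = fs / nfft
    def score(f0):
        idx = (np.arange(1, n_harmonics + 1) * f0 / bin_hz).astype(int)
        return spec[idx].sum()
    scores = np.array([score(f0) for f0 in candidates])
    return candidates[np.argmax(scores)]

fs = 8000.0
t = np.arange(int(0.1 * fs)) / fs
# Periodic test signal: 110 Hz fundamental plus two harmonics.
x = (np.sin(2 * np.pi * 110.0 * t)
     + 0.5 * np.sin(2 * np.pi * 220.0 * t)
     + 0.3 * np.sin(2 * np.pi * 330.0 * t))
f0 = f0_harmonic_summation(x, fs)
```

    Summing over several harmonics is what suppresses the octave errors a single-peak picker would make (a candidate at half the true f0 only matches one of the harmonics).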

  1. On Gnostical Estimates

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2017-01-01

    Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707

  2. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such an analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates judged most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10

  3. Histogram Estimators of Bivariate Densities

    National Research Council Canada - National Science Library

    Husemann, Joyce A

    1986-01-01

    One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
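    The fixed-interval histogram estimator that this report takes as its baseline is simple to state: equal-width bins with counts normalized so the estimate integrates to one. A minimal NumPy sketch, with synthetic normal data as an illustrative assumption:

```python
import numpy as np

def histogram_density(data, n_bins):
    """Fixed-interval histogram estimator of a univariate density:
    equal-width bins, counts normalized to integrate to one."""
    edges = np.linspace(data.min(), data.max(), n_bins + 1)
    counts, _ = np.histogram(data, bins=edges)
    width = edges[1] - edges[0]
    heights = counts / (len(data) * width)   # density, not raw counts
    return edges, heights

rng = np.random.default_rng(5)
data = rng.normal(0.0, 1.0, 10000)
edges, heights = histogram_density(data, 40)
area = np.sum(heights * np.diff(edges))      # total probability mass
```

    The variable-interval alternative the report favors instead chooses bin edges so each bin holds roughly the same number of observations, adapting the bin width to the local density.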

  4. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  5. Distribution load estimation (DLE)

    Energy Technology Data Exchange (ETDEWEB)

    Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)

    1998-08-01

    The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented

  6. Estimating ISABELLE shielding requirements

    International Nuclear Information System (INIS)

    Stevens, A.J.; Thorndike, A.M.

    1976-01-01

    Estimates were made of the shielding thicknesses required at various points around the ISABELLE ring. Both hadron and muon requirements are considered. Radiation levels at the outside of the shield and at the BNL site boundary are kept at or below 1000 mrem per year and 5 mrem/year respectively. Muon requirements are based on the Wang formula for pion spectra, and the hadron requirements on the hadron cascade program CYLKAZ of Ranft. A muon shield thickness of 77 meters of sand is indicated outside the ring in one area, and hadron shields equivalent to from 2.7 to 5.6 meters in thickness of sand above the ring. The suggested safety allowance would increase these values to 86 meters and 4.0 to 7.2 meters respectively. There are many uncertainties in such estimates, but these last figures are considered to be rather conservative

  7. Variance Function Estimation. Revision.

    Science.gov (United States)

    1987-03-01


  8. Estimating Risk Parameters

    OpenAIRE

    Aswath Damodaran

    1999-01-01

    Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...

  9. Estimating Venezuela's Latent Inflation

    OpenAIRE

    Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo

    2011-01-01

    Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...

  10. Chernobyl source term estimation

    International Nuclear Information System (INIS)

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1990-09-01

    The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and duration of peak concentrations measured over Europe, Japan, Kuwait, and the US with the model-predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the 131I and 90Sr released by the Chernobyl accident was only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs

  11. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black, Derman, and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  12. Estimation of inspection effort

    International Nuclear Information System (INIS)

    Mullen, M.F.; Wincek, M.A.

    1979-06-01

    An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
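    The two models summarised above lend themselves to a compact numerical sketch. The component names, hours, and population figures below are purely illustrative (not IAEA figures); the detection probability uses the standard hypergeometric attribute-sampling formula, which is one common way to formalise such an effectiveness model:

```python
from math import comb

def detection_probability(N, d, n):
    """Probability that a random sample of n items, drawn from a population
    of N items containing d defective (diverted) ones, contains at least
    one defect: P = 1 - C(N-d, n) / C(N, n)."""
    if n > N - d:          # sample larger than the non-defective pool:
        return 1.0         # detection is guaranteed
    return 1 - comb(N - d, n) / comb(N, n)

def total_effort(component_hours):
    """Effort model: total inspection effort is the sum of component efforts."""
    return sum(component_hours.values())

# Hypothetical inspection plan for a fuel fabrication facility.
effort = total_effort({"records audit": 16, "item counting": 24, "NDA measurements": 40})
p = detection_probability(N=500, d=10, n=50)
print(effort)          # 80 person-hours
print(round(p, 3))
```

With 10 diverted items among 500 and a sample of 50, the chance of catching at least one is roughly two in three, which illustrates how sampling plans trade inspection effort against detection probability.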

  13. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample version and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  14. Estimating directional epistasis

    Science.gov (United States)

    Le Rouzic, Arnaud

    2014-01-01

    Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828

  15. Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...

    African Journals Online (AJOL)

    Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...

  16. Estimation of Lung Ventilation

    Science.gov (United States)

    Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.

    Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or, with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing them to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures for estimating regional ventilation from image registration of CT images, their comparison to the existing gold standard, and their application in radiation therapy.

  17. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    … either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still…

  18. Estimating NHL Scoring Rates

    OpenAIRE

    Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research

    2011-01-01

    The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
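    The core of the model described above is Poisson scoring: goals arrive at a rate proportional to ice time. A minimal sketch of the closed-form maximum-likelihood rate estimate follows, using made-up box-score data; the paper's actual model additionally includes team, home-ice, and manpower effects, which are omitted here:

```python
import math

# Hypothetical box scores: (goals scored by one team, minutes of play).
games = [(3, 60.0), (2, 60.0), (5, 58.5), (1, 60.0)]

# Poisson MLE: if goals ~ Poisson(rate * exposure), the maximum-likelihood
# rate is simply total goals divided by total exposure.
total_goals = sum(g for g, _ in games)
total_minutes = sum(m for _, m in games)
rate_per_min = total_goals / total_minutes

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson(lam) count."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam_game = rate_per_min * 60.0        # expected goals over a 60-minute game
p_shutout = poisson_pmf(0, lam_game)  # chance this team is held scoreless
print(round(lam_game, 2), round(p_shutout, 3))
```

In the full model the rate would be a product of offensive, defensive, home-ice, and manpower factors rather than a single pooled constant.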

  19. Risk estimation and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, R A.D.

    1982-10-01

    Risk assessment involves subjectivity, which makes objective decision making difficult in the nuclear power debate. The author reviews the process and uncertainties of estimating risks as well as the potential for misinterpretation and misuse. Risk data from a variety of aspects cannot be summed because the significance of different risks is not comparable. A method for including political, social, moral, psychological, and economic factors, environmental impacts, catastrophes, and benefits in the evaluation process could involve a broad base of lay and technical consultants, who would explain and argue their evaluation positions. 15 references. (DCK)

  20. Estimating Gear Teeth Stiffness

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2013-01-01

    The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness: firstly, the boundary condition through the gear rim size included in the stiffness calculation; and secondly, the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is, however, found that the stiffness of an individual tooth can be expressed in a linear form, assuming that the contact length is constant.

  1. Mixtures Estimation and Applications

    CERN Document Server

    Mengersen, Kerrie; Titterington, Mike

    2011-01-01

    This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject

  2. Robust Wave Resource Estimation

    DEFF Research Database (Denmark)

    Lavelle, John; Kofoed, Jens Peter

    2013-01-01

    … density estimates of the PDF as a function both of Hm0 and Tp, and of Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data. An overview is given of the methods used to do this, and a method for identifying outliers of the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0;2 or of Hm0 and Tp for the Hanstholm site data are demonstrated. As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2.
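    For readers unfamiliar with the quantity Pw, a deep-water approximation commonly used in wave resource assessment shows how mean wave power per unit crest length is computed from Hm0 and a period parameter. The conversion Te ≈ 1.18 · T0;2 assumes a JONSWAP-like spectrum and is an assumption of this sketch, not a result from the paper:

```python
import math

def wave_power_deep_water(hm0, t02, rho=1025.0, g=9.81):
    """Deep-water estimate of wave power per unit crest length (W/m):
    Pw = rho * g^2 * Hm0^2 * Te / (64 * pi),
    where the energy period Te is approximated from T0;2 (spectrum-dependent;
    Te ≈ 1.18 * T0;2 is a common JONSWAP-based assumption)."""
    te = 1.18 * t02
    return rho * g**2 * hm0**2 * te / (64 * math.pi)

# Illustrative sea state: 2 m significant wave height, 5.5 s mean period.
p = wave_power_deep_water(hm0=2.0, t02=5.5)
print(round(p / 1000, 1))  # kW per metre of wave crest
```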

  3. Estimations of actual availability

    International Nuclear Information System (INIS)

    Molan, M.; Molan, G.

    2001-01-01

    Adaptation of the working environment (social, organizational, physical and psychological) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of actual human availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2,000) of workers, standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers the possibility of keeping an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping workers' availability and fitness for duty adequate. For each individual worker, estimation of the level of actual fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated. Evaluations of health status effects and ageing are possible on the individual level. (author)

  4. Comparison of variance estimators for metaanalysis of instrumental variable estimates

    NARCIS (Netherlands)

    Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two

  5. Introduction to variance estimation

    CERN Document Server

    Wolter, Kirk M

    2007-01-01

    We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...

  6. Estimating Discount Rates

    Directory of Open Access Journals (Sweden)

    Laurence Booth

    2015-04-01

    Full Text Available Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators haven’t passed on the full impact of low interest rates, so that consumers face higher prices than they should whereas pension regulators have done the opposite, and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average equity market required return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
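    The risk premium arithmetic described above is straightforward to reproduce. The sketch below applies the standard CAPM-style decomposition r = rf + β · MRP using the paper's headline numbers (a 4.0 per cent base long-term rate and a 9-10 per cent expected market return, hence an implied 5-6 per cent market risk premium); the figures are for illustration only:

```python
def required_return(risk_free, beta, market_risk_premium):
    """CAPM-style risk premium model: r = rf + beta * MRP."""
    return risk_free + beta * market_risk_premium

# beta = 1 gives the expected return on the market as a whole.
low  = required_return(risk_free=0.04, beta=1.0, market_risk_premium=0.05)
high = required_return(risk_free=0.04, beta=1.0, market_risk_premium=0.06)
print(round(low, 4), round(high, 4))
```

A regulated utility with a beta below 1 would be allowed a correspondingly lower return under the same model, which is the sense in which the paper argues current allowed returns are too high.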

  7. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  8. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  9. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
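    As context for the least squares methods the paper generalises, the classical Matheron method-of-moments variogram estimator can be sketched as follows. The data here are synthetic, and this is a standard building block rather than the paper's estimating-equation machinery:

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron method-of-moments estimator:
    gamma(h) = (1 / (2 N(h))) * sum over pairs with distance in bin h
               of (z(s_i) - z(s_j))^2."""
    # Pairwise distances and squared value differences via broadcasting.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    dists, sqdiff = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic spatially correlated field: smooth trend plus noise.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.standard_normal(200)
g = empirical_variogram(coords, values, np.array([0.0, 1.0, 2.0, 4.0]))
print(g.shape)
```

Least squares variogram fitting then minimises the discrepancy between such binned estimates and a parametric model; the paper's estimating equations include that procedure as a special case.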


  11. Improved Estimates of Thermodynamic Parameters

    Science.gov (United States)

    Lawson, D. D.

    1982-01-01

    Techniques were refined for estimating the heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, the heat of vaporization can be used to estimate the boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.

  12. State estimation in networked systems

    NARCIS (Netherlands)

    Sijs, J.

    2012-01-01

    This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well known method for state estimation is the Kalman
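    Although the abstract is truncated, the Kalman filter it refers to is standard. A minimal scalar sketch, assuming a random-walk process model and hypothetical noise variances, shows the predict/correct cycle that combines process-model predictions with sensor measurements:

```python
def kalman_1d(zs, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Minimal scalar Kalman filter for a random-walk process model.
    q: process noise variance, r: measurement noise variance."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                 # predict: process noise inflates uncertainty
        k = p / (p + r)           # Kalman gain weighs model vs. measurement
        x = x + k * (z - x)       # correct with the measurement residual
        p = (1 - k) * p           # posterior variance shrinks
        out.append(x)
    return out

# Noisy measurements of a constant true state near 5.0.
est = kalman_1d([5.2, 4.8, 5.1, 4.9, 5.05])
print(round(est[-1], 2))
```

Networked state estimation, the thesis topic, extends this cycle to the case where the measurements arrive over a communication network, possibly delayed or event-triggered.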

  13. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  14. Uveal melanoma: Estimating prognosis

    Directory of Open Access Journals (Sweden)

    Swathi Kaliki

    2015-01-01

    Full Text Available Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as tumors classified as Class II by gene expression profiling, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.

  15. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  16. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods is studied; then, estimators adjusted to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF based estimator.

  17. A new estimator for vector velocity estimation [medical ultrasonics

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2001-01-01

    A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation … be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained …

  18. Estimation of Water Quality

    International Nuclear Information System (INIS)

    Vetrinskaya, N.I.; Manasbayeva, A.B.

    1998-01-01

    Water has a particular ecological function and is an indicator of the general state of the biosphere. In this context, toxicological evaluation of water by biological testing methods is highly relevant. The distinctive feature of biological testing is that it integrally reflects all the properties of the examined environment as perceived by living organisms. Rapid, integral evaluation of the anthropogenic situation is the main aim of biological testing; if this evaluation deviates from the normal state, detailed analysis and identification of dangerous components can be conducted afterwards. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus Luteus, Candida crusei, Pseudomonas algaligenes) and the water plant elodea (Elodea canadensis Rich) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are among the first to be disrupted under extreme conditions of various kinds. Therefore, the ion permeability of elodea and micro-organism cells held in the examined water containing toxicants was used as the test function. Alteration of membrane permeability was estimated by measuring the electrical conductivity of electrolytes released from the cells of living objects into distilled water. The water toxicity index is the ratio of the electrical conductivity in the experiment to that in the control. Observations of the general state of plants incubated in toxic water were also made (the chronic experiment ran for 60 days). The plants were incubated in water samples taken from the gallery in 1996 and 1997, with incubation times of 1-10 days. The results showed that the ion permeability of elodea and micro-organism cells changed markedly under the influence of the radionuclides contained in the test water. Changes are taking place even in …

  19. WAYS HIERARCHY OF ACCOUNTING ESTIMATES

    Directory of Open Access Journals (Sweden)

    ŞERBAN CLAUDIU VALENTIN

    2015-03-01

    Full Text Available Starting, on the one hand, from the premise that an estimate is an approximate evaluation, and noting that the term is increasingly used in a variety of theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, in our case accounting estimates. On the other hand, the phrase "estimated value" implies a value obtained through an evaluation process whose magnitude is not exact but approximate, that is, close to the actual size. It therefore becomes necessary to delimit the hierarchical relationship between evaluation and estimate, while considering the context in which the evaluation activity is carried out at entity level.

  20. Spring Small Grains Area Estimation

    Science.gov (United States)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes the development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to analysis of four years of data from the United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysts.

  1. Parameter estimation in plasmonic QED

    Science.gov (United States)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that, although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  2. Quantity Estimation Of The Interactions

    International Nuclear Information System (INIS)

    Gorana, Agim; Malkaj, Partizan; Muda, Valbona

    2007-01-01

    In this paper we present some considerations about quantity estimation, regarding the range of interaction and the conservation laws in various types of interactions. Our estimations are made from both classical and quantum points of view and concern the interaction carriers, the radius, the range of influence, and the intensity of the interactions.

  3. CONDITIONS FOR EXACT CAVALIERI ESTIMATION

    Directory of Open Access Journals (Sweden)

    Mónica Tinajero-Bravo

    2014-03-01

    Full Text Available Exact Cavalieri estimation amounts to zero variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.
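    The Cavalieri estimator discussed above, systematic observations along a sampling axis with a uniform random start, can be sketched as follows; the integrand and spacing are illustrative:

```python
import math
import random

def cavalieri_estimate(f, a, b, T, u=None):
    """Cavalieri (systematic sampling) estimator of the integral of f over
    [a, b]: observe f every T units starting at a uniform random offset u in
    [0, T), and weight each observation by the spacing T. The estimator is
    unbiased; 'exact' Cavalieri sampling is the zero-variance case."""
    if u is None:
        u = random.uniform(0.0, T)
    x, total = a + u, 0.0
    while x < b:
        total += f(x)
        x += T
    return T * total

# Systematic sampling of sin on [0, pi]; the true integral is 2.
random.seed(1)
estimates = [cavalieri_estimate(math.sin, 0.0, math.pi, T=0.1) for _ in range(1000)]
mean_est = sum(estimates) / len(estimates)
print(round(mean_est, 2))  # close to the true value 2.0
```

For smooth integrands vanishing at both endpoints, as here, the variance of a single systematic estimate is already tiny; the paper's sufficient condition characterises when it is exactly zero.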

  4. Optimization of Barron density estimates

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; van der Meulen, E. C.

    2001-01-01

    Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001

  5. Stochastic Estimation via Polynomial Chaos

    Science.gov (United States)

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research... Period covered: 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
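
To make the idea concrete, here is a minimal polynomial chaos sketch (my example, not taken from the report): expand f(ξ) = exp(ξ) of a standard normal ξ in probabilists' Hermite polynomials via Gauss quadrature, and recover the mean and variance of f from the chaos coefficients.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Hermite chaos expansion of f(xi) = exp(xi), xi ~ N(0,1):
#   f = sum_k c_k He_k(xi),  with  c_k = E[f(xi) He_k(xi)] / k!
nodes, weights = He.hermegauss(40)          # Gauss quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)    # renormalize to the standard normal pdf

order = 8
coeffs = []
for k in range(order + 1):
    basis_k = He.hermeval(nodes, np.eye(order + 1)[k])   # He_k at the quadrature nodes
    coeffs.append(np.sum(weights * np.exp(nodes) * basis_k) / math.factorial(k))

mean = coeffs[0]                                         # = E[f] = e^0.5
var = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(mean, var)    # ~1.6487 and ~4.6708 (exact values: e^0.5 and e*(e-1))
```

The second-order statistics of the stochastic quantity are read off directly from the coefficients, which is the property the report is concerned with.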

  6. Bayesian estimates of linkage disequilibrium

    Directory of Open Access Journals (Sweden)

    Abad-Grau María M

    2007-06-01

    Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and for rare haplotypes. Results This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distance between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view of the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome-wide association studies.
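
For reference, the point estimate being corrected is straightforward to compute from phased haplotype counts. The sketch below gives the maximum likelihood estimate of Lewontin's D' for two biallelic SNPs; the counts are made up for illustration, and the paper's Bayesian correction is not implemented here.

```python
def d_prime(n_ab, n_aB, n_Ab, n_AB):
    """ML estimate of Lewontin's D' from phased two-SNP haplotype counts."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_AB = n_AB / n
    p_A = (n_AB + n_Ab) / n          # frequency of allele A at locus 1
    p_B = (n_AB + n_aB) / n          # frequency of allele B at locus 2
    D = p_AB - p_A * p_B
    if D >= 0:
        d_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        d_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    return D / d_max if d_max > 0 else 0.0

# Hypothetical counts of the four haplotypes ab, aB, Ab, AB
print(d_prime(30, 10, 10, 50))   # ~0.583
```

With small counts, D is noisy and D/d_max is pushed toward its extremes, which is exactly the bias the Bayesian prior is designed to shrink.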

  7. Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi

    2003-01-01

    On-line, real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H∞ estimator, an experiment in real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Some technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit, and the digital nonlinear H∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H∞ estimator with sampled data of the nuclear instrumentation signal for the power responses under various reactivity insertions. Good performance of the estimated reactivity was observed, with almost no delay relative to the true reactivity and sufficient accuracy, between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation with a data sampling period of 10 ms can certainly be realized. From the results of the experiment, it is concluded that the digital nonlinear H∞ reactivity estimator can be applied as an on-line, real-time reactivity meter for actual nuclear plants. (author)
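
The H∞ estimator itself is beyond a short sketch, but the underlying task can be illustrated with classical inverse point kinetics: simulate the power response to a known reactivity step with one delayed-neutron group, then invert the same discrete equations to recover the reactivity from the power signal alone. All parameter values below are illustrative, not VHTRC data.

```python
import numpy as np

beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay (1/s), generation time (s)
dt, steps = 1e-3, 5000
rho_true = 0.1 * beta                  # a 10-cent reactivity step (illustrative)

# Forward point kinetics, one delayed-neutron group, explicit Euler
n, C = 1.0, beta / (Lam * lam)         # start from equilibrium precursor concentration
power = [n]
for _ in range(steps):
    dn = ((rho_true - beta) / Lam) * n + lam * C
    dC = (beta / Lam) * n - lam * C
    n, C = n + dt * dn, C + dt * dC
    power.append(n)
power = np.array(power)

# Inverse kinetics: rebuild the precursors from the power history alone,
# then solve the same discrete equations for the reactivity at each step
C_hat = beta / (Lam * lam)
rho_hat = np.empty(steps)
for k in range(steps):
    dn = (power[k + 1] - power[k]) / dt
    rho_hat[k] = beta + Lam * (dn - lam * C_hat) / power[k]
    C_hat += dt * ((beta / Lam) * power[k] - lam * C_hat)

print(rho_hat[-1] / beta)              # ~0.1 dollars, i.e. the inserted 10-cent step
```

Because the inversion reuses the same discretization as the forward model, the reactivity is recovered essentially exactly; a real estimator must additionally cope with detector noise, which is where the H∞ formulation comes in.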

  8. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimation. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate, including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross-sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), was randomly split in two: a reference sample for obtaining age estimates, including a 95% prediction interval, according to TA; and a validation...

  9. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    This paper deals with the estimation of the specific connectivity of a stationary random set in IR^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason, unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally, an example from quantitative metallography (the specific connectivity of a population of sintered bronze particles) is given.

  10. Laser cost experience and estimation

    International Nuclear Information System (INIS)

    Shofner, F.M.; Hoglund, R.L.

    1977-01-01

    This report addresses the question of estimating the capital and operating costs of LIS (Laser Isotope Separation) lasers, which have performance requirements well beyond the state of the mature art. This question is seen from different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice on a timely basis. Accordingly, this report attempts to provide "ballpark" estimators for capital and operating costs, and useful design and operating information for lasers based on mature technology and their LIS analogs. It is written very basically and is intended to respond about equally to the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the current, mature, industrialized laser track record (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, the evolution of generalized estimating procedures for the capital and operating costs of new laser designs.

  11. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  12. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties and can serve as a competitive procedure, especially when the sample size is small and a well-conditioned estimator is required.
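
A minimal sketch of the idea, not the exact maximum likelihood solution derived in the paper: shrink the sample eigenvalues so that the ratio of largest to smallest cannot exceed a chosen bound κ_max. Here the small eigenvalues are simply floored; the function and variable names are mine.

```python
import numpy as np

def cond_regularized_cov(X, kappa_max=10.0):
    """Covariance estimate with condition number capped at kappa_max.
    Simplified eigenvalue flooring, not the paper's exact ML estimator."""
    S = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa_max      # smallest eigenvalue allowed
    w_reg = np.clip(w, floor, None)  # pull small/zero eigenvalues up to the floor
    return (V * w_reg) @ V.T

rng = np.random.default_rng(0)
X = rng.standard_normal((15, 40))    # "large p, small n": n=15 samples, p=40 variables
S_reg = cond_regularized_cov(X, kappa_max=10.0)
w = np.linalg.eigvalsh(S_reg)
print(w.max() / w.min())             # ~10: invertible and well-conditioned by construction
```

The raw sample covariance here has rank at most 14, so it is singular; the flooring makes it invertible with a guaranteed condition number, which is the property the paper obtains in a principled, likelihood-based way.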

  14. Radiation dose estimates for radiopharmaceuticals

    International Nuclear Information System (INIS)

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several respects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
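
The MIRD schema underlying these tables reduces, per target organ, to a sum over source organs of residence time times an S value: D(target) = Σ_h τ_h · S(target ← source h). The numbers below are invented placeholders purely to show the arithmetic, not actual S values or kinetics.

```python
# MIRD-style organ dose per unit administered activity:
#   D(target) = sum over source organs of residence_time (h) * S(target <- source)
# All values below are hypothetical placeholders, not real dosimetry data.
residence_times_h = {"liver": 1.2, "kidneys": 0.4, "remainder": 2.5}
s_values = {  # S(liver <- source), mGy per MBq-h (made up)
    "liver": 3.0e-2, "kidneys": 4.0e-4, "remainder": 1.0e-4,
}

dose_per_mbq = sum(residence_times_h[src] * s_values[src] for src in s_values)
administered_mbq = 200.0
print(dose_per_mbq * administered_mbq)  # total dose estimate to the target organ, mGy
```

The code MIRDOSE3 mentioned above automates exactly this sum over all source-target pairs of a chosen phantom.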

  15. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of the counterfactual arguments central to the interpretation of such estimates. We show that, if the data-generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by specifying only the predictors involved, not any particular model structure, so they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", inherits the properties of the statistical learning machine from which it is derived. PMID:24581306
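
The counterfactual effect-size idea can be sketched without any particular learner: estimate p(y=1 | a, x) nonparametrically (here by simple stratification over a binary covariate, standing in for a random forest probability machine), then average the difference between predictions with the exposure indicator toggled. The data-generating values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.integers(0, 2, n)                 # binary covariate
a = rng.integers(0, 2, n)                 # binary exposure of interest
logit = -0.5 + 1.0 * a + 0.5 * x          # true data-generating logistic model
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# "Probability machine" by stratification: nonparametric estimate of p(y=1 | a, x)
p_hat = {(ai, xi): y[(a == ai) & (x == xi)].mean() for ai in (0, 1) for xi in (0, 1)}

# Counterfactual effect size: average over subjects of p(1, x_i) - p(0, x_i)
effect = np.mean([p_hat[(1, xi)] - p_hat[(0, xi)] for xi in x])
print(effect)                             # ~0.24 (true averaged risk difference ~0.238)
```

The estimate is a risk difference rather than an odds ratio; no logistic form is assumed, which is the paper's point about avoiding mis-specification bias.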

  16. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
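
As a sketch of AIC-based model order selection (a simplified stand-in for the MOG comparison in the paper): fit 1-D Gaussian mixtures of increasing order by EM and keep the order that minimizes the AIC. The data and the deterministic initialization are my own choices; for well-separated clusters like these, the AIC minimum typically falls at the true number of modes, k = 2.

```python
import numpy as np

def gmm_em_1d(x, k, iters=300, var_floor=1e-3):
    """Fit a k-component 1-D Gaussian mixture by EM (deterministic quantile init);
    returns the log-likelihood and the fitted component means."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)          # E-step: responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)                                     # M-step updates
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, var_floor)
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum(), mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(8.0, 1.0, 150)])

aic = {}
for k in (1, 2, 3):
    ll, _ = gmm_em_1d(x, k)
    n_params = 3 * k - 1              # k means, k variances, k-1 free weights
    aic[k] = 2 * n_params - 2 * ll
print(min(aic, key=aic.get))          # number of modes chosen by the AIC minimum
```

The BM approach of the paper attacks the same model-order question from distribution boundaries instead of likelihood penalties.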

  17. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  18. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DoD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development, based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and k-nearest neighbor prediction model performance on the same data set.
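
Analogy-based estimation of this kind can be sketched in a few lines: normalize project features, find the k most similar completed projects, and predict effort from their mean. The feature set and all numbers below are invented for illustration, not NASA data.

```python
import numpy as np

# Hypothetical historical projects: (KSLOC, team size, reuse fraction) -> effort (person-months)
features = np.array([
    [ 10,  5, 0.2], [ 50, 12, 0.5], [120, 25, 0.1],
    [ 30,  8, 0.3], [ 80, 18, 0.4], [200, 40, 0.2],
])
effort = np.array([40, 180, 600, 100, 320, 1100])

def estimate_by_analogy(new_project, k=2):
    # Min-max normalize so no single feature dominates the distance
    lo, hi = features.min(axis=0), features.max(axis=0)
    scaled = (features - lo) / (hi - lo)
    q = (np.asarray(new_project) - lo) / (hi - lo)
    dist = np.linalg.norm(scaled - q, axis=1)
    nearest = np.argsort(dist)[:k]          # indices of the k closest analogies
    return effort[nearest].mean()

print(estimate_by_analogy([60, 15, 0.4]))   # mean effort of the two most similar projects
```

Clustering-based variants replace "k nearest projects" with "projects in the same cluster", but the prediction step is the same averaging idea.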

  19. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  1. Analytical estimates of structural behavior

    CERN Document Server

    Dym, Clive L

    2012-01-01

    Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys

  2. Phase estimation in optical interferometry

    CERN Document Server

    Rastogi, Pramod

    2014-01-01

    Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre

  3. Using convolutional neural networks to estimate time-of-flight from PET detector waveforms

    Science.gov (United States)

    Berg, Eric; Cherry, Simon R.

    2018-01-01

    Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS/s using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional
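
The two baseline pick-offs can be sketched on a synthetic pulse (shape and parameters are illustrative, not the experimental waveforms): leading edge discrimination interpolates the first threshold crossing, and digital constant fraction discrimination finds the zero crossing of a fraction of the pulse minus a delayed copy.

```python
import numpy as np

def pulse(t, t0=20.0, tau_r=2.0, tau_d=40.0):
    """Synthetic detector pulse starting at t0; times in ns, shape is illustrative."""
    s = np.clip(t - t0, 0.0, None)
    return np.exp(-s / tau_d) - np.exp(-s / tau_r)

def led_time(t, v, threshold):
    """Leading edge discrimination: interpolated time of the first threshold crossing."""
    i = int(np.argmax(v >= threshold))
    return t[i - 1] + (threshold - v[i - 1]) * (t[i] - t[i - 1]) / (v[i] - v[i - 1])

def cfd_time(t, v, frac=0.3, delay=8):
    """Digital CFD: zero crossing of frac*v(t) - v(t - delay); delay is in samples."""
    s = frac * v
    s[delay:] -= v[:-delay]
    i = int(np.argmax((s[:-1] > 0) & (s[1:] <= 0)))   # first +/- sign change
    return t[i] + s[i] * (t[i + 1] - t[i]) / (s[i] - s[i + 1])

fs = 10.0                                  # 10 GS/s digitizer -> 0.1 ns samples
t = np.arange(0.0, 200.0, 1.0 / fs)
true_dt = 0.35                             # ns: the arrival-time difference to recover
v1, v2 = pulse(t), pulse(t - true_dt)

thr = 0.1 * v1.max()
dt_led = led_time(t, v2, thr) - led_time(t, v1, thr)
dt_cfd = cfd_time(t, v2) - cfd_time(t, v1)
print(dt_led, dt_cfd)                      # both ~0.35 ns for these noiseless pulses
```

On noiseless identical pulses both pick-offs recover the shift; the CNN approach in the paper gains by exploiting the full waveform when noise and pulse-shape variation make single-point pick-offs suboptimal.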

  4. An Analytical Cost Estimation Procedure

    National Research Council Canada - National Science Library

    Jayachandran, Toke

    1999-01-01

    Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...

  5. Spectral unmixing: estimating partial abundances

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-01-01

    Spectral unmixing techniques are complicated by very similar spectral signatures. Iron-bearing oxide/hydroxide/sulfate minerals have similar spectral signatures. The study focuses on how estimates of abundances of spectrally similar iron-bearing oxide...
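
For two endmembers, fully constrained linear unmixing has a closed form: project the observed spectrum onto the line between the endmember spectra and clip the abundance to [0, 1]. The spectra below are synthetic stand-ins for similar mineral signatures, not real library spectra.

```python
import numpy as np

def unmix_two_endmembers(y, e1, e2):
    """Abundance a of e1 minimizing ||y - (a*e1 + (1-a)*e2)||, constrained to [0, 1]."""
    d = e1 - e2
    a = np.dot(y - e2, d) / np.dot(d, d)
    return float(np.clip(a, 0.0, 1.0))

bands = np.linspace(0.4, 2.5, 50)                        # wavelengths in um, illustrative
e1 = 0.3 + 0.2 * np.exp(-((bands - 0.9) / 0.15) ** 2)    # synthetic endmember spectrum
e2 = 0.3 + 0.2 * np.exp(-((bands - 1.0) / 0.15) ** 2)    # similar spectrum, band shifted

y = 0.7 * e1 + 0.3 * e2                                  # noiseless 70/30 mixture
print(unmix_two_endmembers(y, e1, e2))                   # ~0.7
```

The closer e1 and e2 are, the smaller the denominator ||e1 - e2||^2 and the more noise inflates the abundance estimate, which is exactly the difficulty with spectrally similar minerals.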

  6. 50th Percentile Rent Estimates

    Data.gov (United States)

    Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...

  7. LPS Catch and Effort Estimation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...

  8. Exploratory shaft liner corrosion estimate

    International Nuclear Information System (INIS)

    Duncan, D.R.

    1985-10-01

    An estimate of expected corrosion degradation during the 100-year design life of the Exploratory Shaft (ES) is presented. The basis for the estimate is a brief literature survey of corrosion data, in addition to data taken by the Basalt Waste Isolation Project. The scope of the study covers the expected corrosion environment of the ES and the corrosion modes of general corrosion, pitting and crevice corrosion, dissimilar metal corrosion, and environmentally assisted cracking. The expected internal and external environment of the shaft liner is described in detail and the estimated effects of each corrosion mode are given. The maximum amount of general corrosion degradation was estimated to be 70 mils at the exterior and 48 mils at the interior, at the shaft bottom. Corrosion at welds or mechanical joints could be significant, depending on design. After a final corrosion allowance has been determined by the project, it will be added to the design criteria. 10 refs., 6 figs., 5 tabs.

  9. Project Cost Estimation for Planning

    Science.gov (United States)

    2010-02-26

    For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...

  10. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we give statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  11. Estimating Emissions from Railway Traffic

    DEFF Research Database (Denmark)

    Jørgensen, Morten W.; Sorenson, Spencer C.

    1998-01-01

    Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...

  12. Travel time estimation using Bluetooth.

    Science.gov (United States)

    2015-06-01

    The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to : estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the : ...

  13. Estimating uncertainty in resolution tests

    CSIR Research Space (South Africa)

    Goncalves, DP

    2006-05-01

    Estimating resolution from the measured frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty.

  14. Estimating solar radiation in Ghana

    International Nuclear Information System (INIS)

    Anane-Fenin, K.

    1986-04-01

    The estimates of global radiation on a horizontal surface for 9 towns in Ghana, West Africa, are deduced from their sunshine data using two methods developed by Angstrom and Sabbagh. An appropriate regional parameter is determined with the first method and used to predict solar irradiation at all 9 stations with an accuracy better than 15%. Estimates of diffuse solar irradiation using the correlations of Page, Liu and Jordan, and three other authors are performed and the results examined. (author)
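
The Angstrom-type approach referred to here is a linear regression of the clearness index on relative sunshine duration, H/H0 = a + b·(S/S0); the data below are made up purely to show the fitting procedure, not Ghanaian measurements.

```python
import numpy as np

# Angstrom-type regression: H/H0 = a + b * (S/S0), where H is monthly-mean global
# radiation, H0 extraterrestrial radiation, S sunshine hours, S0 day length.
s_ratio = np.array([0.45, 0.52, 0.60, 0.66, 0.70, 0.58, 0.50, 0.48])  # S/S0, made up
h_ratio = 0.25 + 0.50 * s_ratio                                       # synthetic H/H0

b, a = np.polyfit(s_ratio, h_ratio, 1)     # slope and intercept by least squares
print(a, b)                                # recovers a = 0.25, b = 0.50

h0 = 35.0                                  # MJ/m^2/day, illustrative value
print(h0 * (a + b * 0.55))                 # predicted H at S/S0 = 0.55
```

The "appropriate regional parameter" of the abstract corresponds to fitting (a, b) once for the region and reusing them at stations without radiation instruments.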

  15. The Psychology of Cost Estimating

    Science.gov (United States)

    Price, Andy

    2016-01-01

    Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even as failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases to our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.

  16. Estimating emissions from railway traffic

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, M.W.; Sorenson, C.

    1997-07-01

    The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
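
The diesel-versus-electric comparison described boils down to: estimate the train's energy demand, then multiply by per-unit emission factors (at the engine for diesel, at the power plant for electricity). All factor values below are placeholders of mine, not the report's data.

```python
# Simple train emission estimate: energy demand * emission factor.
# All numbers are illustrative placeholders, not measured factors.
energy_kwh_per_train_km = 12.0          # traction energy at the wheel, per train-km
distance_km = 250.0

diesel_eff = 0.35                       # engine efficiency, fuel energy -> traction
diesel_co2_g_per_kwh_fuel = 265.0       # CO2 per kWh of diesel fuel energy
grid_co2_g_per_kwh = 400.0              # CO2 per kWh of generated electricity
grid_losses = 0.10                      # transmission/distribution losses

energy = energy_kwh_per_train_km * distance_km
co2_diesel_kg = energy / diesel_eff * diesel_co2_g_per_kwh_fuel / 1000.0
co2_electric_kg = energy / (1.0 - grid_losses) * grid_co2_g_per_kwh / 1000.0
print(co2_diesel_kg, co2_electric_kg)   # kg CO2 for the trip, per traction mode
```

The electric figure depends entirely on the national generation mix, which is why the report stresses the effect of differences in electrical generation sources.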

  17. Efficient, Differentially Private Point Estimators

    OpenAIRE

    Smith, Adam

    2008-01-01

    Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
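
    For intuition, the simplest differentially private point estimator is a clipped mean with Laplace noise calibrated to the sensitivity. This sketch is a generic illustration of the privacy mechanism, not the estimator constructed in the paper; the clipping bounds and data are made up.

    ```python
    # Illustrative sketch: an epsilon-differentially private mean via the
    # classic Laplace mechanism (not the paper's construction).
    import math
    import random

    def laplace_noise(scale, rng):
        """Draw one sample from a zero-mean Laplace distribution."""
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def private_mean(data, epsilon, lo, hi, rng):
        """Epsilon-DP mean of values clipped to [lo, hi]."""
        clipped = [min(max(x, lo), hi) for x in data]
        # Changing one record moves the clipped mean by at most (hi - lo) / n.
        sensitivity = (hi - lo) / len(clipped)
        return sum(clipped) / len(clipped) + laplace_noise(sensitivity / epsilon, rng)

    rng = random.Random(0)
    data = [0.2, 0.4, 0.6, 0.8] * 25          # 100 illustrative records, mean 0.5
    estimate = private_mean(data, epsilon=10.0, lo=0.0, hi=1.0, rng=rng)
    ```

    As epsilon grows or the sample size increases, the noise scale shrinks and the private estimate converges to the clipped sample mean, which is the asymptotic behavior the abstract emphasizes for its (more sophisticated) estimator.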

  18. Computer-Aided Parts Estimation

    OpenAIRE

    Cunningham, Adam; Smart, Robert

    1993-01-01

    In 1991, Ford Motor Company began deployment of CAPE (computer-aided parts estimating system), a highly advanced knowledge-based system designed to generate, evaluate, and cost automotive part manufacturing plans. CAPE is engineered on an innovative, extensible, declarative process-planning and estimating knowledge representation language, which underpins the CAPE kernel architecture. Many manufacturing processes have been modeled to date, but eventually every significant process in motor veh...

  19. Guideline to Estimate Decommissioning Costs

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)]

    2016-10-15

    The primary objective of this work is to provide guidelines for estimating decommissioning costs, and to give stakeholders plausible information for understanding decommissioning activities in a reasonable manner, which will eventually contribute to public acceptance of the nuclear power industry. Although several decommissioning cost estimates have been made for a few commercial nuclear power plants, the different technical, site-specific and economic assumptions used make it difficult to interpret those cost estimates and compare them with those for a relevant plant. Trustworthy cost estimates are crucial to planning a safe and economic decommissioning project. The typical approach is to break down the decommissioning project into a series of discrete and measurable work activities. Although plant-specific differences in economic and technical assumptions make it difficult for a licensee to estimate decommissioning costs reliably, cost estimation is the most crucial process, since it encompasses the full spectrum of activities from planning to the final evaluation of whether a decommissioning project has proceeded successfully from both safety and economic standpoints. Hence, tenacious efforts are needed to perform a decommissioning project successfully.

  20. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
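
    Two of the estimator families surveyed above can be written down in a few lines for the one-dimensional case: a Gaussian kernel estimator and a k-nearest-neighbor estimator. The sample values and tuning parameters below are illustrative, not taken from the paper's simulations.

    ```python
    # Minimal 1-D forms of two density-estimation methods compared in such studies.
    import math

    def kernel_density(x, sample, bandwidth):
        """Gaussian kernel density estimate at x."""
        norm = len(sample) * bandwidth * math.sqrt(2.0 * math.pi)
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in sample) / norm

    def knn_density(x, sample, k):
        """k-nearest-neighbor estimate: k / (2 n R_k), with R_k the
        distance from x to its k-th nearest sample point."""
        r_k = sorted(abs(x - xi) for xi in sample)[k - 1]
        return k / (2.0 * len(sample) * r_k)

    sample = [-0.2, -0.1, 0.0, 0.05, 0.1, 0.2]   # illustrative data
    est_kernel = kernel_density(0.0, sample, bandwidth=0.2)
    est_knn = knn_density(0.0, sample, k=3)
    ```

    The contrast visible even in this toy form (the kernel method smooths by a fixed bandwidth, the nearest-neighbor method adapts its window to local sample density) is one axis along which such comparisons of small-sample performance and parameter sensitivity are made.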

  1. Weldon Spring historical dose estimate

    International Nuclear Information System (INIS)

    Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.

    1986-07-01

    This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr

  3. An improved estimation and focusing scheme for vector velocity estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1999-01-01

    to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...

  4. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...

  5. Estimating formwork striking time for concrete mixes

    African Journals Online (AJOL)

    eobe

    In this study, we estimated the time for strength development in concrete cured up to 56 days. .... regression analysis using MS Excel 2016 software was performed on the .....

  6. Moving Horizon Estimation and Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp

    successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....

  7. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems

  8. Estimation of effective wind speed

    Science.gov (United States)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed forward. Unfortunately, no accurate measurement of the effective wind speed is available online from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
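
    The final step of the abstract, inverting a static aerodynamic model given an estimated torque and rotor speed, can be sketched numerically. This is only an illustration of that inversion: the observer itself is not reproduced, the power-coefficient curve is a toy Gaussian bump, and all constants (rotor radius, air density, Cp peak) are invented.

    ```python
    # Illustrative inversion of a static aerodynamic model (not the paper's model).
    import math

    def cp_model(lam):
        """Toy power-coefficient curve peaking near tip-speed ratio 8 (made up)."""
        return max(0.0, 0.48 * math.exp(-((lam - 8.0) ** 2) / 20.0))

    def aero_torque(v, omega, rho=1.225, radius=40.0):
        """Static aerodynamic torque: Q = 0.5 * rho * pi * R^2 * Cp(lambda) * v^3 / omega."""
        area = math.pi * radius ** 2
        lam = omega * radius / v                 # tip-speed ratio
        return 0.5 * rho * area * cp_model(lam) * v ** 3 / omega

    def estimate_wind_speed(q_est, omega, lo=3.0, hi=25.0):
        """Bisect for the wind speed whose modelled torque matches the estimate
        (the toy torque curve is monotone in v over this operating range)."""
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if aero_torque(mid, omega) < q_est:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    q_true = aero_torque(10.0, 2.0)              # torque at v = 10 m/s, omega = 2 rad/s
    v_hat = estimate_wind_speed(q_true, 2.0)     # recovered effective wind speed
    ```

    In practice the inversion would take the observer's torque estimate and the measured pitch angle as inputs; here pitch is omitted to keep the sketch one-dimensional.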

  9. Estimation and valuation in accounting

    Directory of Open Access Journals (Sweden)

    Cicilia Ionescu

    2014-03-01

    The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a process used continually to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear, as they are recorded in contracts with third parties, in supporting documents, etc. However, the uncertainties under which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items in the financial statements must be determined by using estimates.

  10. Integral Criticality Estimators in MCATK

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory

    2016-06-14

    The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.

  11. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

    The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co

  12. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    . In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...

  13. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.

  14. Estimation of strong ground motion

    International Nuclear Information System (INIS)

    Watabe, Makoto

    1993-01-01

    A fault model has been developed to estimate strong ground motion in consideration of the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model. The first one is a theoretical approach, while the second is a semi-empirical approach. Though the latter is more practical than the former for application to the estimation of input motions, it needs at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event

  15. Multicollinearity and maximum entropy leuven estimator

    OpenAIRE

    Sudhanshu Mishra

    2004-01-01

    Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that MMEL estimator performs significantly better than OLS as well as MEL estimators.

  16. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the level of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.

  17. Collider Scaling and Cost Estimation

    International Nuclear Information System (INIS)

    Palmer, R.B.

    1986-01-01

    This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs

  18. Helicopter Toy and Lift Estimation

    Science.gov (United States)

    Shakerin, Said

    2013-01-01

    A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
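
    The classroom exercise described above reduces to evaluating the lift formula and comparing it with the toy's weight. This sketch uses the standard formula L = ½ρv²AC_L; all numbers (blade speed, blade area, lift coefficient, toy mass) are invented for illustration, not measurements from the article.

    ```python
    # Back-of-envelope lift check in the spirit of the article.
    # All numbers are illustrative assumptions, not measured values.

    def lift(rho, v, area, c_l):
        """Classic lift formula: L = 0.5 * rho * v^2 * A * C_L."""
        return 0.5 * rho * v ** 2 * area * c_l

    rho = 1.2               # air density, kg/m^3
    v = 15.0                # assumed effective blade speed, m/s
    area = 0.004            # assumed total blade area, m^2
    c_l = 1.0               # assumed lift coefficient
    weight_n = 0.05 * 9.81  # weight of an assumed 50 g toy, in newtons

    generates_enough_lift = lift(rho, v, area, c_l) > weight_n
    ```

    With these assumed numbers the estimated lift (0.54 N) exceeds the toy's weight (about 0.49 N), which is the kind of consistency check the students are asked to verify.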

  19. Estimation of potential uranium resources

    International Nuclear Information System (INIS)

    Curry, D.L.

    1977-09-01

    Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates

  20. An Improved Cluster Richness Estimator

    Energy Technology Data Exchange (ETDEWEB)

    Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_(ln L_X) = (0.86 ± 0.02)² to σ²_(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.

  1. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
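
    Crude Monte Carlo estimation of a failure probability can be sketched with a toy limit state. The resistance and load distributions below are invented illustrative choices, not the bridge data from the paper; the point is only the sampling methodology.

    ```python
    # Crude Monte Carlo estimate of P(failure) for a toy limit state g = R - S.
    import random

    def failure_probability(n_samples, seed=1):
        """Estimate P(resistance < load) with resistance ~ N(5.0, 0.5) and
        load ~ N(3.0, 0.7); means and standard deviations are illustrative."""
        rng = random.Random(seed)
        failures = sum(
            1 for _ in range(n_samples)
            if rng.gauss(5.0, 0.5) < rng.gauss(3.0, 0.7)
        )
        return failures / n_samples

    p_f = failure_probability(20_000)   # expected to land near the true ~0.01
    ```

    Re-running the simulation while sweeping a deterioration parameter (e.g. letting the resistance mean decrease with time) would trace reliability as a function of time, which is the flavor of "reliability distribution" the demonstration concerns; that sweep is left out of this minimal sketch.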

  2. Estimation of Motion Vector Fields

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1993-01-01

    This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....

  3. Multispacecraft current estimates at swarm

    DEFF Research Database (Denmark)

    Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.

    2015-01-01

    During the first several months of the three-spacecraft Swarm mission, all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...

  4. Estimating Swedish biomass energy supply

    International Nuclear Information System (INIS)

    Johansson, J.; Lundqvist, U.

    1999-01-01

    Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected not to be needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase the competitiveness. Economic instruments, including carbon taxes and subsidies, and allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy

  5. Estimates of wildland fire emissions

    Science.gov (United States)

    Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao

    2013-01-01

    Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding the environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...

  6. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.

  7. Fuel Estimation Using Dynamic Response

    National Research Council Canada - National Science Library

    Hines, Michael S

    2007-01-01

    ...'s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...

  8. Empirical estimates of the NAIRU

    DEFF Research Database (Denmark)

    Madsen, Jakob Brøchner

    2005-01-01

    equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact and suggests a new method which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...

  9. Online Wavelet Complementary velocity Estimator.

    Science.gov (United States)

    Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin

    2018-02-01

    In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines these two velocity resolutions, acquired from numerical differentiation and integration of the position and acceleration sensors, by considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. With this method the numerical velocity is estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. This method allows us to build velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
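
    The underlying idea of complementary velocity estimation can be illustrated with a drastically simplified first-order fusion (not the paper's wavelet filter banks): trust differentiated position at low frequency and integrated acceleration at high frequency. The function, its blend constant, and the test signal are all illustrative assumptions.

    ```python
    # Simplified first-order complementary fusion of position and acceleration.
    # This stands in for the paper's wavelet filter banks, which it does not reproduce.

    def complementary_velocity(pos, acc, dt, alpha=0.9):
        """Fuse differentiated position (reliable at low frequency) with
        integrated acceleration (reliable at high frequency).
        alpha close to 1 weights the integrated-acceleration path."""
        v_est = [(pos[1] - pos[0]) / dt]           # bootstrap from first difference
        for k in range(1, len(pos)):
            v_diff = (pos[k] - pos[k - 1]) / dt    # numerical differentiation
            v_int = v_est[-1] + acc[k] * dt        # numerical integration
            v_est.append(alpha * v_int + (1.0 - alpha) * v_diff)
        return v_est

    # Constant 1 m/s motion sampled at 100 Hz with an ideal accelerometer:
    dt = 0.01
    pos = [k * dt for k in range(100)]
    acc = [0.0] * 100
    vel = complementary_velocity(pos, acc, dt)
    ```

    The blend keeps the differentiator's noise mostly out of the estimate while the small (1 - alpha) correction from position prevents the integrator from drifting, which is the trade-off the abstract describes in its wavelet formulation.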

  10. Load Estimation from Modal Parameters

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández

    2007-01-01

    In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...

  11. Gini estimation under infinite variance

    NARCIS (Netherlands)

    A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)

    2018-01-01

    We study the problems related to the estimation of the Gini index in presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient

  12. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  13. Correlation Dimension Estimation for Classification

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2006-01-01

    Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics

  14. Molecular pathology and age estimation.

    Science.gov (United States)

    Meissner, Christoph; Ritz-Timme, Stefanie

    2010-12-15

    Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed on the DNA level as well as on the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid in forensic medicine to estimate the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977 bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation end products have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies and highest accuracy of results. The three other mentioned alterations often lack standardized procedures, published data are sparse and often have the character of pilot studies. Nevertheless it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine the accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  15. A novel walking speed estimation scheme and its application to treadmill control for gait rehabilitation.

    Science.gov (United States)

    Yoon, Jungwon; Park, Hyung-Soon; Damiano, Diane Louise

    2012-08-28

    Virtual reality (VR) technology along with treadmill training (TT) can effectively provide goal-oriented practice and promote improved motor learning in patients with neurological disorders. Moreover, the VR + TT scheme may enhance cognitive engagement for more effective gait rehabilitation and greater transfer to over ground walking. For this purpose, we developed an individualized treadmill controller with a novel speed estimation scheme using swing foot velocity, which can enable user-driven treadmill walking (UDW) to more closely simulate over ground walking (OGW) during treadmill training. OGW involves a cyclic acceleration-deceleration profile of pelvic velocity that contrasts with typical treadmill-driven walking (TDW), which constrains a person to walk at a preset constant speed. In this study, we investigated the effects of the proposed speed adaptation controller by analyzing the gait kinematics of UDW and TDW, which were compared to those of OGW at three pre-determined velocities. Ten healthy subjects were asked to walk in each mode (TDW, UDW, and OGW) at three pre-determined speeds (0.5 m/s, 1.0 m/s, and 1.5 m/s) with real time feedback provided through visual displays. Temporal-spatial gait data and 3D pelvic kinematics were analyzed and comparisons were made between UDW on a treadmill, TDW, and OGW. The observed step length, cadence, and walk ratio defined as the ratio of stride length to cadence were not significantly different between UDW and TDW. Additionally, the average magnitude of pelvic acceleration peak values along the anterior-posterior direction for each step and the associated standard deviations (variability) were not significantly different between the two modalities. The differences between OGW and UDW and TDW were mainly in swing time and cadence, as has been reported previously. Also, step lengths between OGW and TDW were different for 0.5 m/s and 1.5 m/s gait velocities, and walk ratio between OGW and UDW was

  16. A novel walking speed estimation scheme and its application to treadmill control for gait rehabilitation

    Directory of Open Access Journals (Sweden)

    Yoon Jungwon

    2012-08-01

    Full Text Available Abstract Background Virtual reality (VR) technology along with treadmill training (TT) can effectively provide goal-oriented practice and promote improved motor learning in patients with neurological disorders. Moreover, the VR + TT scheme may enhance cognitive engagement for more effective gait rehabilitation and greater transfer to over ground walking. For this purpose, we developed an individualized treadmill controller with a novel speed estimation scheme using swing foot velocity, which can enable user-driven treadmill walking (UDW) to more closely simulate over ground walking (OGW) during treadmill training. OGW involves a cyclic acceleration-deceleration profile of pelvic velocity that contrasts with typical treadmill-driven walking (TDW), which constrains a person to walk at a preset constant speed. In this study, we investigated the effects of the proposed speed adaptation controller by analyzing the gait kinematics of UDW and TDW, which were compared to those of OGW at three pre-determined velocities. Methods Ten healthy subjects were asked to walk in each mode (TDW, UDW, and OGW) at three pre-determined speeds (0.5 m/s, 1.0 m/s, and 1.5 m/s) with real time feedback provided through visual displays. Temporal-spatial gait data and 3D pelvic kinematics were analyzed and comparisons were made between UDW on a treadmill, TDW, and OGW. Results The observed step length, cadence, and walk ratio defined as the ratio of stride length to cadence were not significantly different between UDW and TDW. Additionally, the average magnitude of pelvic acceleration peak values along the anterior-posterior direction for each step and the associated standard deviations (variability) were not significantly different between the two modalities. The differences between OGW and UDW and TDW were mainly in swing time and cadence, as has been reported previously. Also, step lengths between OGW and TDW were different for 0.5 m/s and 1.5 m/s gait velocities
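The temporal-spatial metrics compared in the two records above (cadence, walk ratio, speed) can be computed directly from per-step data; a minimal sketch with hypothetical sample values, not the study's measurements:

```python
import numpy as np

def gait_metrics(step_times, step_lengths):
    """Temporal-spatial gait metrics from per-step data.
    step_times: step durations in s; step_lengths: step lengths in m.
    Walk ratio follows the papers' definition: stride length / cadence."""
    step_times = np.asarray(step_times, float)
    step_lengths = np.asarray(step_lengths, float)
    cadence = 60.0 / step_times.mean()             # steps per minute
    stride_length = 2.0 * step_lengths.mean()      # a stride is two steps
    walk_ratio = stride_length / cadence
    speed = step_lengths.sum() / step_times.sum()  # m/s
    return cadence, stride_length, walk_ratio, speed

cadence, stride, ratio, speed = gait_metrics([0.55] * 10, [0.65] * 10)
```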

  17. 23 CFR 635.115 - Agreement estimate.

    Science.gov (United States)

    2010-04-01

    ... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...

  18. On semiautomatic estimation of surface area

    DEFF Research Database (Denmark)

    Dvorak, J.; Jensen, Eva B. Vedel

    2013-01-01

    ... If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area. For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator and the surfactor. For ellipsoidal particles, it is shown that the flower estimator is equal to the pivotal estimator based on support function measurements along four perpendicular rays. This result makes the pivotal estimator a powerful approximation to the flower estimator. In a simulation study of prolate...

  19. Estimating sediment discharge: Appendix D

    Science.gov (United States)

    Gray, John R.; Simões, Francisco J. M.

    2008-01-01

    Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed- material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment- discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment- discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment dis- charges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with

  20. Estimating Foreign Exchange Reserve Adequacy

    Directory of Open Access Journals (Sweden)

    Abdul Hakim

    2013-04-01

    Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for import, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR estimates, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
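A minimal sketch of the volatility/VaR machinery the abstract describes, assuming a GJR(1,1) recursion with fixed (not estimated) parameters and a normal quantile for the VaR; all numbers are illustrative, not the paper's estimates:

```python
import math

def gjr_volatility(returns, omega, alpha, gamma, beta):
    """Conditional variance path of a GJR(1,1) model:
        sigma2_t = omega + (alpha + gamma*1[eps_{t-1} < 0]) * eps_{t-1}^2
                   + beta * sigma2_{t-1}
    Started at the unconditional variance; the paper estimates the
    parameters from data, here they are simply assumed."""
    var = [omega / (1.0 - alpha - 0.5 * gamma - beta)]
    for eps in returns:
        leverage = gamma if eps < 0 else 0.0   # asymmetric (leverage) term
        var.append(omega + (alpha + leverage) * eps ** 2 + beta * var[-1])
    return var

def normal_var(mu, sigma2, z=1.645):
    """One-sided 95% Value-at-Risk under a normal assumption."""
    return mu - z * math.sqrt(sigma2)

path = gjr_volatility([0.01, -0.02, 0.005],
                      omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85)
risk = normal_var(0.0, path[-1])
```

The `gamma` term is what produces the asymmetric response to negative past shocks that the paper reports.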

  1. Organ volume estimation using SPECT

    CERN Document Server

    Zaidi, H

    1996-01-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
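The fixed-threshold volume estimation step can be sketched as follows; the threshold fraction and voxel size are illustrative assumptions, and the synthetic array stands in for a reconstructed (attenuation- and scatter-corrected) SPECT volume:

```python
import numpy as np

def threshold_volume(image, voxel_volume_ml, threshold_fraction=0.4):
    """Sum the voxels above a fixed fraction of the maximum count and
    multiply by the voxel volume.  The 0.4 fraction is illustrative;
    the study compares fixed and gray-level-histogram thresholding."""
    mask = image >= threshold_fraction * image.max()
    return mask.sum() * voxel_volume_ml

# synthetic stand-in for a reconstructed SPECT volume: a bright
# 10 x 10 x 10 "organ" on a zero background
img = np.zeros((32, 32, 32))
img[10:20, 10:20, 10:20] = 100.0
vol_ml = threshold_volume(img, voxel_volume_ml=0.1)
```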

  2. Comments on mutagenesis risk estimation

    International Nuclear Information System (INIS)

    Russell, W.L.

    1976-01-01

    Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements

  3. Bayesian estimation in homodyne interferometry

    International Nuclear Information System (INIS)

    Olivares, Stefano; Paris, Matteo G A

    2009-01-01

    We address phase-shift estimation by means of squeezed vacuum probe and homodyne detection. We analyse Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound to the variance, and discuss convergence looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound to precision imposed by the quantum Cramer-Rao theorem. The performances of our two-step methods are investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.

  4. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  5. Cost Estimates and Investment Decisions

    International Nuclear Information System (INIS)

    Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter

    2001-08-01

    When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
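The valuation error described above is easy to reproduce numerically: for any right-skewed cost distribution the median understates the mean, so reporting a "50/50" CAPEX biases net present value upward. A sketch with a hypothetical lognormal cost distribution (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical right-skewed CAPEX distribution (lognormal, $ millions)
capex = rng.lognormal(mean=np.log(100.0), sigma=0.6, size=100_000)

median_capex = np.median(capex)   # the reported "50/50" estimate
expected_capex = capex.mean()     # what the NPV cashflow actually needs

# with right skew the median understates the expected cost, so an NPV
# computed from the median CAPEX overstates project value
bias = expected_capex - median_capex
```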

  6. Location Estimation using Delayed Measurements

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus

    1998-01-01

    When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported...
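One common way to handle a delayed measurement, refiltering from a buffered history once the late datum arrives, can be sketched in one dimension. This illustrates the general idea only; it is not the paper's extended-Kalman-filter implementation, and the noise parameters are assumed:

```python
def kf_step(x, p, z, q=0.01, r=0.5):
    """One predict+update step of a scalar random-walk Kalman filter."""
    x_pred, p_pred = x, p + q          # predict
    if z is None:                      # no measurement this step
        return x_pred, p_pred
    k = p_pred / (p_pred + r)          # Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

def filter_with_delay(measurements, late):
    """measurements[t]: measurement arriving at step t (or None).
    late: maps arrival step -> (origin step, value) of a delayed datum;
    on arrival the filter rewinds to the origin step and refilters."""
    measurements = list(measurements)
    states = [(0.0, 1.0)]              # states[k]: (x, p) after k steps
    for t in range(len(measurements)):
        if t in late:
            t0, z0 = late[t]
            measurements[t0] = z0      # slot the late datum at its origin
            del states[t0 + 1:]        # discard the now-stale history
            for s in range(t0, t):     # refilter forward to the present
                states.append(kf_step(*states[s], measurements[s]))
        states.append(kf_step(*states[t], measurements[t]))
    return states[-1]

# a measurement from step 1 arrives two steps late, at step 3
x_hat, p_hat = filter_with_delay([1.0, None, 1.2, 0.9], {3: (1, 1.1)})
```

Buffering and refiltering is the conceptually simplest option; alternatives trade memory for computation.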

  7. Prior information in structure estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka

    2003-01-01

    Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf

  8. Radiation in space: risk estimates

    International Nuclear Information System (INIS)

    Fry, R.J.M.

    2002-01-01

    The complexity of radiation environments in space makes estimation of risks more difficult than for the protection of terrestrial population. In deep space the duration of the mission, position of the solar cycle, number and size of solar particle events (SPE) and the spacecraft shielding are the major determinants of risk. In low-earth orbit missions there are the added factors of altitude and orbital inclination. Different radiation qualities such as protons and heavy ions and secondary radiations inside the spacecraft such as neutrons of various energies, have to be considered. Radiation dose rates in space are low except for short periods during very large SPEs. Risk estimation for space activities is based on the human experience of exposure to gamma rays and to a lesser extent X rays. The doses of protons, heavy ions and neutrons are adjusted to take into account the relative biological effectiveness (RBE) of the different radiation types and thus derive equivalent doses. RBE values and factors to adjust for the effect of dose rate have to be obtained from experimental data. The influence of age and gender on the cancer risk is estimated from the data from atomic bomb survivors. Because of the large number of variables the uncertainties in the probability of the effects are large. Information needed to improve the risk estimates includes: (1) risk of cancer induction by protons, heavy ions and neutrons; (2) influence of dose rate and protraction, particularly on potential tissue effects such as reduced fertility and cataracts; and (3) possible effects of heavy ions on the central nervous system. Risk cannot be eliminated and thus there must be a consensus on what level of risk is acceptable. (author)

  9. Properties of estimated characteristic roots

    OpenAIRE

    Bent Nielsen; Heino Bohn Nielsen

    2008-01-01

    Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...
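The noisiness of estimated characteristic roots is easy to observe by simulation; a sketch (with assumed, illustrative parameters) that fits an AR(2) by least squares and inspects the spread of the estimated root moduli around the true modulus of about 0.85:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimated_root_moduli(n, phi1=1.2, phi2=-0.72, reps=200):
    """Simulate AR(2) series, fit by least squares, and return the
    moduli of the estimated characteristic roots.  The true roots of
    z^2 - phi1*z - phi2 are complex with modulus sqrt(0.72) ~ 0.85."""
    moduli = []
    for _ in range(reps):
        x = np.zeros(n)
        for t in range(2, n):
            x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.standard_normal()
        X = np.column_stack([x[1:-1], x[:-2]])        # lags 1 and 2
        coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
        moduli.extend(np.abs(np.roots([1.0, -coef[0], -coef[1]])))
    return np.array(moduli)

moduli = estimated_root_moduli(200)
spread = moduli.std()   # the "noise" around the true modulus
```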

  10. Recent estimates of capital flight

    OpenAIRE

    Claessens, Stijn; Naude, David

    1993-01-01

    Researchers and policymakers have in recent years paid considerable attention to the phenomenon of capital flight. Researchers have focused on four questions: What concept should be used to measure capital flight? What figure for capital flight will emerge, using this measure? Can the occurrence and magnitude of capital flight be explained by certain (economic) variables? What policy changes can be useful to reverse capital flight? The authors focus strictly on presenting estimates of capital...

  11. Effort Estimation in BPMS Migration

    OpenAIRE

    Drews, Christopher; Lantow, Birger

    2018-01-01

    Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...

  12. Reactor core performance estimating device

    International Nuclear Information System (INIS)

    Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.

    1995-01-01

    The present invention can autonomously simplify a neural net model, thereby making it possible to conveniently estimate various quantities that represent reactor core performance by a simple calculation in a short period of time. Namely, a reactor core performance estimation device comprises a nerve circuit net which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals for input nerve cells, and outputs estimated values of each quantity representing reactor core performance as output signals of output nerve cells. In this case, the nerve circuit net (1) has a structure of an extended multi-layered model having direct coupling from an upstream layer to each of the downstream layers, (2) has a forgetting constant q in the correction equation for a joined load value ω using an inverse error propagation method, (3) learns various quantities representing reactor core performance determined using physical models as teacher signals, (4) sets the joined load value ω to 0 when it decreases below a predetermined value during the learning described above, and (5) eliminates elements of the nerve circuit net whose joined load values have all decreased to 0. As a result, the neural net model comprises an autonomously simplifying means. (I.S.)

  13. Contact Estimation in Robot Interaction

    Directory of Open Access Journals (Sweden)

    Filippo D'Ippolito

    2014-07-01

    Full Text Available In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.

  14. Statistical estimation of process holdup

    International Nuclear Information System (INIS)

    Harris, S.P.

    1988-01-01

    Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report on Estimation Methods for Process Holdup of Special Nuclear Materials were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. Also, they had taken steps to improve the quality of data through controlled, larger scale experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs

  15. Abundance estimation and conservation biology

    Science.gov (United States)

    Nichols, J.D.; MacKenzie, D.I.

    2004-01-01

    Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention

  16. Abundance estimation and Conservation Biology

    Directory of Open Access Journals (Sweden)

    Nichols, J. D.

    2004-06-01

    Full Text Available Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al., 1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that

  17. Estimating the Costs of Preventive Interventions

    Science.gov (United States)

    Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin

    2007-01-01

    The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…

  18. Thermodynamics and life span estimation

    International Nuclear Information System (INIS)

    Kuddusi, Lütfullah

    2015-01-01

    In this study, the life span of people living in seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. The people living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to these food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. The two quantities, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in the seven regions of Turkey with their different food habits. In order to estimate the life span, statistics from the Turkish Statistical Institute regarding the food habits of the people living in the seven regions of Turkey are used. The life spans of people living in the Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding life span across the seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human affect his life span
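The estimation principle reduces to a ratio of two entropy quantities: a statistically determined lifetime entropy generation per unit mass divided by the diet-dependent entropy generation rate. A purely illustrative sketch with assumed numbers, not the paper's data:

```python
# Hypothetical figures chosen only to illustrate the arithmetic; the
# paper derives both quantities from thermodynamic analysis and
# Turkish Statistical Institute food-consumption data.
sigma_lifetime = 10_025.0  # lifetime entropy generation, kJ/(kg K)
sigma_rate = 131.0         # entropy generation rate,     kJ/(kg K year)

life_span_years = sigma_lifetime / sigma_rate
```

Regions with food habits that imply a higher `sigma_rate` therefore get a shorter estimated life span, which is the ordering mechanism behind the inequality reported above.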

  19. The estimation of genetic divergence

    Science.gov (United States)

    Holmquist, R.; Conroy, T.

    1981-01-01

    Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.

  20. Nonparametric e-Mixture Estimation.

    Science.gov (United States)

    Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2016-12-01

    This study considers the common situation in data analysis when there are few observations of the distribution of interest (the target distribution), while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions, in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.

  1. Dose estimation by biological methods

    International Nuclear Information System (INIS)

    Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.

    1997-01-01

Human beings are exposed to artificial radiation sources mainly in two ways: the first concerns occupationally exposed personnel (POE), and the second concerns persons requiring radiological treatment. A third, less common, way is through accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of the research process to validate the fluorescence in situ hybridization (FISH) technique, which allows aberrations in the chromosomes to be analysed. (Author)

  2. Stochastic estimation of electricity consumption

    International Nuclear Information System (INIS)

    Kapetanovic, I.; Konjic, T.; Zahirovic, Z.

    1999-01-01

Electricity consumption forecasting is part of the stable functioning of the power system. It is very important for the rationality and efficiency of the control process and for development planning in all aspects of society. Forecasting on a scientific basis is one way to approach these problems. Among the various model classes used for forecasting, stochastic quantitative models occupy a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. The main aim of this paper is therefore to present the stochastic forecasting approach using short time series. (author)
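As an illustration of the Kalman filter side of such stochastic estimation, here is a deliberately simplified scalar AR(1) state-space model; all parameter values are invented for the example and this is not the paper's consumption model:

```python
import numpy as np

# Toy scalar state-space model: the consumption deviation x follows an AR(1),
# and is observed with measurement noise. All parameter values are invented.
rng = np.random.default_rng(0)
phi, q, r = 0.9, 0.5, 1.0            # AR coefficient, process var, obs var
T = 200
x = np.zeros(T)
for t in range(1, T):                # simulate the "true" signal
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)   # noisy metered observations

xf, P = 0.0, 1.0                     # filtered mean and variance
forecasts = []
for t in range(T):
    xp, Pp = phi * xf, phi**2 * P + q     # predict: one-step-ahead forecast
    forecasts.append(xp)
    K = Pp / (Pp + r)                     # Kalman gain
    xf = xp + K * (y[t] - xp)             # update with observation y[t]
    P = (1 - K) * Pp

rmse_kalman = float(np.sqrt(np.mean((np.array(forecasts[1:]) - x[1:]) ** 2)))
rmse_naive = float(np.sqrt(np.mean((y[:-1] - x[1:]) ** 2)))  # persistence
```

The one-step-ahead prediction phi * xf plays the role of the consumption forecast; an ARIMA model fitted to a short series can be cast in exactly this kind of state-space form, which is how the two estimators are naturally treated together.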

  3. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

Detection of inclusions or obstacles inside a body from boundary measurements is an inverse problem with many practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work, which can be expressed through the available boundary data.

  4. Location Estimation of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kamil ŽIDEK

    2009-06-01

This contribution describes a fully parametric mathematical model (kinematics) for a mobile robot carriage. The model is designed universally for any three- or four-wheeled carriage of arbitrary dimensions, under the following conditions: the back wheels are the driving wheels, and the front wheels set the angle of turning. The position of the front wheel gives the actual position of the robot, which is described by coordinates x, y and by the angle α of the front wheel relative to a reference position. The main reason for implementing the model is indoor navigation, where an estimate of the robot position is needed, especially after the robot turns. A further use is outdoor navigation, in particular for refining GPS information.
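A minimal sketch of this kind of parametric carriage kinematics is the standard rear-drive, front-steer bicycle approximation. The wheelbase, speed and steering values below are invented for the example, and the paper's exact equations may differ:

```python
import math

def step(x, y, theta, v, alpha, L=0.3, dt=0.01):
    """One Euler step of a kinematic bicycle model (illustrative sketch).

    Rear-wheel drive at speed v, front-wheel steering angle alpha,
    wheelbase L; returns the new rear-axle pose (x, y, theta)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt
    return x, y, theta

# Drive straight, then with a constant steering angle: the heading changes
# only while alpha is non-zero, which is exactly why dead reckoning after a
# turn needs a model like this.
pose = (0.0, 0.0, 0.0)
for _ in range(100):                  # 1 s straight at 0.5 m/s
    pose = step(*pose, v=0.5, alpha=0.0)
for _ in range(100):                  # 1 s with steering angle 0.3 rad
    pose = step(*pose, v=0.5, alpha=0.3)
x, y, theta = pose
```

Integrating these increments from wheel odometry gives the position estimate used for indoor navigation, and the same pose can be fused with GPS fixes outdoors.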

  5. Estimation of the energy needs

    International Nuclear Information System (INIS)

    Ailleret

    1955-01-01

The present report draws up a balance of present energy consumption and of the consumption that can be estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydroelectric energy. Market growth stems essentially from the development of industrial activity and from new applications that depend on the cost and distribution of electric energy. In this respect, atomic energy offers good industrial prospects as a complement to present energy resources in order to meet the new needs. (M.B.) [fr

  6. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

Brincker, Rune; Asmussen, J. C.

1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  8. Applied parameter estimation for chemical engineers

    CERN Document Server

    Englezos, Peter

    2000-01-01

    Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
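The Gauss-Newton iteration that recurs throughout these chapters can be sketched briefly for an algebraic model. This is a minimal, illustrative implementation for a two-parameter saturation model y = theta0 * x / (theta1 + x); the data, starting point and absence of step-size control are assumptions made for the example, not the book's full treatment:

```python
import numpy as np

def f(theta, x):
    # Algebraic model y = theta0 * x / (theta1 + x)
    return theta[0] * x / (theta[1] + x)

def jacobian(theta, x):
    # Analytic sensitivities df/dtheta0 and df/dtheta1
    J0 = x / (theta[1] + x)
    J1 = -theta[0] * x / (theta[1] + x) ** 2
    return np.column_stack([J0, J1])

def gauss_newton(x, y, theta, iters=20):
    for _ in range(iters):
        r = y - f(theta, x)                       # residual vector
        J = jacobian(theta, x)
        # Solve the normal equations J^T J dtheta = J^T r
        dtheta = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta + dtheta
    return theta

x = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
theta_true = np.array([2.0, 1.5])
y = f(theta_true, x)                              # noise-free data for clarity
theta_hat = gauss_newton(x, y, np.array([1.0, 1.0]))
```

With noise-free data the iteration recovers the true parameters to machine precision; practical use adds step-size control, weighting and the constrained variants the book covers.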

  9. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
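The starting point of this abstract can be illustrated numerically: filtering zero-mean white noise with a polynomial graph filter H = h(L) yields a second-order stationary signal whose covariance is H H^T, i.e. a power spectrum h(lambda)^2 over the Laplacian eigenvalues. The 8-node cycle graph and the filter taps below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Cycle graph on N nodes and its combinatorial Laplacian
N = 8
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = 2 * np.eye(N) - A
H = 1.0 * np.eye(N) + 0.5 * L          # degree-1 polynomial graph filter

# Filter many realizations of zero-mean white noise
rng = np.random.default_rng(3)
X = H @ rng.standard_normal((N, 200000))
C_emp = X @ X.T / X.shape[1]           # empirical covariance of the output
C_true = H @ H.T                       # theoretical covariance

# Graph power spectrum: squared filter response over Laplacian eigenvalues
lam, U = np.linalg.eigh(L)
spectrum = (1.0 + 0.5 * lam) ** 2
```

The subsampling question the paper addresses is then: from observations on a small subset of the N nodes, reconstruct C_true (equivalently, the spectrum) without spectral priors.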

  10. Note on demographic estimates 1979.

    Science.gov (United States)

    1979-01-01

Based on UN projections, national projections, and South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region, India, 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978 and 1979 Hong Kong experienced the highest rate of growth, 6.2%, Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years life expectancy in Japan is highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.

  11. Practical global oceanic state estimation

    Science.gov (United States)

    Wunsch, Carl; Heimbach, Patrick

    2007-06-01

    The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS- 1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.

  12. LOD estimation from DORIS observations

    Science.gov (United States)

    Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs

    2016-04-01

The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to the other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching only a precision of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show that an accuracy of around 0.1 ms per day can be reached when the cross-track harmonics in the satellite orbit model are not adjusted. The paper presents long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, the LOD series from the individual DORIS satellite solutions are also analysed.

  13. CONSTRUCTING ACCOUNTING UNCERTAINITY ESTIMATES VARIABLE

    Directory of Open Access Journals (Sweden)

    Nino Serdarevic

    2012-10-01

This paper presents research results on the financial reporting quality of B&H firms, utilizing the empirical relation between accounting conservatism, generated through critical accounting policy choices, and management's ability to make estimates, as a measure of the predictive power of domestic private-sector accounting. The primary research is based on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and which influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and to latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergence with US GAAP, especially harmonization with FAS 130, Reporting comprehensive income (in revised IAS 1), and FAS 157, Fair value measurement. The CAPCBIH variable performed very poorly, indicating a considerable failure to recognize environment specifics. Furthermore, I underline the importance of the revised ISAE and the reinforced role of auditors in assessing the relevance of management estimates.

  14. The need to estimate risks

    International Nuclear Information System (INIS)

    Pochin, E.E.

    1980-01-01

In an increasing number of situations, it is becoming possible to obtain and compare numerical estimates of the biological risks involved in different alternative courses of action. In some cases these risks are similar in kind, as for example when the risk of inducing fatal cancer of the breast or stomach by x-ray screening of a population at risk is compared with the risk of such cancers proving fatal if not detected by a screening programme. In other cases in which it is important to attempt a comparison, the risks are dissimilar in type, as when the safety of occupations involving exposure to radiation or chemical carcinogens is compared with that of occupations in which the major risks are from lung disease or from accidental injury and death. Similar problems of assessing the relative severity of unlike effects occur in any attempt to compare the total biological harm associated with a given output of electricity derived from different primary fuel sources, with its contributions of both occupational and public harm. In none of these instances is the numerical frequency of harmful effects alone an adequate measure of total biological detriment, nor is such detriment the only factor which should influence decisions. Estimation of risk appears important, however, since otherwise public health decisions are likely to be made on more arbitrary grounds, and public opinion will continue to be affected predominantly by the type rather than also by the size of risk. (author)

  15. Variance function estimation for immunoassays

    International Nuclear Information System (INIS)

    Raab, G.M.; Thompson, R.; McKenzie, I.

    1980-01-01

A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviations of the sets of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test of fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
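A toy version of the underlying idea is to fit a mean-variance relationship from the within-set variability of many small replicate sets. The power-law form and all numbers below are invented for the illustration, and the paper's modified-likelihood fit is more refined than this simple log-log regression:

```python
import numpy as np

# Assume Var(y) = a * mean**b (a power-type variance function) and estimate
# b by regressing log within-set variance on log within-set mean across many
# small sets of replicates, as produced by repeated assay measurements.
rng = np.random.default_rng(1)
a_true, b_true = 0.01, 2.0
means = rng.uniform(10, 1000, size=200)            # 200 dose levels
sets = [rng.normal(m, np.sqrt(a_true * m**b_true), size=4) for m in means]

log_mean = np.array([np.log(np.mean(s)) for s in sets])
log_var = np.array([np.log(np.var(s, ddof=1)) for s in sets])
b_hat, log_a_hat = np.polyfit(log_mean, log_var, 1)  # slope = b, intercept = log a
```

The fitted function 1 / Var(y) then supplies the weights for the dose-response curve fit; the log of a sample variance from only 4 replicates is biased and noisy, which is exactly why a likelihood-based fit over all sets, as in the paper, is preferable to this sketch.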

  16. Information and crystal structure estimation

    International Nuclear Information System (INIS)

    Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.

    1984-01-01

The conceptual foundations of a general information-theoretic approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure of information as the criterion for inference are discussed, and it is shown that the minimum information (or maximum entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, while the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)

  17. China academics feel a sting scientists fear crackdown jeopardized research strides

    CERN Multimedia

    Sanger, David E

    1989-01-01

An international conference on HTS in China was a failure after Western speakers boycotted the event and Chinese speakers were forced to study speeches of the Chinese government leader instead of preparing papers (1 page).

  18. Stride-related rein tension patterns in walk and trot in the ridden horse

    NARCIS (Netherlands)

    Egenvall, Agneta; Roepstorff, Lars; Eisersiö, Marie; Rhodin, Marie; van Weeren, René

    2015-01-01

    BACKGROUND: The use of tack (equipment such as saddles and reins) and especially of bits because of rein tension resulting in pressure in the mouth is questioned because of welfare concerns. We hypothesised that rein tension patterns in walk and trot reflect general gait kinematics, but are also

  19. Oncolytic Herpes Simplex Viral Therapy: A Stride toward Selective Targeting of Cancer Cells.

    Science.gov (United States)

    Sanchala, Dhaval S; Bhatt, Lokesh K; Prabhavalkar, Kedar S

    2017-01-01

Oncolytic viral therapy, which makes use of replication-competent lytic viruses, has emerged as a promising modality to treat malignancies. It has shown meaningful outcomes in both solid tumors and hematologic malignancies. Advancements during the last decade, mainly in the genetic engineering of oncolytic viruses, have resulted in improved specificity and efficacy of oncolytic viruses in cancer therapeutics. Oncolytic viral therapy with herpes simplex virus-1 has been of particular interest owing to a range of benefits: (a) a large genome and the power to infiltrate the tumor, (b) ease of genetic manipulation, with the flexibility to insert multiple transgenes, (c) infection of the majority of malignant cell types with quick replication in the infected cells, and (d) the availability of anti-HSV agents to terminate HSV replication. This review provides an exhaustive list of oncolytic herpes simplex virus-1 vectors along with their genetic alterations. It also encompasses the major developments in oncolytic herpes simplex-1 viral therapy and outlines the limitations and drawbacks of oncolytic herpes simplex viral therapy.

  20. Elemental characterization of marijuana (cannabis sativa) as a stride in the isolation of its active ingredients

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Y A; Jaoji, A A [Centre for Energy Research and Training, Ahmadu Bello University P.M.B. 1014, Zaria (Nigeria); Olalekan, Y S [Department of Physics, University of Ilorin, (Nigeria)

    2010-05-28

Seed, stem and leaf samples of marijuana (Cannabis sativa), popularly called Indian hemp, available in northern Nigeria were analyzed for trace amounts of Mg, Al, Ca, Ti, Mn, Na, Br, La, Yb, Cr, Fe, Zn, and Ba using instrumental neutron activation analysis. Sample sizes of roughly 300 mg were irradiated for five minutes (short irradiation) and six hours (long irradiation), with decay times of 7 minutes, 10,000 minutes and 26,000 minutes for short-, medium- and long-lived nuclides respectively. Counting times of ten minutes (short-lived nuclides), 1,800 minutes (medium-lived nuclides) and 36,000 minutes (long-lived nuclides) yielded detection limits between 0.05 and 0.09 μg/g. For a comparative study, refined tobacco produced by a tobacco company operating in northern Nigeria was characterized together with the marijuana, which is usually smoked raw with leaves, stem and seed packed together. The results obtained show that both the refined tobacco and the raw marijuana have high concentrations of Ca, Mg, Al and Mn and low values of Na, Br and La. However, marijuana was found to have heavy elements in abundance compared to the refined tobacco, with Zn = 20.5 μg/g and Cr = 14.3 μg/g recording the highest values among the heavy elements detected. This is a sharp difference between the two, since the values of heavy elements obtained for the refined tobacco are even below the detection limits. Quality control and quality assurance were tested using certified reference material obtained from NIST (Tomato Leaves).

  1. Moving Along: In biomechanics, rehabilitation engineering, and movement analysis, Italian researchers are making great strides.

    Science.gov (United States)

    Gugliellmelli, Eugenio; Micera, Silvestro; Migliavacca, Francesco; Pedotti, Antonio

    2015-01-01

    In Italy, biomechanics research and the analysis of human and animal movement have had a very long history, beginning with the exceptional pioneering work of Leonardo da Vinci. In 1489, da Vinci began investigating human anatomy, including an examination of human tendons, muscles, and the skeletal system. He continued this line of inquiry later in life, identifying what he called "the four powers--movement, weight, force, and percussion"--and how he thought they worked in the human body. His approach, by the way, was very modern--analyzing nature through anatomy, developing models for interpretation, and transferring this knowledge to bio-inspired machines.

  2. Current strides in AAV-derived vectors and SIN channels further ...

    African Journals Online (AJOL)

    A.S. Odiba

    restored, hematopoietic stem cells has been used to terminate incurable blood ... gene and cell therapy approach are founded on either ex vivo gene incorporation into ..... generating integration-deficient lentiviral vectors (IDLV), which on.

  3. Elemental characterization of marijuana (cannabis sativa) as a stride in the isolation of its active ingredients

    International Nuclear Information System (INIS)

    Ahmed, Y.A.; Jaoji, A.A.; Olalekan, Y.S.

    2010-01-01

Seed, stem and leaf samples of marijuana (Cannabis sativa), popularly called Indian hemp, available in northern Nigeria were analyzed for trace amounts of Mg, Al, Ca, Ti, Mn, Na, Br, La, Yb, Cr, Fe, Zn, and Ba using instrumental neutron activation analysis. Sample sizes of roughly 300 mg were irradiated for five minutes (short irradiation) and six hours (long irradiation), with decay times of 7 minutes, 10,000 minutes and 26,000 minutes for short-, medium- and long-lived nuclides respectively. Counting times of ten minutes (short-lived nuclides), 1,800 minutes (medium-lived nuclides) and 36,000 minutes (long-lived nuclides) yielded detection limits between 0.05 and 0.09 μg/g. For a comparative study, refined tobacco produced by a tobacco company operating in northern Nigeria was characterized together with the marijuana, which is usually smoked raw with leaves, stem and seed packed together. The results obtained show that both the refined tobacco and the raw marijuana have high concentrations of Ca, Mg, Al and Mn and low values of Na, Br and La. However, marijuana was found to have heavy elements in abundance compared to the refined tobacco, with Zn = 20.5 μg/g and Cr = 14.3 μg/g recording the highest values among the heavy elements detected. This is a sharp difference between the two, since the values of heavy elements obtained for the refined tobacco are even below the detection limits. Quality control and quality assurance were tested using certified reference material obtained from NIST (Tomato Leaves).

  4. Identification of Biomolecular Building Blocks by Recognition Tunneling: Stride towards Nanopore Sequencing of Biomolecules

    Science.gov (United States)

    Sen, Suman

    DNA, RNA and Protein are three pivotal biomolecules in human and other organisms, playing decisive roles in functionality, appearance, diseases development and other physiological phenomena. Hence, sequencing of these biomolecules acquires the prime interest in the scientific community. Single molecular identification of their building blocks can be done by a technique called Recognition Tunneling (RT) based on Scanning Tunneling Microscope (STM). A single layer of specially designed recognition molecule is attached to the STM electrodes, which trap the targeted molecules (DNA nucleoside monophosphates, RNA nucleoside monophosphates or amino acids) inside the STM nanogap. Depending on their different binding interactions with the recognition molecules, the analyte molecules generate stochastic signal trains accommodating their "electronic fingerprints". Signal features are used to detect the molecules using a machine learning algorithm and different molecules can be identified with significantly high accuracy. This, in turn, paves the way for rapid, economical nanopore sequencing platform, overcoming the drawbacks of Next Generation Sequencing (NGS) techniques. To read DNA nucleotides with high accuracy in an STM tunnel junction a series of nitrogen-based heterocycles were designed and examined to check their capabilities to interact with naturally occurring DNA nucleotides by hydrogen bonding in the tunnel junction. These recognition molecules are Benzimidazole, Imidazole, Triazole and Pyrrole. Benzimidazole proved to be best among them showing DNA nucleotide classification accuracy close to 99%. Also, Imidazole reader can read an abasic monophosphate (AP), a product from depurination or depyrimidination that occurs 10,000 times per human cell per day. In another study, I have investigated a new universal reader, 1-(2-mercaptoethyl)pyrene (Pyrene reader) based on stacking interactions, which should be more specific to the canonical DNA nucleosides. 
In addition, the Pyrene reader showed higher DNA base-calling accuracy compared to the Imidazole reader, the workhorse of our previous projects. In my other projects, various amino acids and RNA nucleoside monophosphates were also classified with significantly high accuracy using RT. Twenty naturally occurring amino acids and various RNA nucleosides (four canonical and two modified) were successfully identified. Thus, we envision nanopore sequencing of biomolecules using Recognition Tunneling (RT), which should provide comprehensive improvements over current technologies in terms of time, chemical and instrumental cost, and capability of de novo sequencing.

  5. Myanmar exploration hitting stride on 1989-90 licensing round blocks

    International Nuclear Information System (INIS)

    Khin, J.A.; Johnston, D.

    1992-01-01

This paper reports that, following licensing efforts in 1989-90, Myanmar has been gearing up with activity both onshore and offshore. The industry gave a strong response to the first round of exploration licensing. The license awards in the first round carried fairly aggressive work commitments in terms of both dollars and timing: commitments on the first nine blocks ranged from $12 million to $70 million per block. Most companies committed to spudding their first wells within the first 12-14 months. The drilling results are starting to come in. Although no significant oil discovery has been made yet, the country expects to speed up its exploration activities in the next few years. Following the first round of licensing onshore, Myanmar Oil and Gas Enterprise (MOGE), the national oil company, is negotiating terms for offshore blocks as well as additional onshore blocks for improved oil recovery (IOR) and rehabilitation/redevelopment rights for existing fields.

  6. Striding networks of inter-process communication based on TCP/IP protocol

    International Nuclear Information System (INIS)

    Lou Yi; Chen Haichun; Qian Jing; Chen Zhuomin

    2004-01-01

A mode of process/thread communication between the QNX and WINDOWS operating systems on heterogeneous computers is described. It has proved in practice to be an entirely feasible mode with high efficiency and reliability. A socket created with the Socket API is used to communicate between the two operating systems. (authors)

  7. 78 FR 61358 - Mylan, Inc., Agila Specialties Global Pte. Limited, Agila Specialties Private Limited and Strides...

    Science.gov (United States)

    2013-10-03


  8. Striding Out in the Opposite Direction: The Journalism Career of George Samuel Schuyler, Iconoclast.

    Science.gov (United States)

    Fleener, Nickieann

    Written in a biographical framework, an overview of George Samuel Schuyler's life and his career in journalism is presented in this paper. The preparation that Schuyler had for his career, the positions within journalism he held, and the major writing projects he undertook are discussed. The question of whether there is anything unique or…

  9. Replacement of a vessel head, an operation which today gets easily into its stride

    International Nuclear Information System (INIS)

    Mardon, P.; Chaumont, J.C.; Lambiotte, P.

    1995-01-01

In 1992, one year after the detection of a leak in a vessel head of the Electricite de France (EDF) Bugey 4 reactor, the head was replaced by the Framatome-Jeumont Industrie group. Since then this group, which has developed new methods and new tools to optimize the cost, schedule and dosimetry of this kind of intervention, has performed 11 further replacements, two of them on 1300 MWe power units. This paper describes step by step the successive operations required for a complete vessel head replacement, including the testing of safety systems before restarting the reactor. (J.S.). 7 photos

  10. Marine microbiology: A glimpse of the strides in the Indian and the global arena

    Digital Repository Service at National Institute of Oceanography (India)

    LokaBharathi, P.A.; Nair, S.; Chandramohan, D.

to understand the form and function of bacteria that are responsible for mediating the various processes in the sea. The field has evolved from culture-based ecology to direct quantification of these organisms in different marine niches. Insights into some...

  11. Current strides in AAV-derived vectors and SIN channels further ...

    African Journals Online (AJOL)

    A.S. Odiba

    ... clinical trials in record. This was not without associated side effects or adverse reactions .... incurable blood cancers (though expensive as well as incurring high risks) ..... The various opportunities like the same individual donor and recipient ...

  12. Sun Grant Initiative : great strides toward a sustainable and more energy-independent future

    Science.gov (United States)

    2014-09-01

    The Sun Grant Initiative publication, developed by the U.S. Department of Transportation, offers a glimpse of how the Sun Grant Initiative Centers are advancing alternative fuels research. Transportation plays a significant role in biofuels research,...

  13. Concussion Assessment in California Community College Football: Athletic Trainers' Strides toward a Safer Return to Play

    Science.gov (United States)

    Chinn, Nancy Resendes

    2010-01-01

    The purpose of this mixed method study was to compare current practices of athletic trainers in the management of concussion in football at California Community Colleges (CCC) with the concussion management guidelines set forth by the National Athletic Trainers Association (NATA). The study also set out to gain understanding of why some athletic…

  14. PHAZE, Parametric Hazard Function Estimation

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time period, or plant records covering a fixed time period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate…
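    PHAZE itself is a distributed program, but the exponential (constant-hazard) case it covers reduces to a closed form: for n failures observed over a fixed, time-censored period T, the maximum likelihood estimator of the hazard is λ̂ = n/T. A minimal sketch of that estimate (the function name and the normal-approximation standard error are our own illustration, not PHAZE's output):

```python
import math

def exponential_hazard_mle(n_failures, observation_time):
    """MLE of a constant hazard (failure rate) lambda for a
    time-censored homogeneous Poisson process: n failures observed
    over a fixed period T. The MLE is lambda_hat = n / T, with an
    approximate (normal) standard error of lambda_hat / sqrt(n)."""
    lam = n_failures / observation_time
    se = lam / math.sqrt(n_failures) if n_failures > 0 else float("nan")
    return lam, se

# Example: 12 failures recorded over 4000 hours of observed service
lam, se = exponential_hazard_mle(12, 4000.0)
print(f"lambda_hat = {lam:.6f} per hour (approx. s.e. {se:.6f})")
```

    The linear and Weibull models PHAZE also supports have no such closed form and require numerical maximization of the likelihood.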

  15. Bayesian estimation methods in metrology

    International Nuclear Information System (INIS)

    Cox, M.G.; Forbes, A.B.; Harris, P.M.

    2004-01-01

    In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods
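    As a toy illustration of the Bayesian estimation the authors discuss (our own example, not taken from the paper), the conjugate Gaussian case combines prior knowledge of a measurand with a new measurement by precision weighting:

```python
def gaussian_posterior(prior_mean, prior_sd, measurement, measurement_sd):
    """Conjugate Bayesian update for a quantity with a Gaussian prior
    and a single Gaussian measurement of known standard uncertainty.
    Returns the posterior mean and standard deviation, i.e. the
    reported estimate and its standard uncertainty."""
    w0 = 1.0 / prior_sd ** 2        # precision of the prior
    w1 = 1.0 / measurement_sd ** 2  # precision of the measurement
    post_var = 1.0 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * measurement)
    return post_mean, post_var ** 0.5

# Prior: 100.0 with u = 0.2; new measurement: 100.1 with u = 0.1
m, s = gaussian_posterior(100.0, 0.2, 100.1, 0.1)
print(f"posterior: {m:.3f} +/- {s:.3f}")
```

    The posterior pulls the estimate toward the more precise of the two inputs, and the posterior uncertainty is smaller than either input uncertainty.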

  16. Residual risk over-estimated

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several independently operating safety systems. But the residual risk, a 'hypothetical' uncontrollable incident with catastrophic effects, remains the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the meeting 'Reactor Safety Research' in Cologne showed that risk studies so far were too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents only occur if almost all safety systems fail, that accidents proceed much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than assumed until now. (orig.) [de

  17. Neutron background estimates in GESA

    Directory of Open Access Journals (Sweden)

    Fernandes A.C.

    2014-01-01

    The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10⁻⁸ cm⁻² s⁻¹, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements which have led to a reduction in the extrinsic neutron background to ∼ 5 × 10⁻³ evts/kgd. The calculated event rate induced by the neutron background is ∼ 0.3 evts/kgd, with a dominant contribution from the detector container.

  18. Mergers as an Omega estimator

    International Nuclear Information System (INIS)

    Carlberg, R.G.

    1990-01-01

    The redshift dependence of the fraction of galaxies which are merging or strongly interacting is a steep function of Omega and depends on the ratio of the cutoff velocity for interactions to the pairwise velocity dispersion. For typical galaxies the merger rate is shown to vary as (1 + z)^m, where m is about 4.51 Omega^0.42, for Omega near 1 and a CDM-like cosmology. The index m has a relatively weak dependence on the maximum merger velocity, the mass of the galaxy, and the background cosmology, for small variations around a cosmology with a low redshift of galaxy formation, z of about 2. Estimates of m from optical and IRAS galaxies have found that m is about 3-4, but with very large uncertainties. If quasar evolution follows the evolution of galaxy merging and m for quasars is greater than 4, then Omega is greater than 0.8. 21 refs

  19. 2007 Estimated International Energy Flows

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Belles, R D; Simon, A J

    2011-03-10

    An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.

  20. Data Handling and Parameter Estimation

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist

    2016-01-01

    Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal… However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject. For the models selected to interpret the experimental data, this chapter uses available models from literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) …

  1. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.

  2. Effort Estimation in BPMS Migration

    Directory of Open Access Journals (Sweden)

    Christopher Drews

    2018-04-01

    Business Process Management Systems (BPMS) are usually highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces pushing organizations to perform this step, e.g. the maintenance costs of a legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.

  3. Supplemental report on cost estimates

    International Nuclear Information System (INIS)

    1992-01-01

    The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled, ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis.

  4. Age Estimation in Forensic Sciences

    Science.gov (United States)

    Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.

    2010-01-01

    Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death as well as gender can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (¹⁴C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R² = 0.66, p …). Aspartic acid racemization also showed a good precision with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905

  5. Runoff estimation in residential area

    Directory of Open Access Journals (Sweden)

    Meire Regina de Almeida Siqueira

    2013-12-01

    This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID) system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
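    The USDA-NRCS Curve-Number method applied in this study has a standard closed form: potential retention S = 25400/CN − 254 (mm), initial abstraction Ia = 0.2 S, and runoff depth Q = (P − Ia)² / (P − Ia + S) once rainfall P exceeds Ia. A minimal sketch in SI units (the function name and the example values are ours, not the study's data):

```python
def scs_runoff_mm(rainfall_mm, curve_number):
    """USDA-NRCS Curve-Number runoff depth in mm (SI form).
    S is the potential maximum retention; runoff begins only once
    rainfall exceeds the initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: an 80 mm storm on a fairly impermeable catchment (CN = 85)
q = scs_runoff_mm(80.0, 85)
print(f"runoff depth: {q:.1f} mm")
```

    Lowering the curve number (e.g. through reforestation, as proposed for the Guaratinguetá-Piagui Peak slope) directly reduces the computed runoff depth.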

  6. Estimated status 2006-2015

    International Nuclear Information System (INIS)

    2003-01-01

    According to article 6 of the French law from February 10, 2000 relative to the modernization and development of the electric public utility, the manager of the public power transportation grid (RTE) has to produce, at least every two years and under the control of the French government, a pluri-annual estimated status. Then, the energy ministry uses this status to prepare the pluri-annual planning of power production investments. The estimated status aims at establishing a medium- and long-term diagnosis of the balance between power supply and demand and at evaluating the new production capacity needs to ensure a durable security of power supplies. The hypotheses relative to the power consumption and to the evolution of the power production means and trades are presented in chapters 2 to 4. Chapter 5 details the methodology and modeling principles retained for the supply-demand balance simulations. Chapter 6 presents the probabilistic simulation results at the 2006, 2010 and 2015 prospects and indicates the volumes of reinforcement of the production parks which would warrant an acceptable level of security. Chapter 7 develops the critical problem of winter demand peaks and evokes the possibilities linked with demand reduction, market resources and use of the existing park. Finally, chapter 8 makes a synthesis of the technical conclusions and recalls the determining hypotheses that have been retained. The particular situations of western France, of the Mediterranean and Paris region, and of Corsica and overseas territories are examined in chapter 9. The simulation results for all consumption-production scenarios and the wind-power production data are presented in appendixes. (J.S.)

  7. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.

  8. Estimation of Poverty in Small Areas

    Directory of Open Access Journals (Sweden)

    Agne Bikauskaite

    2014-12-01

    Qualitative techniques of poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates within strata deteriorates (i.e. the precision decreases as the standard deviation increases) when the sample size is smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.

  9. Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using…
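    For a single far-field receiver pair, the TDOA-to-DOA relation underlying such estimators is the textbook formula θ = arcsin(cτ/d). A minimal sketch of that conversion step (this is the standard relation only, not the constrained-filter estimator the paper proposes; the function name is ours):

```python
import math

def doa_from_tdoa(tdoa_s, mic_spacing_m, speed_of_sound=343.0):
    """Far-field DOA (degrees) for a two-microphone pair from a TDOA
    estimate: theta = arcsin(c * tau / d). The argument is clipped to
    [-1, 1] to guard against noisy TDOA estimates that would otherwise
    fall outside the domain of arcsin."""
    x = speed_of_sound * tdoa_s / mic_spacing_m
    x = max(-1.0, min(1.0, x))
    return math.degrees(math.asin(x))

# A source at 30 degrees with 10 cm spacing gives tau = d*sin(30)/c
angle = doa_from_tdoa(0.05 / 343.0, 0.1)
print(f"estimated DOA: {angle:.1f} degrees")
```

    In practice the TDOA (or the narrowband phase differences it corresponds to) is itself estimated from noisy data, which is where the paper's minimum-variance filtering enters.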

  10. On the relation between S-Estimators and M-Estimators of multivariate location and covariance

    NARCIS (Netherlands)

    Lopuhaa, H.P.

    1987-01-01

    We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of…

  11. Estimation of the energy needs; Estimation des besoins energetiques

    Energy Technology Data Exchange (ETDEWEB)

    Ailleret, [Electricite de France (EDF), Dir. General des Etudes de Recherches, 75 - Paris (France)

    1955-07-01

    This report draws up a balance of present energy consumption and of the consumption estimated for the next twenty years. Present energy supply comes mainly from coal, oil products and electricity, the latter essentially hydraulic. Market growth is driven mainly by the development of industrial activity and by new applications that depend on the cost and the distribution of electric energy. To this end, atomic energy offers good industrial prospects as a complement to present energy resources in order to meet the new needs. (M.B.) [French] Le present rapport dresse le bilan sur la consommation energetique actuelle et previsionnelle pour les vingt prochaines annees. L'energie actuelle provient principalement consommation de charbon, de produits petroliers et d'energie electrique essentiellement hydraulique. l'evolution du marche provient essentielement du developpement l'activite industriel et de nouvelles applications tributaire du cout et de la distribution de l'energie electrique. A cet effet, l'energie atomique offre de bonne perspectives industrielles en complement des sources actuelles energetiques afin de repondre aux nouveaux besoins. (M.B.)

  12. How Valid are Estimates of Occupational Illness?

    Science.gov (United States)

    Hilaski, Harvey J.; Wang, Chao Ling

    1982-01-01

    Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)

  13. State estimation for a hexapod robot

    CSIR Research Space (South Africa)

    Lubbe, Estelle

    2015-09-01

    This paper introduces a state estimation methodology for a hexapod robot that makes use of proprioceptive sensors and a kinematic model of the robot. The methodology focuses on providing reliable full pose state estimation for a commercially...

  14. Access Based Cost Estimation for Beddown Analysis

    National Research Council Canada - National Science Library

    Pennington, Jasper E

    2006-01-01

    The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...

  15. Estimated annual economic loss from organ condemnation ...

    African Journals Online (AJOL)

    as a basis for the analysis of estimation of the economic significance of bovine .... percent involvement of each organ were used in the estimation of the financial loss from organ .... DVM thesis, Addis Ababa University, Faculty of Veterinary.

  16. Velocity Estimate Following Air Data System Failure

    National Research Council Canada - National Science Library

    McLaren, Scott A

    2008-01-01

    .... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...

  17. On Estimating Quantiles Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Berger Yves G.

    2015-03-01

    We propose a transformation-based approach for estimating quantiles using auxiliary information. The proposed estimators can be easily implemented using a regression estimator. We show that the proposed estimators are consistent and asymptotically unbiased. The main advantage of the proposed estimators is their simplicity. Although the proposed estimators are not necessarily more efficient than their competitors, they offer a good compromise between accuracy and simplicity. They can be used under single and multistage sampling designs with unequal selection probabilities. A simulation study supports our findings and shows that the proposed estimators are robust and of an acceptable accuracy compared to alternative estimators, which can be more computationally intensive.
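    The transformation-based estimator itself is more involved; as a minimal related sketch, a design-weighted quantile under unequal selection probabilities can be obtained by inverting the weighted empirical CDF (the function and its no-interpolation convention are our own simplification, not the paper's method):

```python
def weighted_quantile(values, weights, p):
    """Design-weighted sample quantile: invert the weighted empirical
    CDF at probability p. Weights play the role of survey design
    weights (inverse selection probabilities); the first value whose
    cumulative weight share reaches p is returned."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= p * total:
            return v
    return pairs[-1][0]

# Upweighting the largest observation shifts the weighted median up
med = weighted_quantile([1, 2, 3, 4], [1, 1, 1, 10], 0.5)
print(med)
```

    With equal weights this reduces to an ordinary (lower) sample quantile.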

  18. On Estimation and Testing for Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2013-01-01

    Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research

  19. Estimating the NIH efficient frontier.

    Directory of Open Access Journals (Sweden)

    Dimitrios Bisias

    Full Text Available BACKGROUND: The National Institutes of Health (NIH is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL. The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current or reduction in risk (22% to 35% vs. current are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent

  20. Estimating the NIH efficient frontier.

    Science.gov (United States)

    Bisias, Dimitrios; Lo, Andrew W; Watkins, James F

    2012-01-01

    The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of

  1. Estimating the NIH Efficient Frontier

    Science.gov (United States)

    2012-01-01

    Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
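    As a toy sketch of the mean-variance machinery behind an efficient frontier such as the one estimated in these records (a two-asset example of our own, not the paper's 7-institute portfolio or its data):

```python
def min_variance_weight(sd1, sd2, corr):
    """Weight on asset 1 in the minimum-variance two-asset portfolio:
    w1 = (s2^2 - rho*s1*s2) / (s1^2 + s2^2 - 2*rho*s1*s2)."""
    cov = corr * sd1 * sd2
    return (sd2 ** 2 - cov) / (sd1 ** 2 + sd2 ** 2 - 2 * cov)

def portfolio_sd(w1, sd1, sd2, corr):
    """Volatility of a two-asset mix with weight w1 on asset 1."""
    cov = corr * sd1 * sd2
    var = (w1 ** 2) * sd1 ** 2 + ((1 - w1) ** 2) * sd2 ** 2 \
        + 2 * w1 * (1 - w1) * cov
    return var ** 0.5

# Two uncorrelated "assets" with volatilities 20% and 10%
w = min_variance_weight(0.2, 0.1, 0.0)
sd = portfolio_sd(w, 0.2, 0.1, 0.0)
print(f"w1 = {w:.2f}, portfolio sd = {sd:.4f}")
```

    The diversified mix has lower volatility than either asset alone; sweeping a grid of target expected returns and minimizing variance at each one traces out the efficient frontier.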

  2. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
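The classical ratio estimator that such a generalized family builds on can be sketched as follows; the population, the rough proportionality of y to the auxiliary variable x, and the sampling interval are all invented for illustration and are not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population in which y is roughly proportional to a known auxiliary x
N = 1000
x = rng.uniform(10.0, 50.0, N)
y = 2.0 * x + rng.normal(0.0, 5.0, N)
X_bar = x.mean()                        # auxiliary population mean, assumed known

# Systematic sample: every k-th unit from a random start
k = 20
start = rng.integers(k)
idx = np.arange(start, N, k)

# Classical ratio estimator of the population mean of y
ratio_est = (y[idx].mean() / x[idx].mean()) * X_bar
print(ratio_est, y.mean())
```

The ratio estimator exploits the correlation between y and x; the generalized versions in the abstract additionally fold in coefficients of correlation, kurtosis and variation, which are not sketched here.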

  3. Fast and Statistically Efficient Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom

    2016-01-01

    Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....

  4. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
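One of the conventional bandwidth estimators such a comparison would typically include is Silverman's normal-reference rule of thumb. Below is a minimal sketch paired with a Gaussian-kernel density estimate; the data and evaluation grid are illustrative, and this is not the study's own benchmark code.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb (normal reference rule) for a Gaussian kernel."""
    n = len(x)
    sigma = min(x.std(ddof=1),
                (np.percentile(x, 75) - np.percentile(x, 25)) / 1.349)
    return 0.9 * sigma * n ** (-1 / 5)

def kde(x_grid, data, h):
    """Gaussian-kernel density estimate evaluated on x_grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(data)
grid = np.linspace(-4.0, 4.0, 201)
dens = kde(grid, data, h)
print(h, dens.max())
```

Rules of this kind perform well on near-Gaussian data but can oversmooth multimodal densities, which is precisely why comparative studies on realistic pattern-recognition tasks are informative.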

  5. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  6. Carleman estimates for some elliptic systems

    International Nuclear Information System (INIS)

    Eller, M

    2008-01-01

A Carleman estimate for a certain first-order elliptic system is proved. The proof is elementary and does not rely on pseudo-differential calculus. This estimate is used to prove Carleman estimates for the isotropic Lamé system as well as for the isotropic Maxwell system with C¹ coefficients

  7. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
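The bookkeeping described, daily net carbon gain as integrated gross photosynthesis minus dark respiration, can be illustrated with invented numbers; the light-response curve, the PAR profile, and the Rd value below are placeholders, not the model's calibrated parameters.

```python
import numpy as np

hours = np.arange(24)
# Hypothetical diurnal PAR profile (zero at night), in umol photons m^-2 s^-1
par = np.maximum(0.0, np.sin((hours - 6) / 12 * np.pi)) * 1500.0
# Rectangular-hyperbola light response for Pgross, in umol CO2 m^-2 s^-1
p_gross = 30.0 * par / (par + 300.0)
r_d = 2.0                              # canopy dark respiration, umol CO2 m^-2 s^-1

# Daily net carbon gain: integrate (Pgross - Rd) over the day, convert to mol
daily_net = float((p_gross - r_d).sum() * 3600.0 / 1e6)   # mol CO2 m^-2 day^-1
print(round(daily_net, 3))
```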

  8. Estimating uncertainty of data limited stock assessments

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro

    2017-01-01

    -limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock...

  9. Another look at the Grubbs estimators

    KAUST Repository

    Lombard, F.; Potgieter, C.J.

    2012-01-01

    of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit

  10. Load Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune

    2007-01-01

    When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...

  11. Non-Parametric Estimation of Correlation Functions

    DEFF Research Database (Denmark)

    Brincker, Rune; Rytter, Anders; Krenk, Steen

    In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...

  12. Bayesian techniques for surface fuel loading estimation

    Science.gov (United States)

    Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell

    2016-01-01

    A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...

  13. Estimation of exposed dose, 1

    International Nuclear Information System (INIS)

    Okajima, Shunzo

    1976-01-01

Radioactive atomic fallout in the Nishiyama district of Nagasaki Prefecture is reported on the basis of surveys conducted since 1969. In 1969, the amount of ¹³⁷Cs in the bodies of 50 inhabitants of the Nishiyama district was measured using a human counter and compared with that of a non-exposed group. The average value of ¹³⁷Cs (pCi/kg) was higher in inhabitants of the Nishiyama district (38.5 in men and 24.9 in females) than in the controls (25.5 in men and 14.9 in females). The resurvey in 1971 showed that the amount of ¹³⁷Cs had decreased to 76% in men and 60% in females. When the amount of ¹³⁷Cs in the body was calculated from the chemical analysis of urine, it was 29.0 ± 8.2 in men and 29.4 ± 26.2 in females in the Nishiyama district, and 29.9 ± 8.2 in men and 29.4 ± 11.7 in females in the controls. The content of ¹³⁷Cs in soils and crops (potato etc.) was higher in the Nishiyama district than in the controls. When the internal exposure dose per year was calculated from the amount of ¹³⁷Cs in the body in 1969, it was 0.29 mrad/year in men and 0.19 mrad/year in females. Finally, the internal exposure dose immediately after the explosion was estimated. (Serizawa, K.)

  14. Inflation and cosmological parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, J.

    2007-05-15

    In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)

  15. Quantum rewinding via phase estimation

    Science.gov (United States)

    Tabia, Gelo Noel

    2015-03-01

In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask whether classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to showing the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique for the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.

  16. Global Warming Estimation from MSU

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, Robert, Jr.

    1999-01-01

In this study, we have developed time series of global temperature for 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 ± 0.06 K/decade during 1980-97.

  17. Estimates of LLEA officer availability

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1978-05-01

    One element in the Physical Protection of Nuclear Material in Transit Program is a determination of the number of local law enforcement agency (LLEA) officers available to respond to an attack upon a special nuclear material (SNM) carrying convoy. A computer model, COPS, has been developed at Sandia Laboratories to address this problem. Its purposes are to help identify to the SNM shipper areas along a route which may have relatively low police coverage and to aid in the comparison of alternate routes to the same location. Data bases used in COPS include population data from the Bureau of Census and police data published by the FBI. Police are assumed to be distributed in proportion to the population, with adjustable weighting factors. Example results illustrating the model's capabilities are presented for two routes between Los Angeles, CA, and Denver, CO, and for two routes between Columbia, SC, and Syracuse, NY. The estimated police distribution at points along the route is presented. Police availability as a function of time is modeled based on the time-dependent characteristics of a trip. An example demonstrating the effects of jurisdictional restrictions on the size of the response force is given. Alternate routes between two locations are compared by means of cumulative plots

  18. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
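A toy, one-dimensional caricature of the niche-level idea, clustering the population and alternating Gaussian and Cauchy offspring distributions between generations, is sketched below. The sign-based "clustering", the test function, and all settings are simplifications invented here; this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    """Bimodal test function with equal-height peaks at x = -2 and x = +2."""
    return np.exp(-(x - 2.0) ** 2) + np.exp(-(x + 2.0) ** 2)

pop = rng.uniform(-4.0, 4.0, 40)
for gen in range(30):
    offspring = []
    # Crude stand-in for clustering: split the population into two "niches" by sign
    for niche in (pop[pop < 0], pop[pop >= 0]):
        if len(niche) < 2:
            continue
        mu, sigma = niche.mean(), niche.std() + 1e-6
        if gen % 2 == 0:
            samples = rng.normal(mu, sigma, len(niche))              # Gaussian generation
        else:
            samples = mu + sigma * rng.standard_cauchy(len(niche))   # Cauchy generation
        offspring.append(samples)
    cand = np.concatenate([pop] + offspring)
    pop = cand[np.argsort(f(cand))[-40:]]    # keep the 40 fittest (elitist truncation)

print(pop.min(), pop.max())
```

Even this caricature shows the intent: the per-niche models let the population track both optima instead of collapsing onto one, while the heavy-tailed Cauchy generations occasionally probe far from the niche means.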

  19. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  20. Indirect estimators in US federal programs

    CERN Document Server

    1996-01-01

In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight resulting reports, which describe the use of indirect estimators based on case studies from a variety of federal programs. Many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice, one that addresses some of the pitfalls of these methods.

  1. Parameter Estimation in Continuous Time Domain

    Directory of Open Access Journals (Sweden)

    Gabriela M. ATANASIU

    2016-12-01

Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in an area of high seismic risk are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and their comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.

  2. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

  3. A Gaussian IV estimator of cointegrating relations

    DEFF Research Database (Denmark)

    Bårdsen, Gunnar; Haldrup, Niels

    2006-01-01

In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. ...... in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples....

  4. Optimal estimation of the optomechanical coupling strength

    Science.gov (United States)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
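The quantum Cramér-Rao-type inequality invoked here has a standard textbook form; writing ν for the number of independent repetitions and F_Q for the quantum Fisher information of the state family ϱ_g (symbols assumed here for illustration, not taken from the paper):

```latex
\operatorname{Var}(\hat{g}) \;\ge\; \frac{1}{\nu\, F_Q[\varrho_g]}
```

The bound lower-limits the variance of any unbiased estimator of the coupling strength g, which is why maximizing F_Q over interaction times (here, on the order of one mechanical period) identifies the most informative measurement settings.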

  5. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  6. Budget estimates. Fiscal year 1998

    International Nuclear Information System (INIS)

    1997-02-01

The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is the NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and the NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury.


  8. Optimal estimations of random fields using kriging

    International Nuclear Information System (INIS)

    Barua, G.

    2004-01-01

Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area, or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum to one. The problem is how to determine the best weights, those for which the estimation variance is the least. The resulting system of equations is generally known as the kriging system, and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substituting the weights into the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance.
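For the one-dimensional, linear-semi-variogram setting described, ordinary kriging can be sketched directly. The observation locations and values below are invented; the system solved is the standard ordinary-kriging system, with a Lagrange multiplier enforcing unit-sum weights.

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x0, slope=1.0):
    """Ordinary kriging in 1-D with a linear semi-variogram gamma(h) = slope*h.
    Solves the kriging system for the weights w (forced to sum to one via the
    Lagrange multiplier mu) and returns the estimate, kriging variance, weights."""
    n = len(x_obs)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = slope * np.abs(x_obs[:, None] - x_obs[None, :])  # gamma(xi, xj)
    A[:n, n] = 1.0
    A[n, :n] = 1.0                                               # sum of weights = 1
    b = np.append(slope * np.abs(x_obs - x0), 1.0)               # gamma(xi, x0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z_obs, w @ b[:n] + mu, w   # estimate, kriging variance, weights

x_obs = np.array([0.0, 1.0, 3.0])
z_obs = np.array([10.0, 12.0, 11.0])
est, kvar, w = ordinary_kriging(x_obs, z_obs, x0=2.0)
print(est, kvar, w)
```

With a linear variogram in one dimension, the bracketing samples receive all the weight and the estimate reduces to linear interpolation between them, a simple way to sanity-check the solver.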

  9. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on samples with small length. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices. On the two separate periods of 2002-2005 and 2006-2009, we estimate the tail exponent.
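The Hill estimator under discussion has a compact form. The sketch below applies it to exact Pareto data, where it behaves well, rather than to the α-stable samples on which the paper documents its overestimation; the seed, sample size, and tail fraction are arbitrary choices made here.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail exponent alpha from the k largest order
    statistics: alpha_hat = k / sum_{i<=k} log(X_(i) / X_(k+1))."""
    xs = np.sort(np.abs(x))[::-1]          # descending order statistics
    return k / np.log(xs[:k] / xs[k]).sum()

rng = np.random.default_rng(42)
x = rng.pareto(2.0, 100_000) + 1.0         # classical Pareto tail with alpha = 2
alpha_hat = hill_estimator(x, 1000)
print(alpha_hat)                           # should be close to 2 for Pareto data
```

The estimator's sensitivity to the choice of k (the tail size) on non-Pareto data is exactly the weakness the paper's Monte Carlo-based method is designed to avoid.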

  10. Robust bearing estimation for 3-component stations

    International Nuclear Information System (INIS)

    CLAASSEN, JOHN P.

    2000-01-01

    A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings

  11. Iterative Estimation in Turbo Equalization Process

    Directory of Open Access Journals (Sweden)

    MORGOS Lucian

    2014-05-01

Full Text Available This paper presents iterative estimation in the turbo equalization process. Turbo equalization is a reception process in which equalization and decoding are performed together, not as separate processes. For the equalizer to work properly, it must receive, before equalization, accurate information about the channel impulse response. This estimation of the channel impulse response is done by transmitting a training sequence known at the receiver. Knowing both the transmitted and received sequences, an estimate of the channel impulse response can be calculated using one of the well-known estimation algorithms. The estimate can also be recalculated iteratively, based on the sequence data available at the channel output and the estimated sequence data coming from the turbo equalizer output, thereby refining the obtained results.
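The training-sequence step can be illustrated with a least-squares channel estimate; the channel taps, training length, and noise level below are invented for the sketch and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

L = 3                                          # assumed channel length (taps)
h_true = np.array([1.0, 0.5, -0.2])            # hypothetical channel impulse response
train = rng.choice([-1.0, 1.0], 64)            # BPSK training sequence known at receiver
rx = np.convolve(train, h_true) + 0.05 * rng.normal(size=64 + L - 1)

# Least-squares estimate: stack delayed copies of the training sequence into a
# convolution matrix X so that X @ h reproduces the channel output, then solve
# min ||rx - X h||^2 for h.
X = np.column_stack([np.concatenate([np.zeros(d), train, np.zeros(L - 1 - d)])
                     for d in range(L)])
h_est, *_ = np.linalg.lstsq(X, rx, rcond=None)
print(h_est)
```

In the iterative refinement the abstract describes, the same least-squares machinery would be reapplied with the equalizer's decided symbols standing in for the training sequence.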

  12. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
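A generic two-stage weighting of this flavour, fit by ordinary least squares, estimate the conditional variances from squared residuals, then minimize the weighted sum of squares, can be sketched for a simple heteroscedastic regression. This illustrates the weighting idea only; it is not the paper's estimator for Markov or branching processes, and the variance model below is invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Heteroscedastic toy regression: the noise standard deviation equals x, so the
# second stage should downweight observations with large conditional variance.
n = 400
x = rng.uniform(1.0, 5.0, n)
y = 2.0 + 3.0 * x + x * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Stage 1: ordinary least squares
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stage 2: estimate the conditional variance function from squared residuals
# (regressing log r^2 on log x, an assumed parametric form), then reweight.
resid = y - X @ beta_ols
g, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), np.log(x)]),
                        np.log(resid ** 2 + 1e-12), rcond=None)
var_hat = np.exp(g[0] + g[1] * np.log(x))     # estimated conditional variances
sw = 1.0 / np.sqrt(var_hat)                   # square-root inverse-variance weights
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
print(beta_ols, beta_wls)
```

Weighting by inverse estimated conditional variances is also how the estimated generalized least-squares connection noted at the end of the abstract arises.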

  13. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.

  14. Atmospheric Turbulence Estimates from a Pulsed Lidar

    Science.gov (United States)

    Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.

    2013-01-01

    Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.

  15. Cosmochemical Estimates of Mantle Composition

    Science.gov (United States)

    Palme, H.; O'Neill, H. St. C.

    2003-12-01

    , and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996). At the beginning of the twentieth century Harkins at the University of Chicago thought that meteorites would provide a better estimate for the bulk composition of the Earth than the terrestrial rocks collected at the surface as we have only access to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986). If the Earth were similar compositionally to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron is concentrated in the interior of the Earth. The presence of a central metallic core to the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established using the study of seismic wave propagation by Oldham in 1906 with the outer boundary of the core accurately located at a depth of 2,900 km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted.
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush

  16. Entropy estimates of small data sets

    Energy Technology Data Exchange (ETDEWEB)

    Bonachela, Juan A; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hinrichsen, Haye [Fakultaet fuer Physik und Astronomie, Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany)

    2008-05-23

    Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
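
For context, the naive plug-in estimator and the first-order Miller-Madow bias correction it is usually compared against can be sketched as follows. These are the standard baselines, not the 'balanced estimator' proposed in the record.

```python
import math
from collections import Counter

def plugin_entropy(samples):
    """Naive plug-in Shannon entropy in nats; biased low for short series."""
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())

def miller_madow(samples):
    """Plug-in estimate plus the first-order (K-1)/(2N) bias correction,
    where K is the number of observed symbols."""
    k = len(set(samples))
    return plugin_entropy(samples) + (k - 1) / (2.0 * len(samples))
```

On four samples of a fair binary source, the plug-in value is log 2 and the correction adds 1/8 nat.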

  17. Entropy estimates of small data sets

    International Nuclear Information System (INIS)

    Bonachela, Juan A; Munoz, Miguel A; Hinrichsen, Haye

    2008-01-01

    Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)

  18. Relative Pose Estimation Algorithm with Gyroscope Sensor

    Directory of Open Access Journals (Sweden)

    Shanshan Wei

    2016-01-01

    Full Text Available This paper proposes a novel vision and inertial fusion algorithm S2fM (Simplified Structure from Motion) for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation and translation parameters separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Under the circumstance that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.

  19. Nondestructive, stereological estimation of canopy surface area

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.

    2010-01-01

    We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from...... a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both...... the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...
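
Stage (iii), area estimation by point counting, amounts to counting the hits of a systematic grid and multiplying by the area each grid point represents. A toy sketch with a hypothetical rectangular "leaf" (all numbers made up):

```python
def point_count_area(inside, grid_points, area_per_point):
    """Stereological point counting: number of grid hits times the area
    represented by each grid point."""
    return area_per_point * sum(1 for p in grid_points if inside(p))

# Systematic grid, spacing 0.5, offset 0.25 (each point represents 0.25 units)
grid = [(0.25 + 0.5 * i, 0.25 + 0.5 * j) for i in range(8) for j in range(8)]

def leaf(p):
    """Hypothetical rectangular 'leaf' of true area 4.0."""
    return 0.5 <= p[0] <= 2.5 and 1.0 <= p[1] <= 3.0

est = point_count_area(leaf, grid, 0.25)   # 16 hits -> estimate 4.0
```

With a uniformly random grid offset this estimator is unbiased; the grid intensity trades off counting time against the precision the record evaluates.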

  20. Resilient Distributed Estimation Through Adversary Detection

    Science.gov (United States)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

    This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\mathcal{FRDE}$.
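
The consensus+innovations update that FRDE builds on can be sketched for a scalar parameter on a three-agent path graph. This is a minimal illustration under made-up gains and measurements; it omits FRDE's flag-raising and adversary-detection logic entirely.

```python
# Each agent i holds an estimate x[i] of an unknown scalar theta and a noisy
# local measurement y[i]. The consensus term pulls neighbors together; the
# innovations term pulls toward the local measurement.
neighbors = {0: [1], 1: [0, 2], 2: [1]}      # path graph 0 - 1 - 2
y = {0: 4.8, 1: 5.3, 2: 4.9}                 # measurements of theta = 5.0
x = {i: 0.0 for i in neighbors}              # initial estimates
alpha, beta = 0.1, 0.2                       # assumed (made-up) gains

for _ in range(500):
    x = {i: x[i]
            - beta * sum(x[i] - x[j] for j in neighbors[i])   # consensus
            + alpha * (y[i] - x[i])                           # innovations
         for i in neighbors}
# All three estimates settle near theta = 5.0.
```

With constant gains the estimates converge to a neighborhood of the truth; the decaying-gain versions analyzed in this literature achieve exact consistency.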

  1. ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS

    Directory of Open Access Journals (Sweden)

    Muhammad Zahid Rashid

    2011-04-01

    Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values for the parameters and different sample sizes.
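
Of the estimators compared, the maximum likelihood estimators have a particularly simple closed form for the two-parameter exponential distribution: the sample minimum estimates the location and the mean excess over it estimates the scale. A sketch (the paper's modified variants correct the upward bias of the minimum):

```python
def two_param_exp_mle(data):
    """MLEs for the two-parameter exponential: location = sample minimum,
    scale = sample mean minus that minimum."""
    loc = min(data)
    scale = sum(data) / len(data) - loc
    return loc, scale
```

For example, the sample [2.0, 3.0, 5.0] gives location 2.0 and scale 10/3 - 2 = 4/3.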

  2. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...
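
The CDE mentioned in the record estimates the Doppler centroid from the phase of the lag-one autocorrelation of the complex azimuth samples. A minimal sketch on a synthetic tone (the function name and test numbers are my own, not from the paper):

```python
import cmath

def cde_doppler_centroid(samples, prf):
    """Correlation Doppler estimator: centroid frequency from the phase of
    the lag-one autocorrelation of complex azimuth samples."""
    acf1 = sum(samples[n] * samples[n - 1].conjugate()
               for n in range(1, len(samples)))
    return prf * cmath.phase(acf1) / (2.0 * cmath.pi)

# Demo on a synthetic pure tone at 100 Hz with PRF 1 kHz (made-up numbers)
tone = [cmath.exp(2j * cmath.pi * 100.0 * n / 1000.0) for n in range(64)]
f_est = cde_doppler_centroid(tone, 1000.0)   # recovers 100 Hz
```

Because the estimate comes from a phase, it is unambiguous only within ±PRF/2 of the reference frequency.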

  3. Science yield estimation for AFTA coronagraphs

    Science.gov (United States)

    Traub, Wesley A.; Belikov, Ruslan; Guyon, Olivier; Kasdin, N. Jeremy; Krist, John; Macintosh, Bruce; Mennesson, Bertrand; Savransky, Dmitry; Shao, Michael; Serabyn, Eugene; Trauger, John

    2014-08-01

    We describe the algorithms and results of an estimation of the science yield for five candidate coronagraph designs for the WFIRST-AFTA space mission. The targets considered are of three types, known radial-velocity planets, expected but as yet undiscovered exoplanets, and debris disks, all around nearby stars. The results of the original estimation are given, as well as those from subsequently updated designs that take advantage of experience from the initial estimates.

  4. Estimating Elevation Angles From SAR Crosstalk

    Science.gov (United States)

    Freeman, Anthony

    1994-01-01

    Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.

  5. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

    This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...

  6. Application of spreadsheet to estimate infiltration parameters

    OpenAIRE

    Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed

    2016-01-01

    Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...

  7. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  8. State energy data report 1994: Consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.

  9. Self-learning estimation of quantum states

    International Nuclear Information System (INIS)

    Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.

    2002-01-01

    We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped 171Yb+ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence.

  10. Estimation of Correlation Functions by Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique, estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique....
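
One of the simplest random decrement formulations triggers on level up-crossings and averages the segments that follow. A minimal sketch (one of several possible trigger conditions; names are illustrative):

```python
def random_decrement(x, level, seg_len):
    """Average the seg_len-sample segments that start at each up-crossing of
    `level`; for a zero-mean Gaussian response this signature is proportional
    to the correlation function."""
    segs = [x[i:i + seg_len] for i in range(1, len(x) - seg_len)
            if x[i - 1] < level <= x[i]]
    return [sum(s[k] for s in segs) / len(segs) for k in range(seg_len)]

# On the periodic toy signal 0, 1, 0, -1, ... every trigger sees the same
# segment, so the signature reproduces it exactly:
sig = random_decrement([0, 1, 0, -1] * 10, 0.5, 3)   # [1.0, 0.0, -1.0]
```

The choice of trigger condition and segment length governs the speed/accuracy trade-off the record investigates.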

  11. State energy data report 1994: Consumption estimates

    International Nuclear Information System (INIS)

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included

  12. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS) with an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  13. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  14. Outer planet probe cost estimates: First impressions

    Science.gov (United States)

    Niehoff, J.

    1974-01-01

    An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.

  15. Application of spreadsheet to estimate infiltration parameters

    Directory of Open Access Journals (Sweden)

    Mohammad Zakwan

    2016-09-01

    Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rate available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most of the cases, based on both approaches of parameter estimation.
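
As a cross-check on solver-based fits, the Kostiakov rate model f(t) = c·t^m can also be fitted in closed form by ordinary least squares on logarithms. This is a sketch, not the GRG solver the record uses, and the function name is made up:

```python
import math

def fit_kostiakov(t, f):
    """Fit the Kostiakov rate model f(t) = c * t**m by ordinary least squares
    on log f versus log t (closed form, no iterative solver needed)."""
    X = [math.log(ti) for ti in t]
    Y = [math.log(fi) for fi in f]
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    m = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
         / sum((x - xbar) ** 2 for x in X))
    c = math.exp(ybar - m * xbar)
    return c, m
```

Note that least squares in log space weights the data differently from least squares on the raw rates, which is one reason the paper's direct nonlinear fit can give different parameters.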

  16. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
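
Quantile estimation of this kind rests on the pinball (quantile) loss; a minimal unconditional version fitted by subgradient descent illustrates the criterion. This is a sketch, not the record's kernel/neural construction, and the step size and iteration count are arbitrary:

```python
def quantile_estimate(data, tau, lr=0.05, steps=4000):
    """Estimate the tau-quantile of `data` by subgradient descent on the
    pinball loss sum_i [tau*(x_i - q)^+ + (1 - tau)*(q - x_i)^+]."""
    q = sum(data) / len(data)          # start from the sample mean
    n = len(data)
    for _ in range(steps):
        # Subgradient of the pinball loss with respect to q
        g = sum((1.0 - tau) if x <= q else -tau for x in data) / n
        q -= lr * g
    return q
```

For tau = 0.5 the loss reduces to (half) the absolute error, so the minimizer is the median, which is robust to the outlier in a sample such as [1, 2, 3, 100].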

  17. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  18. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)

    2013-03-10

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.
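
For the extreme case of 1-bit quantization, the classical van Vleck arcsine law maps the correlation of the quantized signals back to that of the underlying Gaussian signals. This standard textbook estimator, not the paper's maximum-likelihood one, can be sketched as:

```python
import math

def onebit_correlation(x, y):
    """Correlate 1-bit quantized copies of x and y, then invert the van Vleck
    arcsine law rho = sin(pi/2 * rho_1bit) to recover the correlation of the
    underlying zero-mean Gaussian signals."""
    sx = [1.0 if v >= 0 else -1.0 for v in x]
    sy = [1.0 if v >= 0 else -1.0 for v in y]
    r1 = sum(a * b for a, b in zip(sx, sy)) / len(x)
    return math.sin(math.pi * r1 / 2.0)
```

The large noise of product-averaging estimators at strong correlation, noted in the abstract, shows up here as amplification of fluctuations in r1 by the sine mapping near ±1.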

  19. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  20. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  1. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    2015-01-01

    From Crofton’s formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators....... These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting....

  2. OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS

    International Nuclear Information System (INIS)

    Johnson, M. D.; Chou, H. H.; Gwinn, C. R.

    2013-01-01

    Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.

  3. Load Estimation from Natural input Modal Analysis

    DEFF Research Database (Denmark)

    Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández

    2005-01-01

    One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...

  4. Towards Greater Harmonisation of Decommissioning Cost Estimates

    International Nuclear Information System (INIS)

    O'Sullivan, Patrick; ); Laraia, Michele; ); LaGuardia, Thomas S.

    2010-01-01

    The NEA Decommissioning Cost Estimation Group (DCEG), in collaboration with the IAEA Waste Technology Section and the EC Directorate-General for Energy and Transport, has recently studied cost estimation practices in 12 countries - Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Slovakia, Spain, Sweden, the United Kingdom and the United States. Its findings are to be published in an OECD/NEA report entitled Cost Estimation for Decommissioning: An International Overview of Cost Elements, Estimation Practices and Reporting Requirements. This booklet highlights the findings contained in the full report. (authors)

  5. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
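
The linear arc method evaluated in the study reduces to a great-circle distance divided by an assumed average speed. A sketch (the haversine formula is standard; the default speed is a placeholder, not a value calibrated by the study):

```python
import math

def linear_arc_minutes(lat1, lon1, lat2, lon2, speed_kmh=60.0):
    """Great-circle (haversine) distance between two points, converted to a
    travel time at an assumed constant speed."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2.0) ** 2)
    dist_km = 2.0 * r * math.asin(math.sqrt(a))
    return 60.0 * dist_km / speed_kmh
```

Unlike the routing-based methods (Google Maps, ArcGIS Network Analyst), this ignores the road network entirely, which is why it showed the largest errors in the comparison.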

  6. Cost Estimating Handbook for Environmental Restoration

    International Nuclear Information System (INIS)

    1993-01-01

    Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: An entirely new set of specialized expressions and terminology. A higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work. A higher than normal percentage of indirect costs to the total estimated cost due primarily to record keeping, special training, liability, and indemnification. More than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: The basis for accurate, consistent, and traceable baselines. Sound methodologies, guidelines, and estimating formats. Sources of cost data/databases and estimating tools and techniques available to DOE cost professionals

  7. L’estime de soi : un cas particulier d’estime sociale ?

    OpenAIRE

    Santarelli, Matteo

    2016-01-01

    One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by posing the following question: is self-esteem a particular case of social esteem? To do so, I focus on two crucial problems...

  8. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  9. The Problems of Multiple Feedback Estimation.

    Science.gov (United States)

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that an estimating criterion other than least squares is needed. Theoretically, the variance normalization criterion has…

  10. Spectral Estimation by the Random Dec Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, Jacob L.; Krenk, Steen

    1990-01-01

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The autocorrelation function is estimated using the RDD technique and the estimated...
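    The level-crossing triggering at the heart of the RDD technique can be sketched in a few lines of pure Python. The AR(2) test signal, trigger level, and segment length below are illustrative choices, not taken from the paper:

    ```python
    import random

    def random_dec(x, trigger, seg_len):
        """Random Decrement signature: average the segments of x that begin
        at upward crossings of the trigger level."""
        starts = [i for i in range(1, len(x) - seg_len)
                  if x[i] >= trigger > x[i - 1]]
        return [sum(x[i + k] for i in starts) / len(starts)
                for k in range(seg_len)]

    # Stable AR(2) process as a stand-in for a white-noise-driven
    # single-degree-of-freedom response.
    random.seed(0)
    x = [0.0, 0.0]
    for _ in range(5000):
        x.append(1.8 * x[-1] - 0.9 * x[-2] + random.gauss(0.0, 1.0))

    trigger = (sum(v * v for v in x) / len(x)) ** 0.5   # one standard deviation
    sig = random_dec(x, trigger, seg_len=50)
    ```

    For a linear system under white-noise loading, this averaged signature is proportional to the autocorrelation function, which is the property the paper evaluates empirically.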

  11. Spectral Estimation by the Random DEC Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Jensen, J. Laigaard; Krenk, S.

    This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The autocorrelation function is estimated using the RDD technique and the estimated...

  12. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  13. Fuel Burn Estimation Using Real Track Data

    Science.gov (United States)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
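    The integration step described above can be sketched as follows. The fuel-flow coefficients here are invented stand-ins (actual BADA coefficients are not reproduced), and the track points are hypothetical:

    ```python
    def fuel_flow_kg_s(thrust_kn, tas_mps, cf1=0.6, cf2=0.001):
        """BADA-style thrust-specific fuel flow (kg/s); cf1 and cf2 are
        illustrative coefficients, not actual BADA values."""
        return thrust_kn * cf1 * (1.0 + cf2 * tas_mps) / 1000.0

    def fuel_burned_kg(t_s, thrust_kn, tas_mps):
        """Trapezoidal integration of fuel flow along a flight track."""
        burn = 0.0
        for i in range(1, len(t_s)):
            f0 = fuel_flow_kg_s(thrust_kn[i - 1], tas_mps[i - 1])
            f1 = fuel_flow_kg_s(thrust_kn[i], tas_mps[i])
            burn += 0.5 * (f0 + f1) * (t_s[i] - t_s[i - 1])
        return burn

    # Three track points one minute apart, with thrust estimated upstream
    # from the aircraft states as the paper describes.
    burn = fuel_burned_kg([0.0, 60.0, 120.0],
                          [120.0, 110.0, 100.0],   # thrust, kN
                          [230.0, 235.0, 240.0])   # true airspeed, m/s
    ```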

  14. Uncertainty Measures of Regional Flood Frequency Estimators

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik

    1995-01-01

    Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...

  15. Multisensor simultaneous vehicle tracking and shape estimation

    NARCIS (Netherlands)

    Elfring, J.; Appeldoorn, R.P.W.; Kwakkernaat, M.R.J.A.E.

    2016-01-01

    This work focuses on vehicle automation applications that require both the estimation of kinematic and geometric information of surrounding vehicles, e.g., automated overtaking or merging. Rather than using one sensor that is able to estimate a vehicle's geometry from each sensor frame, e.g., a

  16. Decommissioning Cost Estimating - The "PRICE" Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  17. Estimation of biochemical variables using quantum-behaved particle ...

    African Journals Online (AJOL)

    To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experiment results of L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...

  18. Estimated water use in Puerto Rico, 2010

    Science.gov (United States)

    Molina-Rivera, Wanda L.

    2014-01-01

    Water-use data were aggregated for the 78 municipios of the Commonwealth of Puerto Rico for 2010. Five major offstream categories were considered: public-supply water withdrawals and deliveries, domestic and industrial self-supplied water use, crop-irrigation water use, and thermoelectric-power freshwater use. One instream water-use category also was compiled: power-generation instream water use (thermoelectric saline withdrawals and hydroelectric power). Freshwater withdrawals for offstream use from surface-water [606 million gallons per day (Mgal/d)] and groundwater (118 Mgal/d) sources in Puerto Rico were estimated at 724 million gallons per day. The largest amount of freshwater withdrawn was by public-supply water facilities estimated at 677 Mgal/d. Public-supply domestic water use was estimated at 206 Mgal/d. Fresh groundwater withdrawals by domestic self-supplied users were estimated at 2.41 Mgal/d. Industrial self-supplied withdrawals were estimated at 4.30 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 38.2 Mgal/d, or approximately 5 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 556 Mgal/d, and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities were estimated at 2,262 Mgal/d.

  19. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is being discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  20. Uranium mill tailings and risk estimation

    International Nuclear Information System (INIS)

    Marks, S.

    1984-04-01

    Work done in estimating projected health effects for persons exposed to mill tailings at vicinity properties is described. The effect of the reassessment of exposures at Hiroshima and Nagasaki on the risk estimates for gamma radiation is discussed. A presentation of current results in the epidemiological study of Hanford workers is included. 2 references

  1. New U.S. Foodborne Illness Estimate

    Centers for Disease Control (CDC) Podcasts

    This podcast discusses CDC's report on new estimates of illnesses due to eating contaminated food in the United States. Dr. Elaine Scallan, assistant professor at the University of Colorado and former lead of the CDC's FoodNet surveillance system, shares the details from the first new comprehensive estimates of foodborne illness in the U.S. since 1999.

  2. Estimating light-vehicle sales in Turkey

    Directory of Open Access Journals (Sweden)

    Ufuk Demiroğlu

    2016-09-01

    Full Text Available This paper is motivated by the surprising rapid growth of new light-vehicle sales in Turkey in 2015. Domestic sales grew 25%, dramatically surpassing the industry estimates of around 8%. Our approach is to inform the sales trend estimate with the information obtained from the light-vehicle stock (the number of cars and light trucks officially registered in the country, and the scrappage data. More specifically, we improve the sales trend estimate by estimating the trend of its stock. Using household data, we show that an important reason for the rapid sales growth is that an increasing share of household budgets is spent on automobile purchases. The elasticity of light-vehicle sales to cyclical changes in aggregate demand is high and robust; its estimates are around 6 with a standard deviation of about 0.5. The price elasticity of light-vehicle sales is estimated to be about 0.8, but the estimates are imprecise and not robust. We estimate the trend level of light-vehicle sales to be roughly 7 percent of the existing stock. A remarkable out-of-sample forecast performance is obtained for horizons up to nearly a decade by a regression equation using only a cyclical gap measure, the time trend and obvious policy dummies. Various specifications suggest that the strong 2015 growth of light-vehicle sales was predictable in late 2014.

  3. TP89 - SIRZ Decomposition Spectral Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  4. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  5. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency property is considered from a mild set of assumptions. A number of applications...

  6. Velocity Estimation in Medical Ultrasound [Life Sciences]

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Villagómez Hoyos, Carlos Armando; Holbek, Simon

    2017-01-01

    This article describes the application of signal processing in medical ultrasound velocity estimation. Special emphasis is on the relation among acquisition methods, signal processing, and estimators employed. The description spans from current clinical systems for one- and two-dimensional (1-D an...

  7. Varieties of Quantity Estimation in Children

    Science.gov (United States)

    Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco

    2015-01-01

    In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…

  8. Estimating functions for inhomogeneous Cox processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.

  9. Kalman filter to update forest cover estimates

    Science.gov (United States)

    Raymond L. Czaplewski

    1990-01-01

    The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
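    The scalar form of the update described above, blending a model-projected inventory with a newer but noisier monitoring estimate, can be sketched as follows. The forest-cover numbers are invented for illustration:

    ```python
    def kalman_update(x_prior, p_prior, z, r):
        """Blend a model prediction (mean x_prior, variance p_prior) with an
        independent measurement (value z, variance r)."""
        k = p_prior / (p_prior + r)                   # Kalman gain
        return x_prior + k * (z - x_prior), (1.0 - k) * p_prior

    # An expensive inventory projected forward by a growth model (the prior)
    # is adjusted with a cheap, less precise monitoring estimate.
    x, p = 1000.0, 400.0          # projected forest cover (kha) and variance
    x, p = kalman_update(x, p, z=1040.0, r=900.0)
    ```

    The posterior variance is smaller than both the prior and the measurement variance, which is why updating a projection with even imprecise monitoring data pays off.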

  10. Linearized motion estimation for articulated planes.

    Science.gov (United States)

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.

  11. Body composition estimation from selected slices

    DEFF Research Database (Denmark)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara

    2017-01-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total...

  12. Differences between carbon budget estimates unravelled

    NARCIS (Netherlands)

    Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Vuuren, Van Detlef P.; Riahi, Keywan; Allen, Myles; Knutti, Reto

    2016-01-01

    Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust

  13. Differences between carbon budget estimates unravelled

    NARCIS (Netherlands)

    Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Van Vuuren, Detlef P.; Riahi, Keywan; Allen, Myles; Knutti, Reto

    2016-01-01

    Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust

  14. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  15. Estimates of Uncertainty around the RBA's Forecasts

    OpenAIRE

    Peter Tulip; Stephanie Wallace

    2012-01-01

    We use past forecast errors to construct confidence intervals and other estimates of uncertainty around the Reserve Bank of Australia's forecasts of key macroeconomic variables. Our estimates suggest that uncertainty about forecasts is high. We find that the RBA's forecasts have substantial explanatory power for the inflation rate but not for GDP growth.

  16. Cost-estimating for commercial digital printing

    Science.gov (United States)

    Keif, Malcolm G.

    2007-01-01

    The purpose of this study is to document current cost-estimating practices used in commercial digital printing. A research study was conducted to determine the use of cost-estimating in commercial digital printing companies. This study answers the questions: 1) What methods are currently being used to estimate digital printing? 2) What is the relationship between estimating and pricing digital printing? 3) To what extent, if at all, do digital printers use full-absorption, all-inclusive hourly rates for estimating? Three different digital printing models were identified: 1) Traditional print providers, who supplement their offset presswork with digital printing for short-run color and versioned commercial print; 2) "Low-touch" print providers, who leverage the power of the Internet to streamline business transactions with digital storefronts; 3) Marketing solutions providers, who see printing less as a discrete manufacturing process and more as a component of a complete marketing campaign. Each model approaches estimating differently. Understanding and predicting costs can be extremely beneficial. Establishing a reliable system to estimate those costs can be somewhat challenging though. Unquestionably, cost-estimating digital printing will increase in relevance in the years ahead, as margins tighten and cost knowledge becomes increasingly more critical.

  17. Estimating Gender Wage Gaps: A Data Update

    Science.gov (United States)

    McDonald, Judith A.; Thornton, Robert J.

    2016-01-01

    In the authors' 2011 "JEE" article, "Estimating Gender Wage Gaps," they described an interesting class project that allowed students to estimate the current gender earnings gap for recent college graduates using data from the National Association of Colleges and Employers (NACE). Unfortunately, since 2012, NACE no longer…

  18. Regression Equations for Birth Weight Estimation using ...

    African Journals Online (AJOL)

    In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...

  19. Estimating Loan-to-value Distributions

    DEFF Research Database (Denmark)

    Korteweg, Arthur; Sørensen, Morten

    2016-01-01

    We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...

  20. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS

    Science.gov (United States)

    developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to ... determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on

  1. A method of estimating log weights.

    Science.gov (United States)

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
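    The arithmetic behind the method is simple enough to sketch. The density index follows the paper's definition; the log volume here is computed with Smalian's rule, a standard scaling formula chosen for illustration (the paper does not specify a volume rule), and all numbers are hypothetical:

    ```python
    import math

    def density_index(total_weight_lb, total_volume_ft3):
        """Local density index (lb per cubic foot) from weighed, measured
        truckloads of one species."""
        return total_weight_lb / total_volume_ft3

    def smalian_volume_ft3(d_small_in, d_large_in, length_ft):
        """Smalian's rule: log volume as the mean of the two end areas
        times length (end diameters in inches, length in feet)."""
        a_small = math.pi * (d_small_in / 24.0) ** 2   # d/24 = radius in ft
        a_large = math.pi * (d_large_in / 24.0) ** 2
        return 0.5 * (a_small + a_large) * length_ft

    idx = density_index(48000.0, 960.0)        # 50 lb/ft^3 for this species
    est_weight = idx * smalian_volume_ft3(12.0, 16.0, 32.0)
    ```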

  2. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  3. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises
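    A probabilistic cost risk analysis of the kind mentioned above is commonly run as a Monte Carlo simulation over per-activity cost ranges. The activities, triangular ranges, and percentiles below are hypothetical, not from the paper:

    ```python
    import random

    random.seed(42)

    # Hypothetical activities with (low, most likely, high) costs in $M.
    activities = {
        "decontamination": (4.0, 6.0, 10.0),
        "dismantling":     (8.0, 11.0, 18.0),
        "waste disposal":  (5.0, 7.0, 12.0),
    }

    def simulate_total():
        # random.triangular takes (low, high, mode)
        return sum(random.triangular(low, high, mode)
                   for low, mode, high in activities.values())

    totals = sorted(simulate_total() for _ in range(20000))
    p50 = totals[len(totals) // 2]        # median project cost
    p80 = totals[int(len(totals) * 0.8)]  # 80th-percentile contingency level
    ```

    Funding to the P80 level rather than the median is one common way such an analysis feeds decommissioning fund provisioning.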

  4. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance, the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  5. Introduction to quantum-state estimation

    CERN Document Server

    Teo, Yong Siah

    2016-01-01

    Quantum-state estimation is an important field in quantum information theory that deals with the characterization of states of affairs for quantum sources. This book begins with background formalism in estimation theory to establish the necessary prerequisites. This basic understanding allows us to explore popular likelihood- and entropy-related estimation schemes that are suitable for an introductory survey on the subject. Discussions on practical aspects of quantum-state estimation ensue, with emphasis on the evaluation of tomographic performances for estimation schemes, experimental realizations of quantum measurements and detection of single-mode multi-photon sources. Finally, the concepts of phase-space distribution functions, which compatibly describe these multi-photon sources, are introduced to bridge the gap between discrete and continuous quantum degrees of freedom. This book is intended to serve as an instructive and self-contained medium for advanced undergraduate and postgraduate students to gra...

  6. Modified Weighted Kaplan-Meier Estimator

    Directory of Open Access Journals (Sweden)

    Mohammad Shafiq

    2007-01-01

    Full Text Available In many medical studies, the majority of the study subjects do not reach the event of interest during the study period. In such situations, survival probabilities can be estimated for censored observations by the Kaplan-Meier estimator. However, in the case of heavy censoring these estimates are biased and overestimate the survival probabilities. For heavy censoring, a new method was proposed (Bahrawar Jan, 2005) to estimate the survival probabilities by weighting the censored observations by the non-censoring rate. But the main defect of this weighted method is that it gives zero weight to the last censored observation. To overcome this difficulty, a new weight is proposed which also gives a non-zero weight to the last censored observation.
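    For reference, the unweighted Kaplan-Meier estimator that these weighting schemes modify can be sketched as follows; the curve drops only at event times, which is why heavy censoring leaves it optimistic. The toy data are invented, and the proposed modified weights themselves are not reproduced here:

    ```python
    def kaplan_meier(times, events):
        """Standard Kaplan-Meier curve; events[i] is 1 for an observed event
        at times[i] and 0 for a censored observation. At tied times, events
        are processed before censorings, as is conventional."""
        data = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
        n, s, curve = len(data), 1.0, []
        for i, (t, e) in enumerate(data):
            if e:                        # the curve drops only at event times
                s *= 1.0 - 1.0 / (n - i)
            curve.append((t, s))
        return curve

    # Five subjects; the observations at t=3 and t=8 are censored.
    curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
    ```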

  7. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi

    2017-04-12

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  8. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for direction-of-arrival estimation (DOAE) for a target, which aims to increase estimation accuracy and decrease calculation cost, has been developed. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation; the DOAE accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.

  9. Another look at the Grubbs estimators

    KAUST Repository

    Lombard, F.

    2012-01-01

    We consider estimation of the precision of a measuring instrument without the benefit of replicate observations on heterogeneous sampling units. Grubbs (1948) proposed an estimator which involves the use of a second measuring instrument, resulting in a pair of observations on each sampling unit. Since the precisions of the two measuring instruments are generally different, these observations cannot be treated as replicates. Very large sample sizes are often required if the standard error of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit variance to instrument variance is available. Our results are presented in the context of the evaluation of on-line analyzers. A data set from an analyzer evaluation is used to illustrate the methodology. © 2011 Elsevier B.V.
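    The classical two-instrument decomposition behind the Grubbs estimators is short enough to sketch: with paired readings x_i = u_i + e_i and y_i = u_i + f_i, the unit variance is estimated by cov(x, y) and each instrument's error variance by its own variance minus that covariance. The simulated analyzer data below, including the true variances, are invented for illustration:

    ```python
    import random

    def grubbs(x, y):
        """Two-instrument Grubbs estimators: returns estimates of the
        sampling-unit variance and the two instrument error variances."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
        sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
        syy = sum((b - my) ** 2 for b in y) / (n - 1)
        return sxy, sxx - sxy, syy - sxy

    # Simulated analyzer pair: unit std 3, instrument error stds 1.0 and 0.5.
    random.seed(1)
    u = [random.gauss(0.0, 3.0) for _ in range(4000)]
    x = [ui + random.gauss(0.0, 1.0) for ui in u]
    y = [ui + random.gauss(0.0, 0.5) for ui in u]
    var_unit, var_err_x, var_err_y = grubbs(x, y)
    ```

    Even with 4000 pairs the error-variance estimates are noisy, which illustrates the paper's point that very large samples are needed unless preliminary information about the variance ratio is brought in.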

  10. Self-estimates of attention performance

    Directory of Open Access Journals (Sweden)

    CHRISTOPH MENGELKAMP

    2007-09-01

    Full Text Available In research on self-estimated IQ, gender differences are often found. The present study investigates whether these findings are true for self-estimation of attention, too. A sample of 100 female and 34 male students were asked to fill in the test of attention d2. After taking the test, the students estimated their results in comparison to their fellow students. The results show that the students underestimate their percent rank compared with the actual percent rank they achieved in the test, but estimate their rank order fairly accurately. Moreover, males estimate their performance distinctly higher than females do. This last result remains true even when the real test score is statistically controlled. The results are discussed with regard to research on positive illusions and gender stereotypes.

  11. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.

  12. Nonparametric Collective Spectral Density Estimation and Clustering

    KAUST Repository

    Maadooliat, Mehdi; Sun, Ying; Chen, Tianbo

    2017-01-01

    In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at

  13. COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS

    Directory of Open Access Journals (Sweden)

    Ricardo de Melo e Silva Accioly

    2017-03-01

Cost estimating relationships (CERs) are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before fitting a model. In the early phases they are the primary tool for cost estimating. In later phases they are usually used as an estimation validation tool and sometimes for benchmarking purposes. As in any other modeling methodology, there are a number of important steps in building a model. In this paper, the process of building a CER to estimate the drilling cost of onshore wells is addressed.
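A minimal sketch of such a CER, assuming a hypothetical (already normalized) depth-versus-cost data set and a simple power-law form fitted by log-log regression; the numbers are invented for illustration only:

```python
import numpy as np

# Hypothetical normalized historical data: well depth (m) vs. drilling cost (k$).
depth = np.array([800, 1200, 1500, 2000, 2600, 3100, 3500], float)
cost = np.array([950, 1500, 1900, 2700, 3800, 4600, 5400], float)

# Power-law CER, cost = a * depth^b, fitted as a linear regression in logs.
X = np.column_stack([np.ones(len(depth)), np.log(depth)])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
a, b = np.exp(beta[0]), beta[1]

def cer(d):
    """Estimate drilling cost (k$) for a well of depth d metres."""
    return a * d ** b

estimate = cer(1800.0)
```

A log-log fit keeps the multiplicative error structure typical of cost data; real CER development would also report fit statistics and prediction intervals.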

  14. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results show that the CS based channel estimation achieves better performance than LS. However, MMSE achieves optimal performance because of its prior knowledge of the channel statistics. PMID:24757439
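The pilot-aided LS idea can be sketched as follows. Note this is a generic DFT-grid illustration rather than the paper's DCT formulation, and the channel taps, pilot spacing, and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                        # subcarriers
pilot_idx = np.unique(np.r_[0:N:8, N - 1])    # comb-type pilots incl. last bin
pilots = np.ones(len(pilot_idx), complex)     # known pilot symbols

# Frequency-selective channel: frequency response of a short impulse response.
h = np.array([1.0, 0.5, 0.25])
H = np.fft.fft(h, N)

# Received pilot subcarriers with additive noise.
noise = 0.01 * (rng.standard_normal(len(pilot_idx))
                + 1j * rng.standard_normal(len(pilot_idx)))
Y = H[pilot_idx] * pilots + noise

# Least-squares estimate at the pilot positions, then linear interpolation
# (real and imaginary parts separately) to all subcarriers.
H_ls = Y / pilots
k = np.arange(N)
H_hat = (np.interp(k, pilot_idx, H_ls.real)
         + 1j * np.interp(k, pilot_idx, H_ls.imag))

mse = np.mean(np.abs(H_hat - H) ** 2)
```

MMSE and CS estimators would replace the interpolation step with, respectively, a statistically weighted smoother and a sparse recovery of the taps `h`.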

  15. Sparse DOA estimation with polynomial rooting

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren

    2015-01-01

Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods.

  16. Contractor-style tunnel cost estimating

    International Nuclear Information System (INIS)

    Scapuzzi, D.

    1990-06-01

Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate will give a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant, and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase of work. The durations are multiplied by the cost of labor and equipment per unit of time and, along with the costs for materials and supplies, combine to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs

  17. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects, which, being of on-demand type, directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed.
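One common SRGM consistent with the non-homogeneous Poisson assumption is the Goel-Okumoto model, whose mean value function is m(t) = a(1 − e^(−bt)). The sketch below fits it to hypothetical weekly defect counts by a crude grid search rather than the paper's Bayesian inference:

```python
import numpy as np

# Hypothetical cumulative defect counts after each week of testing.
t = np.arange(1, 11, dtype=float)
m_obs = np.array([8, 15, 20, 24, 27, 29, 31, 32, 33, 33.5])

def m(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative defects.
    a = total expected defects, b = per-defect detection rate."""
    return a * (1.0 - np.exp(-b * t))

# Crude least-squares grid search in place of the paper's Bayesian inference.
best = None
for a in np.linspace(30, 60, 121):
    for b in np.linspace(0.05, 0.8, 151):
        sse = np.sum((m(t, a, b) - m_obs) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, b)
_, a_hat, b_hat = best

# Estimated defects still latent in the code: total minus those found so far.
remaining = a_hat - m_obs[-1]
```

The "remaining defects" quantity is exactly the kind of on-demand-failure figure the abstract describes; a Bayesian treatment would put priors on (a, b) and report a posterior distribution instead of a point estimate.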

  18. Spectral Velocity Estimation in the Transverse Direction

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

A method for estimating the velocity spectrum for a fully transverse flow at a beam-to-flow angle of 90° is described. The approach is based on the transverse oscillation (TO) method, where an oscillation across the ultrasound beam is made during receive processing. A fourth-order estimator based on the correlation of the received signal is derived. A Fourier transform of the correlation signal yields the velocity spectrum. Performing the estimation for short data segments gives the velocity spectrum as a function of time as for ordinary spectrograms, and it also works for a beam-to-flow angle of 90°. The estimation scheme can reliably find the spectrum at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with the SARUS experimental scanner and a BK 8820e convex array transducer (BK Medical, Herlev, Denmark). A CompuFlow 1000 (Shelley Automation, Inc., Toronto, Canada) ...

  19. Solar constant values for estimating solar radiation

    International Nuclear Information System (INIS)

    Li, Huashan; Lian, Yongwang; Wang, Xianlong; Ma, Weibin; Zhao, Liang

    2011-01-01

There are many solar constant values given and adopted by researchers, leading to confusion in estimating solar radiation. In this study, some solar constant values collected from the literature for estimating solar radiation with the Angstroem-Prescott correlation are tested in China using the measured data between 1971 and 2000. According to the ranking method based on the t-statistic, a strategy to select the best solar constant value for estimating the monthly average daily global solar radiation with the Angstroem-Prescott correlation is proposed. Research highlights: the effect of the solar constant on estimating solar radiation is investigated; the investigation covers a diverse range of climate and geography in China; a strategy to select the best solar constant for estimating radiation is proposed.

  20. Parameter estimation for an expanding universe

    Directory of Open Access Journals (Sweden)

    Jieci Wang

    2015-03-01

We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the ratio estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.

  1. Assessing the performance of dynamical trajectory estimates

    Science.gov (United States)

    Bröcker, Jochen

    2014-06-01

Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.

  2. Minimum Distance Estimation on Time Series Analysis With Little Data

    National Research Council Canada - National Science Library

    Tekin, Hakan

    2001-01-01

... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets ...

  3. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA

    OpenAIRE

    Huang, G; Nix, AR; Armour, SMD

    2010-01-01

Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...

  4. 2015-2016 Palila abundance estimates

    Science.gov (United States)

    Camp, Richard J.; Brinck, Kevin W.; Banko, Paul C.

    2016-01-01

    The palila (Loxioides bailleui) population was surveyed annually during 1998−2016 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2015 population was estimated at 852−1,406 birds (point estimate: 1,116) and the 2016 population was estimated at 1,494−2,385 (point estimate: 1,934). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2016; the proportion of the total annual detections in each count ranged from 46% to 56%; and there was no difference in the detection probability due to count sequence. Furthermore, conducting repeat counts improved the abundance estimates by reducing the width of the confidence intervals between 9% and 32% annually. This suggests that multiple counts do not affect bird or observer behavior and can be continued in the future to improve the precision of abundance estimates. Five palila were detected on supplemental survey stations in the Ka‘ohe restoration area, outside the core survey area but still within Palila Critical Habitat (one in 2015 and four in 2016), suggesting that palila are present in habitat that is recovering from cattle grazing on the southwest slope. The average rate of decline during 1998−2016 was 150 birds per year. Over the 18-year monitoring period, the estimated rate of change equated to a 58% decline in the population.

  5. Procedure for estimating permanent total enclosure costs

    Energy Technology Data Exchange (ETDEWEB)

    Lukey, M E; Prasad, C; Toothman, D A; Kaplan, N

    1999-07-01

Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA by Pacific Environmental Services, Inc. (PES), and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.

  6. Quantifying Uncertainty in Soil Volume Estimates

    International Nuclear Information System (INIS)

    Roos, A.D.; Hays, D.C.; Johnson, R.L.; Durham, L.A.; Winters, M.

    2009-01-01

Proper planning and design for remediating contaminated environmental media require an adequate understanding of the types of contaminants and the lateral and vertical extent of contamination. In the case of contaminated soils, this generally takes the form of volume estimates that are prepared as part of a Feasibility Study for Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites and/or as part of the remedial design. These estimates are typically single values representing what is believed to be the most likely volume of contaminated soil present at the site. These single-value estimates, however, do not convey the level of confidence associated with the estimates. Unfortunately, the experience has been that pre-remediation soil volume estimates often significantly underestimate the actual volume of contaminated soils that are encountered during the course of remediation. This underestimation has significant implications, both technically (e.g., inappropriate remedial designs) and programmatically (e.g., establishing technically defensible budget and schedule baselines). Argonne National Laboratory (Argonne) has developed a joint Bayesian/geostatistical methodology for estimating contaminated soil volumes based on sampling results, which also provides upper and lower probabilistic bounds on those volumes. This paper evaluates the performance of this method in a retrospective study that compares volume estimates derived using this technique with actual excavated soil volumes for select Formerly Utilized Sites Remedial Action Program (FUSRAP) Maywood properties that have completed remedial action by the U.S. Army Corps of Engineers (USACE) New York District. (authors)
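The idea of reporting probabilistic bounds rather than a single value can be illustrated with a toy Monte Carlo, assuming hypothetical per-cell contamination probabilities in place of the paper's Bayesian/geostatistical machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical grid of 100 cells; each cell has a probability of being
# contaminated (inferred from sampling) and a fixed cell volume in cubic metres.
p_contam = rng.uniform(0.05, 0.9, size=100)
cell_volume = 25.0

# Monte Carlo over the contamination indicators: each draw yields one
# plausible total contaminated volume, instead of a single-value estimate.
draws = rng.random((5000, 100)) < p_contam
volumes = draws.sum(axis=1) * cell_volume

point = volumes.mean()                      # central estimate
lo, hi = np.percentile(volumes, [5, 95])    # probabilistic bounds
```

Reporting (lo, point, hi) conveys the confidence information that a single volume number omits, which is the underestimation risk the abstract highlights.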

  7. Observer-Based Human Knee Stiffness Estimation.

    Science.gov (United States)

    Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen

    2017-05-01

We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and in a test bench, specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in the cases of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.

  8. Parametric cost estimation for space science missions

    Science.gov (United States)

    Lillie, Charles F.; Thompson, Bruce E.

    2008-07-01

Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.

  9. Hydrogen Station Cost Estimates: Comparing Hydrogen Station Cost Calculator Results with other Recent Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Penev, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-09-01

    This report compares hydrogen station cost estimates conveyed by expert stakeholders through the Hydrogen Station Cost Calculation (HSCC) to a select number of other cost estimates. These other cost estimates include projections based upon cost models and costs associated with recently funded stations.

  10. 7 CFR 1435.301 - Annual estimates and quarterly re-estimates.

    Science.gov (United States)

    2010-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing..., estimates, and re-estimates in this subpart will use available USDA statistics and estimates of production, consumption, and stocks, taking into account, where appropriate, data supplied in reports submitted pursuant...

  11. 16 CFR 305.5 - Determinations of estimated annual energy consumption, estimated annual operating cost, and...

    Science.gov (United States)

    2010-01-01

    ... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND... § 305.5 Determinations of estimated annual energy consumption, estimated annual operating cost, and...

  12. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

This paper shows how to perform cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate ...
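The two-dimensional factorization idea can be illustrated on a toy relation: approximating P(a, b, c) by P(a, b) · P(c | b), which is exact when A and C are conditionally independent given B. The table and predicate below are invented:

```python
from collections import Counter

# Toy table with attributes (A, B, C). Estimate the selectivity of a
# conjunctive predicate without the independence assumption, by chaining
# two-dimensional distributions: P(a, b, c) ~= P(a, b) * P(b, c) / P(b).
rows = [
    ("red", "S", 1), ("red", "S", 1), ("red", "M", 0),
    ("blue", "M", 0), ("blue", "M", 1), ("blue", "L", 0),
    ("red", "L", 1), ("blue", "S", 0),
]
n = len(rows)
pab = Counter((a, b) for a, b, c in rows)   # two-dimensional distribution (A, B)
pbc = Counter((b, c) for a, b, c in rows)   # two-dimensional distribution (B, C)
pb = Counter(b for a, b, c in rows)         # shared marginal over B

def selectivity(a, b, c):
    """Chained estimate P(a, b) * P(c | b) using only pairwise statistics."""
    if pb[b] == 0:
        return 0.0
    return (pab[(a, b)] / n) * (pbc[(b, c)] / pb[b])

est = selectivity("red", "S", 1)
true = sum(1 for r in rows if r == ("red", "S", 1)) / n
```

Only pairwise (two-attribute) statistics are stored, as in the abstract; the estimate differs from the true selectivity exactly to the extent that the conditional-independence structure is violated.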

  13. Development of realtime cognitive state estimator

    International Nuclear Information System (INIS)

Takahashi, Makoto; Kitamura, Masashi; Yoshikawa, Hidekazu

    2004-01-01

The real-time cognitive state estimator based on a set of physiological measures has been developed in order to provide valuable information on human behavior during interaction through the Man-Machine Interface. An artificial neural network has been adopted to categorize the cognitive states by using the qualitative physiological data pattern as the inputs. Laboratory experiments, in which the subjects' cognitive states were intentionally controlled by the task presented, were performed to obtain training data sets for the neural network. The developed system has been shown to be capable of estimating cognitive states with higher accuracy, and real-time estimation capability has also been confirmed through the data processing experiments. (author)

  14. The MIRD method of estimating absorbed dose

    International Nuclear Information System (INIS)

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine
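The core MIRD equation, dose to a target organ as a sum over source organs of cumulated activity times the corresponding S value, can be written out directly; all activities and S values below are hypothetical placeholders, not real dosimetry data:

```python
# Minimal MIRD-schema sketch: absorbed dose to a target organ is the sum,
# over source organs, of cumulated activity A~ times the S value.
# All numbers below are hypothetical, not real radiopharmaceutical data.

A_cum = {                 # cumulated activity A~ per source organ (MBq*h)
    "liver": 120.0,
    "kidneys": 45.0,
    "remainder": 300.0,
}
S = {                     # S values (mGy per MBq*h) for target organ "liver"
    ("liver", "liver"): 3.2e-3,
    ("liver", "kidneys"): 1.1e-4,
    ("liver", "remainder"): 2.0e-5,
}

def absorbed_dose(target, A_cum, S):
    """MIRD schema: D(target) = sum over sources of A~(source) * S(target<-source)."""
    return sum(A_cum[src] * S[(target, src)] for src in A_cum)

dose_liver = absorbed_dose("liver", A_cum, S)   # mGy
```

The biologic distribution and clearance data mentioned in the abstract enter through the cumulated activities, while the radionuclide's physical properties and the anatomy enter through the S values.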

  15. Implementing Estimation of Capacity for Freeway Sections

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2011-01-01

Based on the stochastic concept of freeway capacity, a procedure for capacity estimation is developed. Because the value of the capacity cannot be observed directly and its probability distribution is unknown, the product-limit method is used in this paper to estimate the capacity. In order to implement capacity estimation using this technique, a lifetime table based on statistical methods for lifetime data analysis is introduced and the corresponding procedure is developed. Simulated data based on freeway sections in Beijing, China, were analyzed, and the results indicate that the methodology and procedure are applicable and valid.
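The product-limit (Kaplan-Meier) step can be sketched directly: breakdown observations act as "failures" and non-breakdown flows as censored observations. The flow values below are hypothetical:

```python
# Product-limit (Kaplan-Meier) sketch for freeway capacity: a flow
# observation is a "failure" when traffic broke down at that flow and
# censored otherwise. All volumes below are hypothetical (veh/h).
obs = [
    (1850, False), (1900, True), (1950, False), (2000, True),
    (2050, True), (2100, False), (2150, True), (2200, True),
]

def product_limit(obs):
    """Return [(flow, estimated survival = P(capacity > flow))]."""
    events = sorted(obs)          # ascending flow; all flows distinct here
    at_risk = len(events)
    surv, out = 1.0, []
    for q, breakdown in events:
        if breakdown:
            surv *= (at_risk - 1) / at_risk   # product-limit update
        out.append((q, surv))
        at_risk -= 1
    return out

curve = product_limit(obs)
```

The resulting step function is the capacity distribution the abstract's lifetime table encodes: the estimated probability that the section can sustain more than each observed flow.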

  16. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator
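For the QL approach, the estimating function is built from only the first two conditional moments and coincides with the Gaussian quasi-log-likelihood. A dependency-free sketch for an ARCH(1) model, using a grid search instead of a proper optimizer (the simulated series and grids are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an ARCH(1) process: y_t = sigma_t * e_t, sigma_t^2 = w + a * y_{t-1}^2.
w_true, a_true, n = 0.2, 0.5, 4000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = w_true + a_true * y[t - 1] ** 2
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

def neg_ql(w, a, y):
    """Negative Gaussian quasi log-likelihood: uses only the conditional
    mean (zero) and variance, i.e. the QL idea of relaxed distributional
    assumptions."""
    s2 = w + a * y[:-1] ** 2
    return np.sum(np.log(s2) + y[1:] ** 2 / s2)

# Grid search in place of a numerical optimizer, to keep the sketch simple.
grid_w = np.linspace(0.05, 0.5, 46)
grid_a = np.linspace(0.1, 0.9, 81)
w_hat, a_hat = min(((w, a) for w in grid_w for a in grid_a),
                   key=lambda p: neg_ql(p[0], p[1], y))
```

The AQL variant would additionally replace the parametric conditional variance by a kernel estimate when its form is unknown.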

  17. Software Estimation Demystifying the Black Art

    CERN Document Server

    McConnell, Steve

    2009-01-01

Often referred to as the "black art" because of its complexity and uncertainty, software estimation is not as difficult or puzzling as people think. In fact, generating accurate estimates is straightforward once you understand the art of creating them. In his highly anticipated book, acclaimed author Steve McConnell unravels the mystery of successful software estimation, distilling academic information and real-world experience into a practical guide for working software professionals. Instead of arcane treatises and rigid modeling techniques, this guide highlights a proven set of procedures,

  18. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

This seminar presentation describes amplitude models and yield estimation approaches that examine the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  19. Solar radiation estimation based on the insolation

    International Nuclear Information System (INIS)

    Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.

    1998-01-01

A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA's (1997) model to estimate the potential of radiation based on the instantaneous values measured at solar noon. The model also allows estimating the parameters of PRESCOTT's equation (1940) assuming a = 0.29 cos φ. The results demonstrated the model's validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of the radiation estimation formulas based on insolation, using K = Ko(0.29 cos φ + 0.50 n/N), was analysed and confirmed.
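The estimate in the abstract is direct to compute. A small sketch with the a = 0.29 cos φ, b = 0.50 parameterization; the station latitude, day length, and extraterrestrial radiation value are invented:

```python
import math

def global_radiation(K0, n, N, lat_deg):
    """Angstroem-Prescott estimate K = K0 * (a + b * n/N) with the
    a = 0.29 * cos(latitude), b = 0.50 parameterization from the abstract.
    K0: extraterrestrial radiation, n: sunshine hours, N: day length."""
    a = 0.29 * math.cos(math.radians(lat_deg))
    b = 0.50
    return K0 * (a + b * n / N)

# Hypothetical station: K0 = 30 MJ/m2/day, 8 h of sunshine out of a 12 h
# day length, at latitude 31 degrees south.
K = global_radiation(30.0, 8.0, 12.0, -31.0)
```

Only sunshine duration and latitude are needed, which is what makes insolation-based formulas attractive where pyranometer data are scarce.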

  20. Estimating the costs of human space exploration

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1994-01-01

The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved in all of these themes, and they depend on the engineering cost estimator for success. The purpose of this paper is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.

  1. Adaptive measurement selection for progressive damage estimation

    Science.gov (United States)

    Zhou, Wenfan; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Chattopadhyay, Aditi; Peralta, Pedro

    2011-04-01

Noise and interference in sensor measurements degrade the quality of data and have a negative impact on the performance of structural damage diagnosis systems. In this paper, a novel adaptive measurement screening approach is presented to automatically select the most informative measurements and use them intelligently for structural damage estimation. The method is implemented efficiently in a sequential Monte Carlo (SMC) setting using particle filtering. The noise suppression and improved damage estimation capability of the proposed method are demonstrated by an application to the problem of estimating progressive fatigue damage in an aluminum compact-tension (CT) sample using noisy PZT sensor measurements.
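The SMC/particle-filtering machinery can be illustrated with a minimal bootstrap filter on a toy scalar damage model; the growth rate, noise levels, and particle count are arbitrary assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal bootstrap particle filter (SMC) for a scalar damage state that
# grows slowly and is observed through a noisy sensor; a toy stand-in for
# the paper's fatigue-damage setting.
T, Np = 50, 500
true = np.cumsum(0.1 + 0.02 * rng.standard_normal(T))      # latent damage path
obs = true + 0.3 * rng.standard_normal(T)                  # noisy measurements

particles = np.zeros(Np)
estimates = []
for z in obs:
    # Propagate particles through the damage growth model.
    particles = particles + 0.1 + 0.05 * rng.standard_normal(Np)
    # Weight by the measurement likelihood (Gaussian noise model).
    w = np.exp(-0.5 * ((z - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))                # posterior mean
    # Multinomial resampling to avoid weight degeneracy.
    particles = particles[rng.choice(Np, size=Np, p=w)]

rmse = np.sqrt(np.mean((np.array(estimates) - true) ** 2))
```

The screening approach in the abstract would additionally down-weight or discard measurements judged uninformative before the weighting step.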

  2. Psychological methods of subjective risk estimates

    International Nuclear Information System (INIS)

    Zimolong, B.

    1980-01-01

Reactions to situations involving risks can be divided into the following parts: perception of danger, subjective estimates of the risk, and risk taking with respect to action. Several investigations have compared subjective estimates of the risk with an objective measure of that risk. In general there was a mismatch between subjective and objective measures of risk; especially, the objective risk involved in routine activities is most commonly underestimated. This implies, for accident prevention, that attempts must be made to induce accurate subjective risk estimates by technical and behavioural measures. (orig.)

  3. Estimating Hedonic Price Indices for Ground Vehicles

    Science.gov (United States)

    2015-06-01

Estimating Hedonic Price Indices for Ground Vehicles (Presentation). Institute for Defense Analyses, David M. Tate, Stanley ... Report date: June 2015.

  4. Angle of arrival estimation using spectral interferometry

    International Nuclear Information System (INIS)

    Barber, Z.W.; Harrington, C.; Thiel, C.W.; Babbitt, W.R.; Krishna Mohan, R.

    2010-01-01

    We have developed a correlative signal processing concept based on a Mach-Zehnder interferometer and spatial-spectral (S2) materials that enables direct mapping of RF spectral phase as well as power spectral recording. This configuration can be used for precise frequency resolved time delay estimation between signals received by a phased antenna array system that in turn could be utilized to estimate the angle of arrival. We present an analytical theoretical model and a proof-of-principle demonstration of the concept of time difference of arrival estimation with a cryogenically cooled Tm:YAG crystal that operates on microwave signals modulated onto a stabilized optical carrier at 793 nm.

  5. Angle of arrival estimation using spectral interferometry

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Z.W.; Harrington, C.; Thiel, C.W.; Babbitt, W.R. [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States); Krishna Mohan, R., E-mail: krishna@spectrum.montana.ed [Spectrum Lab, Montana State University, Bozeman, MT 59717 (United States)

    2010-09-15

    We have developed a correlative signal processing concept based on a Mach-Zehnder interferometer and spatial-spectral (S2) materials that enables direct mapping of RF spectral phase as well as power spectral recording. This configuration can be used for precise frequency resolved time delay estimation between signals received by a phased antenna array system that in turn could be utilized to estimate the angle of arrival. We present an analytical theoretical model and a proof-of-principle demonstration of the concept of time difference of arrival estimation with a cryogenically cooled Tm:YAG crystal that operates on microwave signals modulated onto a stabilized optical carrier at 793 nm.

  6. State Energy Data Report, 1991: Consumption estimates

    International Nuclear Information System (INIS)

    1993-05-01

    The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining SEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. SEDS exists for two principal reasons: (1) to provide State energy consumption estimates to the Government, policy makers, and the public; and (2) to provide the historical series necessary for EIA's energy models

  7. Preliminary cost estimating for the nuclear industry

    International Nuclear Information System (INIS)

    Klumpar, I.V.; Soltz, K.M.

    1985-01-01

    The nuclear industry has higher costs for personnel, equipment, construction, and engineering than conventional industry, which means that cost estimation procedures may need adjustment. The authors account for the special technical and labor requirements of the nuclear industry in making adjustments to equipment and installation cost estimations. Using illustrative examples, they show that conventional methods of preliminary cost estimation are flexible enough for application to emerging industries if their cost structure is similar to that of the process industries. If not, modifications can provide enough engineering and cost data for a statistical analysis. 9 references, 14 figures, 4 tables

  8. Cost Estimation and Control for Flight Systems

    Science.gov (United States)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  9. State energy data report 1995 - consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-01

    The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sectors. The estimates are developed in the State Energy Data System (SEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining SEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. SEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public; and (2) to provide the historical series necessary for EIA's energy models.

  10. State energy data report 1993: Consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining SEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. SEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public; and (2) to provide the historical series necessary for EIA's energy models.

  11. Estimation of quasi-critical reactivity

    International Nuclear Information System (INIS)

    Racz, A.

    1992-02-01

    The bank-of-Kalman-filters method for reactivity and neutron density estimation, originally suggested by D'Attellis and Cortina, is critically reviewed. It is pointed out that the procedure cannot be applied reliably in the form the authors proposed, due to filter divergence. An improved method, which is free from divergence problems, is presented as well. A new estimation technique is proposed and tested using computer simulation results. The procedure is applied to the estimation of small reactivity changes. (R.P.) 9 refs.; 2 figs.; 2 tabs

  12. Estimate of $B(K\\to\\pi\

    CERN Document Server

    Kettell, S H; Nguyen, H

    2003-01-01

    We estimate $B(\\kzpnn)$ in the context of the Standard Model by fitting for \\lamt $\\equiv V_{td}V^*_{ts}$ of the `kaon unitarity triangle' relation. We fit data from \\ek, the CP-violating parameter describing $K$-mixing, and \\apsiks, the CP-violating asymmetry in \\bpsiks decays. Our estimate is independent of the CKM matrix element \\vcb and of the ratio of B-mixing frequencies \\bsbd. The measured value of $B(\\kpnn)$ can be compared both to this estimate and to predictions made from \\bsbd.

  13. Nuclear shipping and waste disposal cost estimates

    International Nuclear Information System (INIS)

    Hudson, C.R. II.

    1977-11-01

    Cost estimates for the shipping of spent fuel from the reactor, shipping of waste from the reprocessing plant, and disposal of reprocessing plant wastes have been made for five reactor types. The reactors considered are the light-water reactor (LWR), the mixed-oxide-fueled light-water reactor (MOX), the Canadian deuterium-uranium reactor (CANDU), the fast breeder reactor (FBR), and the high-temperature gas-cooled reactor (HTGR). In addition to the cost estimates, this report provides details on the bases and assumptions used to develop the cost estimates

  14. Online wave estimation using vessel motion measurements

    DEFF Research Database (Denmark)

    H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir

    2018-01-01

    In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel parameters and motion transfer functions are required as input. Apart from this the method is signal-based, with no assumptions on the wave spectrum shape, and as a result it is computationally efficient. The algorithm is implemented in a dynamic positioning (DP) control system, and tested through simulations...

  15. Methodology for generating waste volume estimates

    International Nuclear Information System (INIS)

    Miller, J.Q.; Hale, T.; Miller, D.

    1991-09-01

    This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs

  16. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  17. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
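
The classical Hill estimator that the proposed bootstrap method builds on has a simple closed form; a minimal sketch on synthetic heavy-tailed data (the Monte Carlo refinement itself is not reproduced here):

```python
import math
import random

def hill_estimator(sample, k):
    # Classical Hill estimator of the tail index alpha from the k
    # largest order statistics of a positive sample.
    xs = sorted(sample, reverse=True)   # descending order statistics
    h = sum(math.log(x) for x in xs[:k]) / k - math.log(xs[k])
    return 1.0 / h

# Hypothetical heavy-tailed data: Pareto sample with true tail index
# alpha = 2, drawn by inverse-CDF sampling.
random.seed(0)
data = [(1.0 - random.random()) ** (-1.0 / 2.0) for _ in range(5000)]
print(hill_estimator(data, k=200))  # close to the true alpha = 2
```

The choice of `k` is the usual bias-variance trade-off for the Hill estimator, which is one motivation for resampling-based versions such as the one proposed in the record.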

  18. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
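
The three catch-rate estimators compared in the record reduce to simple formulas; a sketch with hypothetical interview data (not the study's simulated populations):

```python
def rom(catches, hours):
    # Ratio-of-means (ROM): total catch divided by total effort
    # pooled across all interviewed anglers.
    return sum(catches) / sum(hours)

def mor(catches, hours, min_hours=0.0):
    # Mean-of-ratios (MOR): average of each angler's individual catch
    # rate; setting min_hours=0.5 reproduces the variant that excludes
    # short-duration (<= 0.5 h) trips.
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)

# Hypothetical creel-survey interviews: (fish caught, hours fished)
catches = [2, 0, 5, 1]
hours = [4.0, 0.5, 5.0, 2.0]
print(rom(catches, hours))                  # 8 / 11.5 ≈ 0.696
print(mor(catches, hours))                  # mean of [0.5, 0.0, 1.0, 0.5] = 0.5
print(mor(catches, hours, min_hours=0.5))   # short trip excluded
```

Total catch is then the chosen rate estimate multiplied by an independent estimate of total angler effort, which is the combination the authors found least biased.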

  19. State Alcohol-Impaired-Driving Estimates

    Science.gov (United States)

    ... 2012 Data DOT HS 812 017 May 2014 State Alcohol-Impaired-Driving Estimates This fact sheet contains ... alcohol involvement in fatal crashes for the United States and individually for the 50 States, the District ...

  20. Wave Velocity Estimation in Heterogeneous Media

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, modulating functions-based method is proposed for estimating space-time dependent unknown velocity in the wave equation. The proposed method simplifies the identification problem into a system of linear algebraic equations. Numerical

  1. On state estimation in electric drives

    International Nuclear Information System (INIS)

    Leon, A.E.; Solsona, J.A.

    2010-01-01

    This paper deals with state estimation in electric drives. On one hand a nonlinear observer is designed, whereas on the other hand the speed state is estimated by using the dirty derivative of the measured position. The dirty derivative is an approximate version of the perfect derivative which introduces an estimation error that has seldom been analyzed in drive applications. For this reason, our proposal in this work consists of illustrating several aspects of the performance of the dirty derivator in the presence of both model uncertainties and noisy measurements. To this end, a case study is introduced. The case study considers rotor speed estimation in a permanent magnet stepper motor, by assuming that rotor position and electrical variables are measured. In addition, this paper presents comments about the connection between dirty derivators and observers, and the advantages and disadvantages of both techniques are also remarked upon.
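
A common discrete-time realization of the dirty derivative described above is a first-order filtered difference; the sketch below assumes a backward-Euler discretization of the transfer function s/(τs + 1) and uses illustrative sampling values:

```python
def dirty_derivative(samples, dt, tau):
    # First-order filtered ("dirty") derivative s/(tau*s + 1),
    # discretized with a backward-Euler step. Only position samples
    # are needed; no perfect differentiation is performed.
    a = tau / (tau + dt)       # pole of the discrete filter
    b = 1.0 / (tau + dt)       # gain on the raw difference
    d, out = 0.0, []
    prev = samples[0]
    for x in samples:
        d = a * d + b * (x - prev)
        prev = x
        out.append(d)
    return out

# Hypothetical example: estimate the slope of a ramp x(t) = 3t
# sampled at 100 Hz; the estimate converges to the true speed 3.0.
dt = 0.01
ramp = [3.0 * k * dt for k in range(200)]
est = dirty_derivative(ramp, dt, tau=0.05)
print(round(est[-1], 3))  # ≈ 3.0, the true slope
```

The time constant `tau` trades noise attenuation against lag, which is exactly the estimation-error trade-off the record discusses.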

  2. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
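
The density ratio being estimated can be illustrated with a naive plug-in baseline that fits each density separately and divides; KLIEP-style methods avoid exactly this by modeling the ratio directly. All data and parameters below are synthetic illustrations, not the paper's method:

```python
import math
import random

def fit_gaussian(xs):
    # Maximum-likelihood fit of a 1-D Gaussian (mean and variance)
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

def gauss_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def importance(x, train, test):
    # Importance w(x) = p_test(x) / p_train(x), via a naive plug-in:
    # fit each density, then divide. KLIEP/GM-KLIEP instead estimate
    # w(x) directly, which is more stable where p_train is small.
    mt, vt = fit_gaussian(test)
    mr, vr = fit_gaussian(train)
    return gauss_pdf(x, mt, vt) / gauss_pdf(x, mr, vr)

# Synthetic covariate shift: training data ~ N(0,1), test data ~ N(1,1)
random.seed(1)
train = [random.gauss(0.0, 1.0) for _ in range(2000)]
test = [random.gauss(1.0, 1.0) for _ in range(2000)]
print(importance(1.0, train, test) > importance(-1.0, train, test))  # True
```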

  3. Nonlinear estimation and control of automotive drivetrains

    CERN Document Server

    Chen, Hong

    2014-01-01

    Nonlinear Estimation and Control of Automotive Drivetrains discusses the control problems involved in automotive drivetrains, particularly in hydraulic Automatic Transmission (AT), Dual Clutch Transmission (DCT) and Automated Manual Transmission (AMT). Challenging estimation and control problems, such as driveline torque estimation and gear shift control, are addressed by applying the latest nonlinear control theories, including constructive nonlinear control (Backstepping, Input-to-State Stable) and Model Predictive Control (MPC). The estimation and control performance is improved while the calibration effort is reduced significantly. The book presents many detailed examples of design processes and thus enables the readers to understand how to successfully combine purely theoretical methodologies with actual applications in vehicles. The book is intended for researchers, PhD students, control engineers and automotive engineers. Hong Chen is a professor at the State Key Laboratory of Automotive Simulation and...

  4. Directional Transverse Oscillation Vector Flow Estimation

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2017-01-01

    A method for estimating vector velocities using transverse oscillation (TO) combined with directional beamforming is presented. In Directional Transverse Oscillation (DTO) a normal focused field is emitted and the received signals are beamformed in the lateral direction transverse to the ultrasound beam to increase the amount of data for vector velocity estimation. The approach is self-calibrating as the lateral oscillation period is estimated from the directional signal through a Fourier transform to yield quantitative velocity results over a large range of depths. The approach was extensively simulated using Field IIpro and implemented on the experimental SARUS scanner in connection with a BK Medical 8820e convex array transducer. Velocity estimates for DTO are found for beam-to-flow angles of 60, 75, and 90 degrees, and vessel depths from 24 to 156 mm. Using 16 emissions the standard deviation (SD)...
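
The self-calibration step mentioned in the record, estimating the lateral oscillation period from a Fourier transform of the beamformed signal, can be sketched with a naive DFT peak search. The signal and sampling values below are synthetic placeholders:

```python
import cmath
import math

def dominant_period(signal, dx):
    # Estimate the dominant (e.g. lateral oscillation) period of a
    # uniformly sampled signal from the peak of its discrete Fourier
    # spectrum. Naive O(n^2) DFT for clarity; an FFT would be used
    # in practice.
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return n * dx / best_k   # period in the same units as dx

# Hypothetical beamformed lateral signal with a 2 mm oscillation period,
# sampled every 0.1 mm.
dx = 0.1
sig = [math.sin(2 * math.pi * j * dx / 2.0) for j in range(100)]
print(dominant_period(sig, dx))  # recovers the 2 mm period
```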

  5. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  6. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation ... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  7. Cancer Related-Knowledge - Small Area Estimates

    Science.gov (United States)

    These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.

  8. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  9. Using groundwater levels to estimate recharge

    Science.gov (United States)

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relate to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
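
The water-table fluctuation method described above amounts to scaling observed water-level rises by the specific yield, R = Sy · Δh; a minimal sketch with hypothetical values:

```python
def wtf_recharge(specific_yield, rise_m):
    # Water-table fluctuation (WTF) method: recharge attributed to a
    # single water-level rise event dh is R = Sy * dh, where Sy is the
    # specific yield of the aquifer material.
    return specific_yield * rise_m

# Hypothetical record: Sy = 0.15 and three observed rises (in metres)
# over a year; event recharges are summed to an annual total.
rises = [0.30, 0.12, 0.45]
annual_recharge = sum(wtf_recharge(0.15, dh) for dh in rises)
print(round(annual_recharge, 4))  # 0.1305 m of recharge
```

The review's point about uncertainty shows up directly here: the result scales linearly with `specific_yield`, which is the hardest quantity to determine.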

  10. Food irradiation : estimates of cost of processing

    International Nuclear Information System (INIS)

    Krishnamurthy, K.; Bongirwar, D.R.

    1987-01-01

    For estimating the cost of food irradiation, three factors have to be taken into consideration. These are: (1) capital cost incurred on the irradiation device and its installation, (2) recurring or running cost, which includes maintenance cost and operational expenditure, and (3) product-specific cost dependent on the factors specific to the food item to be processed, its storage, handling and distribution. A simple method is proposed to provide estimates of capital costs and running costs and it is applied to prepare a detailed estimate of costs for irradiation processing of onions and fish in India. The cost of processing onions worked out to be between Rs. 40 and 120 per 1000 kg and for fish Rs. 354 per 1000 kg. These estimates do not take into account transportation costs and fluctuations in marketing procedures. (M.G.B.). 7 tables
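
The three cost components listed in the record combine into a unit processing cost once the capital cost is annualized; the capital-recovery formula and all figures below are illustrative placeholders, not the paper's estimates:

```python
def unit_cost(capital, annual_rate, life_years, running_per_year,
              product_specific_per_t, throughput_t_per_year):
    # Unit processing cost from the three components in the record:
    # (1) annualized capital, (2) recurring/running cost, and
    # (3) product-specific cost. Capital is annualized with the
    # standard capital recovery factor (CRF).
    crf = (annual_rate * (1 + annual_rate) ** life_years
           / ((1 + annual_rate) ** life_years - 1))
    annual_capital = capital * crf
    return ((annual_capital + running_per_year) / throughput_t_per_year
            + product_specific_per_t)

# Hypothetical plant: 2 M capital, 10% rate, 15-year life, 0.3 M/year
# running cost, 20/tonne product-specific cost, 10,000 t/year throughput.
print(round(unit_cost(2_000_000, 0.10, 15, 300_000, 20, 10_000), 2))  # ≈ 76.29
```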

  11. Uncertainties in the estimation of Mmax

    Indian Academy of Sciences (India)

    local site conditions and expanded for a region, ... A case study of estimation of Mmax for ... Delhi, enters the city from north and flows south- ... not been considered for developing relationships ..... denotes the probability of seismic network to.

  12. Optimal Smoothing in Adaptive Location Estimation

    OpenAIRE

    Mammen, Enno; Park, Byeong U.

    1997-01-01

    In this paper higher-order performance of kernel-based adaptive location estimators is considered. Optimal choice of smoothing parameters is discussed and it is shown how much is lost in efficiency by not knowing the underlying translation density.

  13. Statistics of Parameter Estimates: A Concrete Example

    KAUST Repository

    Aguilar, Oscar; Allmaras, Moritz; Bangerth, Wolfgang; Tenorio, Luis

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Most mathematical models include parameters that need to be determined from measurements. The estimated values of these parameters and their uncertainties depend on assumptions made about noise

  14. Estimated radiation dose from timepieces containing tritium

    International Nuclear Information System (INIS)

    McDowell-Boyer, L.M.

    1980-01-01

    Luminescent timepieces containing radioactive tritium, either in elemental form or incorporated into paint, are available to the general public. The purpose of this study was to estimate potential radiation dose commitments received by the public annually as a result of exposure to tritium which may escape from the timepieces during their distribution, use, repair, and disposal. Much uncertainty is associated with final dose estimates due to limitations of empirical data from which exposure parameters were derived. Maximum individual dose estimates were generally less than 3 μSv/yr, but ranged up to 2 mSv under the worst-case conditions postulated. Estimated annual collective (population) doses were less than 5 person-Sv per million timepieces distributed

  16. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    2011-01-01

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u......, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds...... and estimates in Lp, including a new result for type 1, 1-operators that they are always bounded on Lp-functions with compact spectra....

  17. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
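
Two of the three semivariogram models mentioned in the record (spherical and exponential) have simple closed forms; a sketch with illustrative sill and range values (the truncated fractal model is omitted):

```python
import math

def spherical_variogram(h, sill, rng):
    # Spherical model: rises as 1.5r - 0.5r^3 and reaches the sill
    # exactly at the range.
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def exponential_variogram(h, sill, rng):
    # Exponential model: approaches the sill asymptotically; the
    # "practical range" convention (factor 3) is used here.
    return sill * (1.0 - math.exp(-3.0 * h / rng))

print(spherical_variogram(10.0, sill=1.0, rng=10.0))   # 1.0 at the range
print(round(spherical_variogram(5.0, 1.0, 10.0), 4))   # 0.6875
print(round(exponential_variogram(10.0, 1.0, 10.0), 4))
```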

  18. Global Population Density Grid Time Series Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...

  19. Estimating carbon stock in secondary forests

    DEFF Research Database (Denmark)

    Breugel, Michiel van; Ransijn, Johannes; Craven, Dylan

    2011-01-01

    ... is the use of allometric regression models to convert forest inventory data to estimates of aboveground biomass (AGB). The use of allometric models implies decisions on the selection of extant models or the development of a local model, the predictor variables included in the selected model, and the number of trees and species for destructive biomass measurements. We assess uncertainties associated with these decisions using data from 94 secondary forest plots in central Panama and 244 harvested trees belonging to 26 locally abundant species. AGB estimates from species-specific models were used to assess relative errors of estimates from multispecies models. To reduce uncertainty in the estimation of plot AGB, including wood specific gravity (WSG) in the model was more important than the number of trees used for model fitting. However, decreasing the number of trees increased uncertainty of landscape...
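
The allometric conversion step the record refers to can be illustrated with a generic log-linear model; the coefficients and the wood-specific-gravity scaling below are illustrative placeholders, not the fitted models from the study:

```python
import math

def agb_allometric(dbh_cm, wsg, a=-2.0, b=2.4):
    # Generic allometric model ln(AGB) = a + b*ln(D), scaled here by
    # wood specific gravity (WSG) to mimic a multispecies model that
    # includes WSG as a predictor. Coefficients a, b are hypothetical.
    return wsg * math.exp(a + b * math.log(dbh_cm))

# Hypothetical inventory for one plot: (diameter at breast height in cm,
# wood specific gravity). Plot AGB is the sum of tree-level estimates.
plot_trees = [(12.0, 0.55), (30.0, 0.62), (8.5, 0.40)]
plot_agb = sum(agb_allometric(d, w) for d, w in plot_trees)
print(round(plot_agb, 1))
```

This makes the paper's point concrete: any error in the WSG predictor or in the coefficients propagates multiplicatively into every tree-level estimate and hence into plot AGB.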

  20. Climate change trade measures : estimating industry effects

    Science.gov (United States)

    2009-06-01

    Estimating the potential effects of domestic emissions pricing for industries in the United States is complex. If the United States were to regulate greenhouse gas emissions, production costs could rise for certain industries and could cause output, ...