Pedestrian Stride Length Estimation from IMU Measurements and ANN Based Algorithm
Directory of Open Access Journals (Sweden)
Haifeng Xing
2017-01-01
Full Text Available Pedestrian dead reckoning (PDR) can be used for continuous position estimation when satellite or other radio signals are not available, and the accuracy of the stride length measurement is important. Current stride length estimation algorithms, including linear and nonlinear models, consider a few variable factors, and some rely on high precision and high cost equipment. This paper puts forward a stride length estimation algorithm based on a back propagation artificial neural network (BP-ANN), using a consumer-grade inertial measurement unit (IMU); it then discusses various factors in the algorithm. The experimental results indicate that the error of the proposed algorithm in estimating the stride length is approximately 2%, which is smaller than that of the frequency and nonlinear models. Compared with the latter two models, the proposed algorithm does not need to determine individual parameters in advance if the trained neural net is effective. It can, thus, be concluded that this algorithm shows superior performance in estimating pedestrian stride length.
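The mapping from IMU-derived gait features to stride length can be learned by an ordinary back-propagation network. The sketch below is a minimal pure-Python illustration of that idea, not the paper's model: the input features (step frequency, acceleration variance, walker height), the synthetic ground-truth relation, and the network sizes are all assumptions invented for the example.

```python
import math
import random

random.seed(42)

# Hypothetical training data: IMU-derived gait features mapped to stride
# length. The features and the ground-truth relation below are invented for
# this sketch; a real system would use measured strides as labels.
def make_sample():
    freq = random.uniform(1.5, 2.5)       # steps per second
    acc_var = random.uniform(0.5, 3.0)    # variance of vertical acceleration
    height = random.uniform(1.5, 1.9)     # walker height in metres
    stride = 0.3 * freq + 0.05 * acc_var + 0.4 * height + random.gauss(0, 0.02)
    return [freq, acc_var, height], stride

def train_bp_ann(samples, hidden=5, lr=0.05, epochs=300):
    """One-hidden-layer network trained by plain backpropagation (BP)."""
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Forward pass: tanh hidden layer, linear output.
            h = [math.tanh(sum(w1[j][i] * x[i] for i in range(n_in)) + b1[j])
                 for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = out - y
            # Backward pass: gradient of the squared error, per weight.
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w1[j][i] * x[i] for i in range(n_in)) + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

samples = [make_sample() for _ in range(200)]
predict = train_bp_ann(samples)
mean_abs_err = sum(abs(predict(x) - y) for x, y in samples) / len(samples)
```

On this toy data the trained net should beat a predict-the-mean baseline by a wide margin, the same sanity check one would apply before trusting a trained net in place of per-user calibration.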
The importance of stride length and stride frequency in middle ...
African Journals Online (AJOL)
... also found that the better runners have faster stride frequencies and that provincial middle distance runners use lower stride frequencies than international middle distance runners. Key Words: Biomechanics, stride length, stride frequency, maximum oxygen consumption, leg length, middle distance runners, road runners.
Connick, Mark J; Li, Francois-Xavier
2014-01-01
Large alterations to the preferred running stride decrease running economy, and shorter strides increase leg muscle activity. However, the effect of altered strides on the timing of leg muscle activation is not known. The aim of this study was to evaluate the impact of moderate alterations to the running stride on running economy and the timing of biceps femoris (BF), vastus lateralis (VL) and gastrocnemius (GAST) muscle contractions. The preferred stride pattern for eleven trained male runners was measured prior to a separate visit where participants ran for bouts of 5 min whilst synchronising foot contacts to a metronome signal which was tuned to (1) the preferred stride, and (2) frequencies which related to ± 8% and ± 4% of the preferred stride length. Running economy was measured at each stride pattern along with electromyography and three-dimensional kinematics to estimate onset and offset of muscle contractions for each muscle. Running economy was greatest at the preferred stride length. However, a quadratic fit to the data was optimised at a stride which was 2.9% shorter than preferred. Onset and offset of BF and VL muscle contractions occurred earlier with shorter than preferred strides. We detected no changes to the timing of muscle contractions with longer than preferred strides and no changes to GAST muscle contractions. The results suggest that runners optimise running economy with a stride length that is close to, but shorter than, the preferred stride, and that timing of BF and VL muscle contractions change with shorter than preferred strides. Copyright © 2013 Elsevier B.V. All rights reserved.
Comparison of accelerometry stride time calculation methods.
Norris, Michelle; Kenny, Ian C; Anderson, Ross
2016-09-06
Inertial sensors such as accelerometers and gyroscopes can provide a multitude of information on running gait. Running parameters such as stride time and ground contact time can all be identified within tibial accelerometry data. Within this, stride time is a popular parameter of interest, possibly due to its role in running economy. However, there are multiple methods utilised to derive stride time from tibial accelerometry data, some of which may offer complications when implemented on larger data files. Therefore, the purpose of this study was to compare previously utilised methods of stride time derivation to an original proposed method, utilising medio-lateral tibial acceleration data filtered at 2 Hz, allowing for greater efficiency in stride time output. Tibial accelerometry data from six participants training for a half marathon were utilised. One right leg run was randomly selected for each participant, in which five consecutive running stride times were calculated. Four calculation methods were employed to derive stride time. A repeated measures analysis of variance (ANOVA) identified no significant difference in stride time between stride time calculation methods (p=1.00), whilst intra-class coefficient values (all >0.95) and coefficient of variance values indicated strong agreement between methods. The proposed method possibly offers a simplified technique for stride time output during running gait analysis. This method may be less influenced by "double peak" error and minor fluctuations within the data, allowing for accurate and efficient automated data output in both real time and post processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
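The filter-then-find-peaks idea behind such stride time methods can be sketched in a few lines. Everything below is illustrative: the synthetic "medio-lateral acceleration" signal, the sample rate, and the centred moving average standing in for the paper's 2 Hz low-pass filter are all assumptions.

```python
import math

FS = 100.0                  # assumed sample rate (samples per second)
STRIDE_T = 1.05             # true stride time (s) of the synthetic signal

# One dominant oscillation per stride plus an 8 Hz component standing in
# for impact transients and other high-frequency content.
n = int(10 * FS)
signal = [math.sin(2 * math.pi * (t / FS) / STRIDE_T)
          + 0.3 * math.sin(2 * math.pi * 8.0 * t / FS)
          for t in range(n)]

def smooth(x, window):
    """Crude low-pass stand-in: centred moving average."""
    half = window // 2
    return [sum(x[max(0, i - half):i + half + 1])
            / len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def stride_times(x, fs):
    """Stride time = interval between successive local maxima."""
    peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

filtered = smooth(signal, window=25)    # ~0.25 s window suppresses the 8 Hz part
times = stride_times(filtered, FS)
```

Once the high-frequency content is removed, one local maximum survives per stride, so the stride time falls out of simple peak-to-peak differencing without any "double peak" disambiguation.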
Stride length: measuring its instantaneous value
International Nuclear Information System (INIS)
Campiglio, G C; Mazzeo, J R
2007-01-01
Human gait has been studied from different viewpoints: kinematics, dynamics, sensibility and others. Many of its characteristics still remain open to research, both for normal gait and for pathological gait. Objective measures of some of its most significant spatial/temporal parameters are important in this context. Stride length, one of these parameters, is defined as the distance between two consecutive contacts of one foot with the ground. In this work we present a device designed to provide automatic measures of stride length. Its features make it particularly appropriate for the evaluation of pathological gait.
Stetter, Bernd J; Buckeridge, Erica; von Tscharner, Vinzenz; Nigg, Sandro R; Nigg, Benno M
2016-02-01
This study presents a new approach for automated identification of ice hockey skating strides and a method to detect ice contact and swing phases of individual strides by quantifying vibrations in 3D acceleration data during the blade-ice interaction. The strides of a 30-m forward sprinting task, performed by 6 ice hockey players, were evaluated using a 3D accelerometer fixed to a hockey skate. Synchronized plantar pressure data were recorded as reference data. To determine the accuracy of the new method on a range of forward stride patterns for temporal skating events, estimated contact times and stride times for a sequence of 5 consecutive strides was validated. Bland-Altman limits of agreement (95%) between accelerometer and plantar pressure derived data were less than 0.019 s. Mean differences between the 2 capture methods were shown to be less than 1 ms for contact and stride time. These results demonstrate the validity of the novel approach to determine strides, ice contact, and swing phases during ice hockey skating. This technology is accurate, simple, effective, and allows for in-field ice hockey testing.
Stride time synergy in relation to walking during dual task
DEFF Research Database (Denmark)
Læssøe, Uffe; Madeleine, Pascal
2012-01-01
point of view elemental and performance variables may represent good and bad components of variability [2]. In this study we propose that the gait pattern can be seen as an on-going movement synergy in which each stride is corrected by the next stride (elemental variables) to ensure a steady gait (performance variable). AIM: The aim of this study was to evaluate stride time synergy and to identify good and bad stride variability in relation to walking during dual task. METHODS: Thirteen healthy young participants walked along a 2x5 meter figure-of-eight track at a self-selected comfortable speed. Good variance was assessed with respect to a line with a positive slope going through the mean of the strides, and bad variance with respect to a similar line with a negative slope. The general variance coefficient (CV%) was also computed. The effect of introducing a concurrent cognitive task (dual task: counting backwards in sequences of 7) was evaluated.
Stride length asymmetry in split-belt locomotion.
Hoogkamer, Wouter; Bruijn, Sjoerd M; Duysens, Jacques
2014-01-01
The number of studies utilizing a split-belt treadmill is rapidly increasing in recent years. This has led to some confusion regarding the definitions of reported gait parameters. The purpose of this paper is to clearly present the definitions of the gait parameters that are commonly used in split-belt treadmill studies. We argue that the modified version of stride length for split-belt gait, which is different from the standard definition of stride length and actually is a measure of limb excursion, should be referred to as 'limb excursion' in future studies. Furthermore, the symmetry of stride length and stride time is specifically addressed. Copyright © 2013 Elsevier B.V. All rights reserved.
Stride time synergy in relation to walking during dual task
DEFF Research Database (Denmark)
Læssøe, Uffe; Madeleine, Pascal
2012-01-01
point of view elemental and performance variables may represent good and bad components of variability [2]. In this study we propose that the gait pattern can be seen as an on-going movement synergy in which each stride is corrected by the next stride (elemental variables) to ensure a steady gait (performance variable). AIM: The aim of this study was to evaluate stride time synergy and to identify good and bad stride variability in relation to walking during dual task. METHODS: Thirteen healthy young participants walked along a 2x5 meter figure-of-eight track at a self-selected comfortable speed. RESULTS: The variance coefficient (CV%) increased significantly from 1.59 to 1.90 (p < 0.05). Using the synergy approach, the good/bad variance ratio during single task was 2.53 (CI95%: 2.07-3.00). When shifting to dual task the good/bad ratio was 2.28 (CI95%: ...).
Brahms, C Markus; Zhao, Yang; Gerhard, David; Barden, John M
2018-02-10
From a research perspective, detailed knowledge about stride length (SL) is important for coaches, clinicians and researchers because together with stride rate it determines the speed of locomotion. Moreover, individual SL vectors represent the integrated output of different biomechanical determinants and as such provide valuable insight into the control of running gait. In recent years, several studies have tried to estimate SL using body-mounted inertial measurement units (IMUs) and have reported promising results. However, many studies have used systems based on multiple sensors or have only focused on estimating SL for walking. Here we test the concurrent validity of a single foot-mounted, 9-degree of freedom IMU to estimate SL for running. We employed a running-specific, Kalman filter based zero-velocity update (ZUPT) algorithm to calculate individual SL vectors with the IMU and compared the results to SLs that were simultaneously recorded by a 6-camera 3D motion capture system. The results showed that the analytical procedures were able to successfully identify all strides that were recorded by the camera system and that excellent levels of absolute agreement (ICC(3,1) = 0.955) existed between the two methods. The findings demonstrate that individual SL vectors can be accurately estimated with a single foot-mounted IMU when running in a controlled laboratory setting. Copyright © 2018 Elsevier Ltd. All rights reserved.
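The core of a ZUPT approach can be illustrated without the Kalman filter machinery: integrate foot acceleration to velocity and position, and reset the velocity to zero whenever the foot is detected as stationary, so integration drift cannot accumulate across strides. The 1D sketch below uses a synthetic acceleration profile and made-up stillness thresholds; it is not the paper's running-specific algorithm.

```python
# 1D sketch of stride length from foot-mounted IMU data via zero-velocity
# updates (ZUPT). Sample rate, acceleration profile and thresholds are all
# illustrative assumptions.
FS = 100.0                  # assumed sample rate (Hz)
DT = 1.0 / FS

# Synthetic forward foot acceleration: stance phases (foot flat, zero
# acceleration) alternating with swing bursts (accelerate, then brake).
accel = []
for _ in range(5):                       # five strides
    accel += [0.0] * 30                  # stance
    accel += [8.0] * 20 + [-8.0] * 20    # swing

def zupt_stride_lengths(acc, dt, still_thresh=0.1, still_samples=10):
    """Integrate to position, zeroing velocity whenever the foot is still."""
    vel, pos, quiet = 0.0, 0.0, 0
    marks = []                           # foot position at each detected stance
    for a in acc:
        vel += a * dt
        pos += vel * dt
        quiet = quiet + 1 if abs(a) < still_thresh else 0
        if quiet == still_samples:       # stance detected: apply the ZUPT
            vel = 0.0
            marks.append(pos)
    # Stride length = forward distance covered between successive stances.
    return [b - a for a, b in zip(marks, marks[1:])]

strides = zupt_stride_lengths(accel, DT)
```

Because each stance resets the velocity estimate, errors in one stride's integration do not contaminate the next, which is what makes single-IMU stride-by-stride estimates feasible at all.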
Adaptation and prosthesis effects on stride-to-stride fluctuations in amputee gait.
Directory of Open Access Journals (Sweden)
Shane R Wurdeman
Full Text Available Twenty-four individuals with transtibial amputation were recruited to a randomized, crossover design study to examine stride-to-stride fluctuations of lower limb joint flexion/extension time series using the largest Lyapunov exponent (λ. Each individual wore a "more appropriate" and a "less appropriate" prosthesis design based on the subject's previous functional classification for a three week adaptation period. Results showed decreased λ for the sound ankle compared to the prosthetic ankle (F1,23 = 13.897, p = 0.001 and a decreased λ for the "more appropriate" prosthesis (F1,23 = 4.849, p = 0.038. There was also a significant effect for the time point in the adaptation period (F2,46 = 3.164, p = 0.050. Through the adaptation period, a freezing and subsequent freeing of dynamic degrees of freedom was seen as the λ at the ankle decreased at the midpoint of the adaptation period compared to the initial prosthesis fitting (p = 0.032, but then increased at the end compared to the midpoint (p = 0.042. No differences were seen between the initial fitting and the end of the adaptation for λ (p = 0.577. It is concluded that the λ may be a feasible clinical tool for measuring prosthesis functionality, and that adaptation to a new prosthesis is a process through which the motor control develops mastery of the redundant degrees of freedom present in the system.
Hausdorff, Jeffrey M
2007-01-01
Until recently, quantitative studies of walking have typically focused on properties of a typical or average stride, ignoring the stride-to-stride fluctuations and considering these fluctuations to be noise. Work over the past two decades has demonstrated, however, that the alleged noise actually conveys important information. The magnitude of the stride-to-stride fluctuations and their changes over time during a walk – gait dynamics – may be useful in understanding the physiology of gait, in quantifying age-related and pathologic alterations in the locomotor control system, and in augmenting objective measurement of mobility and functional status. Indeed, alterations in gait dynamics may help to determine disease severity, medication utility, and fall risk, and to objectively document improvements in response to therapeutic interventions, above and beyond what can be gleaned from measures based on the average, typical stride. This review discusses support for the idea that gait dynamics has meaning and may be useful in providing insight into the neural control of locomotion and for enhancing functional assessment of aging, chronic disease, and their impact on mobility. PMID:17618701
Precision and accuracy of the new XPrecia Stride mobile coagulometer.
Piacenza, Francesco; Galeazzi, Roberta; Cardelli, Maurizio; Moroni, Fausto; Provinciali, Mauro; Pierpaoli, Elisa; Giovagnetti, Simona; Appolloni, Stefania; Marchegiani, Francesca
2017-08-01
Oral anticoagulation therapy (OAT) with coumarins (vitamin K antagonists) is the most widely used therapy against thromboembolism. Prothrombin time (PT) International Normalized Ratio (INR) monitoring is fundamental to establish the coumarin dosage and prevent bleeding complications or thrombotic events. In this context, the method and apparatus used for providing the INR measurements are crucial. Several studies have been published regarding the precision and accuracy of mobile coagulometers, with different conclusions; no studies have been published regarding the new XPrecia Stride mobile coagulometer (Siemens). The aim of this work is to analyze the precision and accuracy of the new XPrecia Stride mobile coagulometer to provide recommendations for clinical use and quality control. A total of 163 patients (mean age = 77.4 years) under warfarin OAT, for whom the INR was assessed by both the traditional cs 2100i Sysmex and the new XPrecia Stride mobile coagulometer, were included in this pilot study. The precision of the new mobile coagulometer proved very good, although the INR deviated by more than 15% from the true value in 20% of cases. Considering the overall results obtained by the new XPrecia Stride in comparison to those obtained from the other commercial devices, we conclude that the new coagulometer is sufficiently reliable for clinical settings. However, a larger trial to confirm these data is needed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Strides towards substantive democracy and gender perspective in ...
African Journals Online (AJOL)
Strides towards substantive democracy and gender perspective in the 21st century of Africa. Kelvin Bribena. Abstract. The weakness of the multiparty political system in Africa will be analysed in line with accepted standards for transparency, electoral provisions, as well as the free and fair establishment, assembly and ...
Extraction of stride events from gait accelerometry during treadmill walking.
Sejdić, Ervin; Lowry, Kristin A; Bellanca, Jennica; Perera, Subashan; Redfern, Mark S; Brach, Jennifer S
Evaluating stride events can be valuable for understanding the changes in walking due to aging and neurological diseases. However, creating the time series necessary for this analysis can be cumbersome. In particular, accurately finding the heel-contact and toe-off events that define the gait cycle is difficult. We proposed a method to extract stride cycle events from tri-axial accelerometry signals. We validated our method via data collected from 14 healthy controls, 10 participants with Parkinson's disease and 11 participants with peripheral neuropathy. All participants walked at self-selected comfortable and reduced speeds on a computer-controlled treadmill. Gait accelerometry signals were captured via a tri-axial accelerometer positioned over the L3 segment of the lumbar spine. Motion capture data were also collected and served as the comparison method. Our analysis of the accelerometry data showed that the proposed methodology was able to accurately extract heel and toe contact events from both feet. We used t-tests, ANOVA and mixed models to summarize results and make comparisons. Mean gait cycle intervals were the same as those derived from motion capture, and cycle-to-cycle variability measures were within 1.5%. Subject group differences could be identified similarly using measures with the two methods. A simple tri-axial accelerometer accompanied by a signal processing algorithm can be used to capture stride events. Clinical Impact: The proposed algorithm enables the assessment of stride events during treadmill walking, and is the first step towards the assessment of stride events using tri-axial accelerometers in real-life settings.
Persistent fluctuations in stride intervals under fractal auditory stimulation.
Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas
2014-01-01
Stride sequences of healthy gait are characterized by persistent long-range correlations, which become anti-persistent in the presence of an isochronous metronome. The latter phenomenon is of particular interest because auditory cueing is generally considered to reduce stride variability and may hence be beneficial for stabilizing gait. Complex systems tend to match their correlation structure when synchronizing. In gait training, can one capitalize on this tendency by using a fractal metronome rather than an isochronous one? We examined whether auditory cues with fractal variations in inter-beat intervals yield similar fractal inter-stride interval variability as isochronous auditory cueing in two complementary experiments. In Experiment 1, participants walked on a treadmill while being paced by either an isochronous or a fractal metronome with different variation strengths between beats in order to test whether participants managed to synchronize with a fractal metronome and to determine the necessary amount of variability for participants to switch from anti-persistent to persistent inter-stride intervals. Participants did synchronize with the metronome despite its fractal randomness. The corresponding coefficient of variation of inter-beat intervals was fixed in Experiment 2, in which participants walked on a treadmill while being paced by non-isochronous metronomes with different scaling exponents. As expected, inter-stride intervals showed persistent correlations similar to self-paced walking only when cueing contained persistent correlations. Our results open up a new window to optimize rhythmic auditory cueing for gait stabilization by integrating fractal fluctuations in the inter-beat intervals.
Directory of Open Access Journals (Sweden)
Daniel eKress
2014-09-01
Full Text Available During locomotion animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight, during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride-cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion sensitive wide-field neuron in the blowfly visual system. This cell has been suggested to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell's response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is much impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride-coupled image shifts are discussed.
Head and Tibial Acceleration as a Function of Stride Frequency and Visual Feedback during Running.
Busa, Michael A; Lim, Jongil; van Emmerik, Richard E A; Hamill, Joseph
2016-01-01
Individuals regulate the transmission of shock to the head during running at different stride frequencies, although the consequences of this on head-gaze stability remain unclear. The purpose of this study was to examine if providing individuals with visual feedback of their head-gaze orientation impacts tibial and head accelerations, shock attenuation and head-gaze motion during preferred speed running at different stride frequencies. Fifteen strides from twelve recreational runners running on a treadmill at their preferred speed were collected during five stride frequencies (preferred, ±10% and ±20% of preferred) in two visual task conditions (with and without real-time visual feedback of head-gaze orientation). The main outcome measures were tibial and head peak accelerations assessed in the time and frequency domains, shock attenuation from tibia to head, and the magnitude and velocity of head-gaze motion. Decreasing stride frequency resulted in greater vertical accelerations of the tibia (p < 0.05), whereas greater head acceleration was only observed for the slowest stride frequency condition. Visual feedback resulted in reduced head acceleration magnitude (p < 0.05). Runners thus maintained head acceleration within a wide range of stride frequencies; only at a stride frequency 20% below preferred did head acceleration increase. Furthermore, impact accelerations of the head and tibia appear to be solely a function of stride frequency, as no differences were observed between feedback conditions. Increased visual task demands through head-gaze feedback resulted in reductions in head accelerations in the active portion of stance and increased head-gaze stability.
Stride angle as a novel indicator of running economy in well-trained runners.
Santos-Concejero, Jordan; Tam, Nicholas; Granados, Cristina; Irazusta, Jon; Bidaurrazaga-Letona, Iraia; Zabala-Lili, Jon; Gil, Susana M
2014-07-01
The main purpose of this study was to investigate the relationship between a novel biomechanical variable, the stride angle, and running economy (RE) in a homogeneous group of long-distance athletes. Twenty-five well-trained male runners completed 4-minute running stages on a treadmill at different set velocities. During the test, biomechanical variables such as stride angle, swing time, ground contact time, stride length, stride frequency, and the different sub-phases of ground contact were recorded using an optical measurement system. VO2 values at velocities below the lactate threshold were measured to calculate RE. Stride angle was negatively correlated with RE at every speed (p ≤ 0.05), as were ground contact time and running performance according to the best 10-km race time (p ≤ 0.05, moderate and large effect sizes). Last, stride angle was correlated with ground contact time at every speed (p ≤ 0.05). A greater stride angle allows runners to minimize contact time during ground contact, thereby facilitating a better RE. Coaches and/or athletes may find stride angle a useful and easily obtainable measure to track and make alterations to running technique, because changes in stride angle may influence the energy cost of running and lead to improved performance.
Directory of Open Access Journals (Sweden)
Sejdić Ervin
2010-02-01
Full Text Available Abstract Background Stride interval persistence, a term used to describe the correlation structure of stride interval time series, is thought to provide insight into neuromotor control, though its exact clinical meaning has not yet been realized. Since human locomotion is shaped by energy efficient movements, it has been hypothesized that stride interval dynamics and energy expenditure may be inherently tied, both having demonstrated similar sensitivities to age, disease, and pace-constrained walking. Findings This study tested for correlations between stride interval persistence and measures of energy expenditure, including mass-specific gross oxygen consumption per minute (VO2), mass-specific gross oxygen cost per meter (VO2 cost) and heart rate (HR). Metabolic and stride interval data were collected from 30 asymptomatic children who completed one 10-minute walking trial under each of the following conditions: (i) overground walking, (ii) hands-free treadmill walking, and (iii) handrail-supported treadmill walking. Stride interval persistence was not significantly correlated with VO2 (p > 0.32), VO2 cost (p > 0.18) or HR (p > 0.56). Conclusions No simple linear dependence exists between stride interval persistence and measures of gross energy expenditure in asymptomatic children when walking overground and on a treadmill.
Directory of Open Access Journals (Sweden)
Tura Andrea
2012-02-01
Full Text Available Abstract Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference (MDD) of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.05). Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees.
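Step and stride regularity indices of this kind are simply the normalised autocorrelation of the acceleration signal evaluated at the step lag and the stride lag. A minimal sketch, with a synthetic asymmetric-gait signal and illustrative lags (50 samples per step, 100 per stride), not the paper's exact estimator:

```python
import math

def autocorr_coeff(x, lag):
    """Normalised autocorrelation coefficient of x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

# Synthetic vertical-acceleration profile of an asymmetric gait: the two
# steps of each stride differ in amplitude. Lags and amplitudes are
# illustrative choices, not measured values.
left = [0.8 * math.sin(math.pi * i / 50) for i in range(50)]
right = [1.0 * math.sin(math.pi * i / 50) for i in range(50)]
signal = (left + right) * 20             # 20 strides

ad1 = autocorr_coeff(signal, 50)         # step regularity
ad2 = autocorr_coeff(signal, 100)        # stride regularity
```

Because the synthetic left and right steps differ in amplitude, the step-lag coefficient ad1 drops below the stride-lag coefficient ad2, which stays near 1: exactly the asymmetry signature such indices are designed to expose.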
Unfavorable Strides in Cache Memory Systems (RNR Technical Report RNR-92-015)
Directory of Open Access Journals (Sweden)
David H. Bailey
1995-01-01
Full Text Available An important issue in obtaining high performance on a scientific application running on a cache-based computer system is the behavior of the cache when data are accessed at a constant stride. Others who have discussed this issue have noted an odd phenomenon in such situations: A few particular innocent-looking strides result in sharply reduced cache efficiency. In this article, this problem is analyzed, and a simple formula is presented that accurately gives the cache efficiency for various cache parameters and data strides.
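The phenomenon is easy to reproduce with a toy cache model: map each strided address to a memory line and each line to a cache set, and count how many sets the sweep actually uses. The cache geometry below (32 KB direct-mapped cache, 64-byte lines, 8-byte elements) is an illustrative assumption, not taken from the report.

```python
# Toy model of a direct-mapped cache: count how many distinct lines and
# cache sets a strided sweep over an array touches.
CACHE_BYTES = 32 * 1024
LINE_BYTES = 64
NUM_SETS = CACHE_BYTES // LINE_BYTES    # 512 sets
ELEM_BYTES = 8                          # double-precision elements

def lines_and_sets(stride_elems, accesses=512):
    """Distinct memory lines touched, and distinct cache sets they map to."""
    lines = {(i * stride_elems * ELEM_BYTES) // LINE_BYTES
             for i in range(accesses)}
    sets = {line % NUM_SETS for line in lines}
    return len(lines), len(sets)

# When lines >> sets, many addresses collide in the same few sets and evict
# each other repeatedly: the sharply reduced efficiency seen at certain
# innocent-looking (typically power-of-2) strides.
for stride in (1, 7, 512, 4096):
    lines, sets = lines_and_sets(stride)
    print(f"stride {stride:5d}: {lines:4d} lines -> {sets:4d} sets")
```

A unit stride maps each line to its own set, while a stride equal to a large power of two folds hundreds of lines onto a handful of sets, so the same lines are evicted and refetched over and over.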
Select injury-related variables are affected by stride length and foot strike style during running.
Boyer, Elizabeth R; Derrick, Timothy R
2015-09-01
Some frontal plane and transverse plane variables have been associated with running injury, but it is not known if they differ with foot strike style or as stride length is shortened. To identify if step width, iliotibial band strain and strain rate, positive and negative free moment, pelvic drop, hip adduction, knee internal rotation, and rearfoot eversion differ between habitual rearfoot and habitual mid-/forefoot strikers when running with both a rearfoot strike (RFS) and a mid-/forefoot strike (FFS) at 3 stride lengths. Controlled laboratory study. A total of 42 healthy runners (21 habitual rearfoot, 21 habitual mid-/forefoot) ran overground at 3.35 m/s with both a RFS and a FFS at their preferred stride lengths and 5% and 10% shorter. Variables did not differ between habitual groups. Step width was 1.5 cm narrower for FFS, widening to 0.8 cm as stride length shortened. Iliotibial band strain and strain rate did not differ between foot strikes but decreased as stride length shortened (0.3% and 1.8%/s, respectively). Pelvic drop was reduced 0.7° for FFS compared with RFS, and both pelvic drop and hip adduction decreased as stride length shortened (0.8° and 1.5°, respectively). Peak knee internal rotation was not affected by foot strike or stride length. Peak rearfoot eversion was not different between foot strikes but decreased 0.6° as stride length shortened. Peak positive free moment (normalized to body weight [BW] and height [h]) was not affected by foot strike or stride length. Peak negative free moment was -0.0038 BW·m/h greater for FFS and decreased -0.0004 BW·m/h as stride length shortened. The small decreases in most variables as stride length shortened were likely associated with the concomitant wider step width. RFS had slightly greater pelvic drop, while FFS had slightly narrower step width and greater negative free moment. Shortening one's stride length may decrease, or at least not increase, the propensity for running injuries based on the variables assessed.
Effects of changing the random number stride in Monte Carlo calculations
International Nuclear Information System (INIS)
Hendricks, J.S.
1991-01-01
A common practice in Monte Carlo radiation transport codes is to start each random walk a specified number of steps up the random number sequence from the previous one. This offset is called the stride in the random number sequence between source particles. It is used for correlated sampling or to provide tree-structured random numbers. A new random number generator algorithm for the major Monte Carlo code MCNP has been written to allow adjustment of the random number stride. This random number generator is machine portable. The effects of varying the stride for several sample problems are examined.
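The stride mechanism can be sketched for a multiplicative linear congruential generator (LCG) of the kind MCNP has traditionally used (multiplier 5**19, modulus 2**48). The stride value and starting seed below are illustrative assumptions, not the settings of any particular MCNP release:

```python
# Sketch of striding a multiplicative LCG: x_{n+1} = A * x_n (mod M).
# Jumping ahead k steps reduces to one modular exponentiation,
# x_{n+k} = A**k * x_n (mod M), so each source particle can be seeded
# a fixed stride up the sequence in O(log k) time.

M = 2**48          # modulus (MCNP-style, illustrative)
A = 5**19          # multiplier (MCNP-style, illustrative)
STRIDE = 152917    # random numbers skipped between source particles (assumed)

def advance(seed: int, k: int) -> int:
    """Jump the generator ahead k steps via modular exponentiation."""
    return (pow(A, k, M) * seed) % M

def particle_seed(initial_seed: int, history: int) -> int:
    """Seed for the given random walk: `history` strides up the sequence."""
    return advance(initial_seed, history * STRIDE)

seed0 = 5**19      # an illustrative starting seed
s1 = particle_seed(seed0, 1)
s2 = particle_seed(seed0, 2)

# Two single strides equal one double stride; correlated sampling relies
# on this reproducibility of per-particle substreams.
assert advance(s1, STRIDE) == s2
```

Because each history owns a disjoint, reproducible block of the sequence, perturbing the problem leaves every other particle's random numbers unchanged, which is exactly what correlated sampling needs.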
Hausdorff, J. M.; Mitchell, S. L.; Firtion, R.; Peng, C. K.; Cudkowicz, M. E.; Wei, J. Y.; Goldberger, A. L.
1997-01-01
Fluctuations in the duration of the gait cycle (the stride interval) display fractal dynamics and long-range correlations in healthy young adults. We hypothesized that these stride-interval correlations would be altered by changes in neurological function associated with aging and certain disease states. To test this hypothesis, we compared the stride-interval time series of 1) healthy elderly subjects and young controls and of 2) subjects with Huntington's disease and healthy controls. Using detrended fluctuation analysis we computed alpha, a measure of the degree to which one stride interval is correlated with previous and subsequent intervals over different time scales. The scaling exponent alpha was significantly lower in elderly subjects compared with young subjects (elderly: 0.68 +/- 0.14; young: 0.87 +/- 0.15; P < 0.05). Alpha was also significantly lower in subjects with Huntington's disease than in their controls, indicating that stride-interval fluctuations are more random (less correlated) both in elderly subjects and in subjects with Huntington's disease. Abnormal alterations in the fractal properties of gait dynamics are apparently associated with changes in central nervous system control.
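The detrended fluctuation analysis used here can be sketched in a few lines. The box sizes and the synthetic white-noise test series are illustrative choices, not the study's settings:

```python
import numpy as np

# Minimal DFA sketch: integrate the series, detrend it in boxes of size n,
# and read alpha off the slope of log F(n) vs log n. White noise should
# give alpha near 0.5; long-range correlated series give larger alpha.

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Return the DFA scaling exponent alpha of a 1-D series."""
    y = np.cumsum(x - np.mean(x))               # integrated (profile) series
    fluct = []
    for n in scales:
        n_boxes = len(y) // n
        f2 = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)        # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))      # RMS fluctuation at scale n
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
white = rng.standard_normal(2048)
print(round(dfa_alpha(white), 2))   # uncorrelated noise: alpha near 0.5
```

An alpha near 0.5 marks an uncorrelated series, while values approaching 1.0 (as in healthy young gait) indicate persistent long-range correlations.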
Ojeda, Lauro V; Rebula, John R; Kuo, Arthur D; Adamczyk, Peter G
2015-10-01
Walking is not always a free and unencumbered task. Everyday activities such as walking in pairs, in groups, or on structured walkways can limit the acceptable gait patterns, leading to motor behavior that differs from that observed in more self-selected gait. Such contexts may lead to gait performance different from that observed in typical laboratory experiments, for example, during treadmill walking. We sought to systematically measure the impact of such task constraints by comparing gait parameters and their variability during walking in different conditions over-ground and on a treadmill. We reconstructed foot motion from foot-mounted inertial sensors, and characterized forward, lateral and angular foot placement while subjects walked over-ground in a straight hallway and on a treadmill. Over-ground walking was performed in three variations: with no constraints (self-selected, SS); while deliberately varying walking speed (self-varied, SV); and while following a toy pace car programmed to vary speed (externally-varied, EV). We expected that these conditions would exhibit a statistically similar relationship between stride length and speed, and between stride length and stride period. We also expected treadmill walking (TM) would differ in two ways: first, that variability in stride length and stride period would conform to a constant-speed constraint opposite in slope from the normal relationship; and second, that stride length would decrease, leading to combinations of stride length and speed not observed in over-ground conditions. Results showed that all over-ground conditions used similar stride length-speed relationships, and that variability in treadmill walking conformed to a constant-speed constraint line, as expected. Decreased stride length was observed in both TM and EV conditions, suggesting adaptations due to heightened awareness or to prepare for unexpected changes or problems. We also evaluated stride variability in constrained and unconstrained conditions.
Knee Stretch Walking Method for Biped Robot: Using Toe and Heel Joints to Increase Walking Strides
Sato, Takahiko; Shimmyo, Shuhei; Nakazato, Miki; Mikami, Kei; Sato, Tomoya; Sakaino, Sho; Ohnishi, Kouhei
This paper proposes a knee stretch walking method for biped robots; the method uses toe and heel joints to increase walking stride. A knee can be stretched by switching control variables. By walking with stretched knees, heel contact at touchdown, and toe push-off at lift-off, biped robots can increase their walking stride and speed. The validity of the proposed method is confirmed by simulation and experimental results.
Hausdorff, J. M.; Cudkowicz, M. E.; Firtion, R.; Wei, J. Y.; Goldberger, A. L.
1998-01-01
The basal ganglia are thought to play an important role in regulating motor programs involved in gait and in the fluidity and sequencing of movement. We postulated that the ability to maintain a steady gait, with low stride-to-stride variability of gait cycle timing and its subphases, would be diminished with both Parkinson's disease (PD) and Huntington's disease (HD). To test this hypothesis, we obtained quantitative measures of stride-to-stride variability of gait cycle timing in subjects with PD (n = 15), HD (n = 20), and disease-free controls (n = 16). All measures of gait variability were significantly increased in PD and HD. In subjects with PD and HD, gait variability measures were two and three times that observed in control subjects, respectively. The degree of gait variability correlated with disease severity. In contrast, gait speed was significantly lower in PD, but not in HD, and average gait cycle duration and the time spent in many subphases of the gait cycle were similar in control subjects, HD subjects, and PD subjects. These findings are consistent with a differential control of gait variability, speed, and average gait cycle timing that may have implications for understanding the role of the basal ganglia in locomotor control and for quantitatively assessing gait in clinical settings.
Project Stride: An Equine-Assisted Intervention to Reduce Symptoms of Social Anxiety in Young Women.
Alfonso, Sarah V; Alfonso, Lauren A; Llabre, Maria M; Fernandez, M Isabel
2015-01-01
Although there is evidence supporting the use of equine-assisted activities to treat mental disorders, its efficacy in reducing signs and symptoms of social anxiety in young women has not been examined. We developed and pilot tested Project Stride, a brief, six-session intervention combining equine-assisted activities and cognitive-behavioral strategies to reduce symptoms of social anxiety. A total of 12 women, 18-29 years of age, were randomly assigned to Project Stride or a no-treatment control. Participants completed the Liebowitz Social Anxiety Scale at baseline, immediate-post, and 6 weeks after treatment. Project Stride was highly acceptable and feasible. Compared to control participants, those in Project Stride had significantly greater reductions in social anxiety scores from baseline to immediate-post [decrease of 24.8 points; t (9) = 3.40, P = .008)] and from baseline to follow-up [decrease of 31.8 points; t (9) = 4.12, P = .003)]. These findings support conducting a full-scale efficacy trial of Project Stride. Copyright © 2015 Elsevier Inc. All rights reserved.
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin
2017-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a "biomarker" of certain diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride-interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of 16 mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen from the 16 due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride-time-series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length. Copyright © 2016 Elsevier Ltd. All rights reserved.
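The core of the average wavelet method, regressing log wavelet-coefficient variance against decomposition level, can be sketched as follows. For simplicity this uses a hand-rolled Haar transform rather than the symlet 2 wavelet the paper recommends, so it is an illustration of the regression idea under stated assumptions, not the authors' implementation:

```python
import numpy as np

# Haar-based scaling-exponent sketch: compute detail-coefficient variance
# at each dyadic level, then fit the slope of log2(variance) vs level.
# For fractal noise the slope is ~2H-1 (fGn) or ~2H+1 (fBm), H = Hurst.

def haar_detail_variances(x, levels=6):
    """Variance of Haar detail coefficients at each dyadic level."""
    a = np.asarray(x, dtype=float)
    variances = []
    for _ in range(levels):
        n = len(a) // 2 * 2
        even, odd = a[0:n:2], a[1:n:2]
        d = (even - odd) / np.sqrt(2)   # detail coefficients at this level
        a = (even + odd) / np.sqrt(2)   # approximation passed to next level
        variances.append(np.var(d))
    return np.array(variances)

def wavelet_slope(x, levels=6):
    """Slope of log2(detail variance) vs level."""
    v = haar_detail_variances(x, levels)
    j = np.arange(1, levels + 1)
    slope, _ = np.polyfit(j, np.log2(v), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
# White noise has Hurst exponent 0.5, so the fitted slope is near 0.
print(round(wavelet_slope(white), 2))
```

Swapping in a different mother wavelet changes only the filter pair used at each level; the paper's contribution is showing which choice keeps this estimate accurate for short, physiologically realistic stride series.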
Directory of Open Access Journals (Sweden)
Martin Buchheit, Andrew Gray, Jean-Benoit Morin
2015-12-01
Full Text Available The aim of the present study was to examine the ability of a GPS-embedded accelerometer to assess stride variables and vertical stiffness (K), which are directly related to neuromuscular fatigue during field-based high-intensity runs. The ability to detect stride imbalances was also examined. A team sport player performed a series of 30-s runs on an instrumented treadmill (6 runs at 10, 17 and 24 km·h-1), with or without his right ankle taped (aimed at creating a stride imbalance), while wearing on his back a commercially-available GPS unit with an embedded 100-Hz tri-axial accelerometer. Contact (CT) and flying (FT) time, and K were computed from both treadmill and accelerometer (Athletic Data Innovations) data. The agreement between treadmill (criterion measure) and accelerometer-derived data was examined. We also compared the ability of the different systems to detect the stride imbalance. Biases were small (CT and K) and moderate (FT). The typical error of the estimate was trivial (CT), small (K) and moderate (FT), with nearly perfect (CT and K) and large (FT) correlations for treadmill vs. accelerometer. The tape induced a very large increase in the right-left foot ∆ in CT, FT and K measured by the treadmill. The tape effects on the CT and K ∆ measured with the accelerometer were also very large, but of lower magnitude than with the treadmill. The tape effect on accelerometer-derived ∆ FT was unclear. Present data highlight the potential of a GPS-embedded accelerometer to assess CT and K during ground running.
Buchheit, Martin; Gray, Andrew; Morin, Jean-Benoit
2015-12-01
The aim of the present study was to examine the ability of a GPS-embedded accelerometer to assess stride variables and vertical stiffness (K), which are directly related to neuromuscular fatigue during field-based high-intensity runs. The ability to detect stride imbalances was also examined. A team sport player performed a series of 30-s runs on an instrumented treadmill (6 runs at 10, 17 and 24 km·h(-1)) with or without his right ankle taped (aimed at creating a stride imbalance), while wearing on his back a commercially-available GPS unit with an embedded 100-Hz tri-axial accelerometer. Contact (CT) and flying (FT) time, and K were computed from both treadmill and accelerometer (Athletic Data Innovations) data. The agreement between treadmill (criterion measure) and accelerometer-derived data was examined. We also compared the ability of the different systems to detect the stride imbalance. Biases were small (CT and K) and moderate (FT). The typical error of the estimate was trivial (CT), small (K) and moderate (FT), with nearly perfect (CT and K) and large (FT) correlations for treadmill vs. accelerometer. The tape induced a very large increase in the right-left foot ∆ in CT, FT and K measured by the treadmill. The tape effects on the CT and K ∆ measured with the accelerometer were also very large, but of lower magnitude than with the treadmill. The tape effect on accelerometer-derived ∆ FT was unclear. Present data highlight the potential of a GPS-embedded accelerometer to assess CT and K during ground running. Key points: GPS-embedded tri-axial accelerometers may be used to assess contact time and vertical stiffness during ground running. These preliminary results open new perspectives for the field monitoring of neuromuscular fatigue and performance in run-based sports.
Leg heating and cooling influences running stride parameters but not running economy.
Folland, J P; Rowlands, D S; Thorp, R; Walmsley, A
2006-10-01
To evaluate the effect of temperature on running economy (RE) and stride parameters in 10 trained male runners (VO2peak 60.8 +/- 6.8 ml·kg-1·min-1), we used water immersion as a passive temperature manipulation to contrast localised pre-heating, pre-cooling, and thermoneutral interventions prior to running. Runners completed three 10-min treadmill runs at 70% VO2peak following 40 min of randomised leg immersion in water at 21.0 °C (cold), 34.6 °C (thermoneutral), or 41.8 °C (hot). Treadmill runs were separated by 7 days. External respiratory gas exchange was measured for 30 s before and throughout the exercise, and stride parameters were determined from video analysis in the sagittal plane. RE was not affected by prior heating or cooling, with no difference in oxygen cost or energy expenditure between the temperature interventions (average VO2 3rd-10th min of exercise: C, 41.6 +/- 3.4 ml·kg-1·min-1; TN, 41.6 +/- 3.0; H, 41.8 +/- 3.5; p = 0.94). Exercise heart rate was affected by temperature (H > TN > C; p < 0.05), and the respiratory exchange ratio and minute ventilation/oxygen consumption ratio were greater in cold compared with thermoneutral (p < 0.05). In conclusion, prior leg heating and cooling did not influence running economy despite changes in stride parameters that might indicate restricted muscle-tendon elasticity after pre-cooling. Larger changes in stride mechanics than those produced by the current temperature intervention are required to influence running economy.
Stride Leg Ground Reaction Forces Predict Throwing Velocity in Adult Recreational Baseball Pitchers.
McNally, Michael P; Borstad, John D; Oñate, James A; Chaudhari, Ajit M W
2015-10-01
Ground reaction forces produced during baseball pitching have a significant impact in the development of ball velocity. However, the measurement of only one leg and small sample sizes in these studies curb the understanding of ground reaction forces as they relate to pitching. This study aimed to further clarify the role ground reaction forces play in developing pitching velocity. Eighteen former competitive baseball players with previous high school or collegiate pitching experience threw 15 fastballs from a pitcher's mound instrumented to measure ground reaction forces under both the drive and stride legs. Peak ground reaction forces were recorded during each phase of the pitching cycle, between peak knee height and ball release, in the medial/lateral, anterior/posterior, and vertical directions, and the peak resultant ground reaction force. Stride leg ground reaction forces during the arm-cocking and arm-acceleration phases were strongly correlated with ball velocity (r2 = 0.45-0.61), whereas drive leg ground reaction forces showed no significant correlations. Stepwise linear regression analysis found that peak stride leg ground reaction force during the arm-cocking phase was the best predictor of ball velocity (r2 = 0.61) among drive and stride leg ground reaction forces. This study demonstrates the importance of ground reaction force development in pitching, with stride leg forces being strongly predictive of ball velocity. Further research is needed to further clarify the role of ground reaction forces in pitching and to develop training programs designed to improve upper extremity mechanics and pitching performance through effective force development.
Dingwell, Jonathan B; Salinas, Mandy M; Cusumano, Joseph P
2017-06-01
Older adults exhibit increased gait variability that is associated with fall history and predicts future falls. It is not known to what extent this increased variability results from increased physiological noise versus a decreased ability to regulate walking movements. To "walk", a person must move a finite distance in finite time, making stride length (Ln) and time (Tn) the fundamental stride variables to define forward walking. Multiple age-related physiological changes increase neuromotor noise, increasing gait variability. If older adults also alter how they regulate their stride variables, this could further exacerbate that variability. We previously developed a Goal Equivalent Manifold (GEM) computational framework specifically to separate these causes of variability. Here, we apply this framework to identify how both young and high-functioning healthy older adults regulate stepping from each stride to the next. Healthy older adults exhibited increased gait variability, independent of walking speed. However, despite this, these healthy older adults also concurrently exhibited no differences (all p>0.50) from young adults either in how their stride variability was distributed relative to the GEM or in how they regulated, from stride to stride, either their basic stepping variables or deviations relative to the GEM. Using a validated computational model, we found these experimental findings were consistent with increased gait variability arising solely from increased neuromotor noise, and not from changes in stride-to-stride control. Thus, age-related increased gait variability likely precedes impaired stepping control. This suggests these changes may in turn precede increased fall risk. Copyright © 2017 Elsevier B.V. All rights reserved.
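The GEM decomposition for constant-speed walking can be illustrated with synthetic data. Strides with length Ln and time Tn satisfying Ln = v*·Tn lie on the constant-speed manifold; deviations split into a goal-equivalent (tangent) and a goal-relevant (normal) component. All numbers below are assumed for illustration; this is not the authors' pipeline:

```python
import numpy as np

# GEM sketch: project stride-to-stride deviations in the (T, L) plane onto
# directions tangent and normal to the constant-speed line L = v* T.

v_star = 1.25                      # target walking speed, m/s (assumed)
T0, L0 = 1.1, v_star * 1.1         # operating point on the GEM

rng = np.random.default_rng(2)
T = T0 + 0.03 * rng.standard_normal(500)            # noisy stride times
L = v_star * T + 0.01 * rng.standard_normal(500)    # lengths near the GEM

# Unit vectors tangent and normal to the GEM line in the (T, L) plane
norm = np.hypot(1.0, v_star)
tangent = np.array([1.0, v_star]) / norm
normal = np.array([-v_star, 1.0]) / norm

dev = np.column_stack([T - T0, L - L0])
delta_T = dev @ tangent            # goal-equivalent deviations (tolerated)
delta_P = dev @ normal             # goal-relevant deviations (speed errors)

# Variability is anisotropic: far more spread along the GEM than across it,
# the signature of a controller that corrects only speed errors.
print(np.var(delta_T) > np.var(delta_P))
```

Separating these two components is what lets the framework distinguish "more noise" (both components inflate) from "worse control" (goal-relevant deviations are corrected more slowly).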
Jarrell, Andrea
2013-01-01
In the mad dash to complete the plethora of projects that lead up to the public launch of a campaign, it would be easy to start thinking of the kickoff as a goal in itself, but it's merely a mile marker in the marathon of a fundraising campaign that may last five to 10 years. Given that only a fraction of an institution's constituents may attend a…
Yoo, Won-Gyu
2015-08-01
[Purpose] We developed socks which improve foot sensation and investigated their effect on the velocity and stride length of elderly women crossing obstacles. [Subjects] Ten community-dwelling, elderly women who could walk independently were recruited. [Methods] We measured velocity and stride length using the GAITRite system while the participants crossed obstacles under three conditions: barefoot, wearing ordinary socks, and wearing the socks which improve foot sensation. [Results] Velocity and stride length in bare feet and when wearing the sense-improving socks increased significantly compared to their values when wearing standard socks. Velocity and stride length did not differ between the bare foot and improved sock conditions. [Conclusion] Wearing socks helps protect the foot, but can decrease foot sensory input. Therefore, the socks which improve foot sensation were useful for preventing falls and protecting the feet of the elderly women while they crossed obstacles.
Directory of Open Access Journals (Sweden)
Jens Barth
2015-03-01
Full Text Available Changes in gait patterns provide important information about individuals’ health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measures of 98% (recall 98%, precision 98%) for 40 m walk tests and 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods, up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living.
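The core of subsequence DTW, matching a stride template anywhere inside a longer signal with free start and end points, can be sketched in one dimension. The real method is multi-dimensional and uses a template learned from 25 controls; the toy signals and the plain O(nm) dynamic program below are illustrative:

```python
import numpy as np

# 1-D subsequence DTW sketch: the first DP row is zero so the match may
# start anywhere in the signal; the minimum of the last row picks the
# cheapest end point. Returns the match cost and its end index.

def subsequence_dtw(template, signal):
    n, m = len(template), len(signal)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                       # free start anywhere in the signal
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - signal[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:]))      # free end: best column of last row
    return D[n, 1 + end], end

template = np.sin(np.linspace(0, 2 * np.pi, 30))   # one "stride" cycle
signal = np.concatenate([np.zeros(50),
                         np.sin(np.linspace(0, 2 * np.pi, 35)),  # warped copy
                         np.zeros(50)])
cost, end = subsequence_dtw(template, signal)
flat_cost, _ = subsequence_dtw(template, np.zeros_like(signal))
# The time-warped cycle matches far better than a flat signal does.
print(cost < flat_cost)
```

In the full algorithm, every low-cost match of the template in the continuous sensor stream is reported as one segmented stride, which is why time warping, tolerating slower or faster strides, beats fixed-width peak detection.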
Esteve-Lanao, Jonathan; Rhea, Matthew R; Fleck, Steven J; Lucia, Alejandro
2008-07-01
The purpose of this study was to determine the effects of a running-specific, periodized strength training program (performed over the specific period [8 weeks] of a 16-week macrocycle) on endurance-trained runners' capacity to maintain stride length during running bouts at competitive speeds. Eighteen well-trained middle-distance runners completed the study (personal bests for 1500 and 5000 m of 3 minutes 57 seconds +/- 12 seconds and 15 minutes 24 seconds +/- 36 seconds). They were randomly assigned to each of the following groups (6 per group): periodized strength group, performing a periodized strength training program over the 8-week specific (intervention) period (2 sessions per week); nonperiodized strength group, performing the same strength training exercises as the periodized group over the specific period but with no week-to-week variations; and a control group, performing no strength training at all during the specific period. The percentage of loss in the stride length (cm)/speed (m·s-1) (SLS) ratio was measured by comparing the mean SLS during the first and third (last) group of the total repetitions, respectively, included in each of the interval training sessions performed at race speeds during the competition period that followed the specific period. Significant differences (p < 0.05) favoured the periodized strength group, indicating that a running-specific, periodized strength training program helps maintain stride length in endurance runners during fatiguing running bouts.
Effect of treadmill versus overground running on the structure of variability of stride timing.
Lindsay, Timothy R; Noakes, Timothy D; McGregor, Stephen J
2014-04-01
Gait timing dynamics of treadmill and overground running were compared. Nine trained runners ran treadmill and track trials at 80, 100, and 120% of preferred pace for 8 min each. Stride time series were generated for each trial. To each series, detrended fluctuation analysis (DFA), power spectral density (PSD), and multiscale entropy (MSE) analysis were applied to infer the regime of control along the randomness-regularity axis. Compared to overground running, treadmill running exhibited higher DFA and PSD scaling exponents, as well as lower entropy at non-preferred speeds. This indicates more ordered control during treadmill running, especially at non-preferred speeds. The results suggest that the treadmill itself imposes greater constraints and requires increased voluntary control. Thus, the quantification of treadmill running gait dynamics does not necessarily reflect movement in overground settings.
Directory of Open Access Journals (Sweden)
Rebecca K MacAulay
2015-03-01
Full Text Available Gait abnormalities are linked to cognitive decline and an increased fall risk within older adults. The present study addressed gaps from cross-sectional studies in the literature by longitudinally examining the interplay between temporal and spatial aspects of gait, cognitive function, age, and lower-extremity strength in elderly fallers and non-fallers. Gait characteristics, neuropsychological and physical test performance were examined at two time points spaced a year apart in cognitively intact individuals aged 60 and older (N = 416). Mixed-model repeated-measures ANCOVAs examined temporal (step time) and spatial (stride length) gait characteristics during a simple and cognitive-load walking task in fallers as compared to non-fallers. Fallers consistently demonstrated significant alterations in spatial, but not temporal, aspects of gait as compared to non-fallers during both walking tasks. Step time became slower as stride length shortened amongst all participants during the dual task. Shorter strides and slower step times during the dual task were both predicted by worse executive attention/processing speed performance. In summary, divided attention significantly impacts spatial aspects of gait in fallers, suggesting stride length changes may precede declines in other neuropsychological and gait characteristics, thereby selectively increasing fall risk. Our results indicate that multimodal intervention approaches that integrate physical and cognitive remediation strategies may increase the effectiveness of fall risk interventions.
Directory of Open Access Journals (Sweden)
Meihong Wu
2016-01-01
Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values decrease significantly (Mann-Whitney U test, p<0.01) in children of 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two groups of young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children’s gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
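Sample entropy itself is short enough to sketch directly. The parameters m = 2 and r = 0.2·SD below are the conventional defaults, assumed here since the abstract does not state the authors' choices:

```python
import numpy as np

# Plain O(N^2) SampEn sketch: count template matches of length m and m+1
# (Chebyshev distance < r, self-matches excluded) and return -log(A/B).
# Lower SampEn means a more regular, more predictable series.

def sampen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(mm):
        # Embed the series into overlapping vectors of length mm
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        count = 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(d < r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(3)
regular = np.sin(np.arange(500) * 0.5)            # highly regular series
noisy = regular + 0.5 * rng.standard_normal(500)  # same rhythm plus noise
# A more irregular series yields higher SampEn; the study's finding is
# that children's values fall (gait becomes more regular) with age.
print(sampen(regular) < sampen(noisy))
```

A caveat worth keeping in mind when reading such results: SampEn depends on series length and on the choice of r, which is why the paper also reports normalized variants.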
Is walking a random walk? Evidence for long-range correlations in stride interval of human gait
Hausdorff, Jeffrey M.; Peng, C.-K.; Ladin, Zvi; Wei, Jeanne Y.; Goldberger, Ary L.
1995-01-01
Complex fluctuations of unknown origin appear in the normal gait pattern. These fluctuations might be described as being (1) uncorrelated white noise, (2) short-range correlations, or (3) long-range correlations with power-law scaling. To test these possibilities, the stride interval of 10 healthy young men was measured as they walked for 9 min at their usual rate. From these time series we calculated scaling indexes by using a modified random walk analysis and power spectral analysis. Both indexes indicated the presence of long-range self-similar correlations extending over hundreds of steps; the stride interval at any time depended on the stride interval at remote previous times, and this dependence decayed in a scale-free (fractallike) power-law fashion. These scaling indexes were significantly different from those obtained after random shuffling of the original time series, indicating the importance of the sequential ordering of the stride interval. We demonstrate that conventional models of gait generation fail to reproduce the observed scaling behavior and introduce a new type of central pattern generator model that successfully accounts for the experimentally observed long-range correlations.
Spratford, Wayne; Hicks, Amy
2014-01-01
The purpose of this study was to investigate the effect stride length has on ankle biomechanics of the leading leg with reference to the potential risk of injury in cricket fast bowlers. Ankle joint kinematic and kinetic data were collected from 51 male fast bowlers during the stance phase of the final delivery stride. The bowling cohort comprised national under-19, first class and international-level athletes. Bowlers were placed into either Short, Average or Long groups based on final stride length, allowing statistical differences to be measured. A multivariate analysis of variance with a Bonferroni post-hoc correction (α = 0.05) revealed significant differences between peak plantarflexion angles (Short vs. Long, P = 0.005; Average vs. Long, P = 0.04) and negative joint work (Average vs. Long, P = 0.026). This study highlighted that during fast bowling the ankle joint of the leading leg experiences high forces under wide ranges of movement. As stride length increases, greater amounts of negative work and plantarflexion are experienced. These increases place greater loads on the ankle joint and move the foot into positions that make it more susceptible to injuries such as posterior impingement syndrome.
Delextrat, A; Baliqi, F; Clarke, N
2013-04-01
The aim of the study was to investigate the effects of playing an official national-level basketball match on repeated sprint ability (RSA) and stride kinematics. Nine male starting basketball players (22.8±2.2 years old, 191.3±5.8 cm, 88±10.3 kg, 12.3±4.6% body fat) volunteered to take part. Six repetitions of maximal 4-s sprints were performed on a non-motorised treadmill, separated by 21-s of passive recovery, before and immediately after playing an official match. Fluid loss, playing time, and the frequencies of the main match activities were recorded. The peak, mean, and performance decrement for average and maximal speed, acceleration, power, vertical and horizontal forces, and stride parameters were calculated over the six sprints. Differences between pre- and post-match were assessed by Student's t-tests. Significant differences between pre- and post-tests were observed in mean speed (-3.3%), peak and mean horizontal forces (-4.3% and -17.4%), peak and mean vertical forces (-3.4% and -3.7%), contact time (+7.3%), stride duration (+4.6%) and stride frequency (-4.0%) (P<0.05). The decrements in vertical force were significantly correlated with fluid loss and with sprint, jump and shuffle frequencies (P<0.05). These results highlight that the impairment in repeated sprint ability depends on the specific activities performed, and that replacing fluid loss through sweating during a match is crucial.
Serra Braganca, Filipe; Bosch, S; Voskamp, J P; Marin-Perianu, M; Van der Zwaag, B J; Vernooij, J C M; van Weeren, P R; Back, W
BACKGROUND: Inertial-measurement-unit (IMU)-sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment. OBJECTIVES: To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU-sensors.
Serra Braganca, F.M.; Vernooij, J.C.M.; René van Weeren, P.; Back, Wim
Reasons for performing study: IMU-sensor based techniques are becoming more popular in horses as a tool for objective locomotor assessment. Using currently proposed methods, only limited information about stride variables can be obtained for walk and trot. Objectives: To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU sensors.
Braganca, F.M.; Bosch, S.; Voskamp, J.P.; Marin Perianu, Mihai; van der Zwaag, B.J.; Vernooij, J.C.; van Weeren, P.R.; Back, W.
2016-01-01
Background: Inertial measurement unit (IMU) sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment. Objectives: To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU sensors.
Estimation of Spatial-Temporal Gait Parameters Using a Low-Cost Ultrasonic Motion Analysis System
Directory of Open Access Journals (Sweden)
Yongbin Qi
2014-08-01
Full Text Available In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of spherical positioning technique and unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system with average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
Estimation of spatial-temporal gait parameters using a low-cost ultrasonic motion analysis system.
Qi, Yongbin; Soh, Cheong Boon; Gunawan, Erry; Low, Kay-Soon; Thomas, Rijil
2014-08-20
In this paper, a low-cost motion analysis system using a wireless ultrasonic sensor network is proposed and investigated. A methodology has been developed to extract spatial-temporal gait parameters including stride length, stride duration, stride velocity, stride cadence, and stride symmetry from 3D foot displacements estimated by the combination of a spherical positioning technique and an unscented Kalman filter. The performance of this system is validated against a camera-based system in the laboratory with 10 healthy volunteers. Numerical results show the feasibility of the proposed system with an average error of 2.7% for all the estimated gait parameters. The influence of walking speed on the measurement accuracy of the proposed system is also evaluated. Statistical analysis demonstrates its capability of being used as a gait assessment tool for some medical applications.
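As a rough sketch of the parameter-extraction step described above (function and input names are illustrative; the paper's own pipeline obtains the 3D foot displacements via spherical positioning and an unscented Kalman filter), spatial-temporal gait parameters can be derived from successive heel-strike positions and times of one foot:

```python
import numpy as np

def gait_parameters(strike_pos, strike_t):
    """Spatial-temporal gait parameters from successive heel-strike
    positions (N, 3) and times (N,) of one foot.

    strike_pos / strike_t are hypothetical inputs standing in for the
    3D foot-displacement estimates produced by the tracking system.
    """
    steps = np.diff(np.asarray(strike_pos, dtype=float), axis=0)
    stride_len = np.linalg.norm(steps, axis=1)               # m per stride
    stride_dur = np.diff(np.asarray(strike_t, dtype=float))  # s per stride
    return {
        "stride_length_m": stride_len,
        "stride_duration_s": stride_dur,
        "stride_velocity_m_s": stride_len / stride_dur,
        "cadence_strides_min": 60.0 / stride_dur,
    }
```

Stride symmetry would additionally compare left- and right-foot strides; it is omitted here for brevity.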
Green, Carla A; Yarborough, Bobbi Jo H; Leo, Michael C; Yarborough, Micah T; Stumbo, Scott P; Janoff, Shannon L; Perrin, Nancy A; Nichols, Greg A; Stevens, Victor J
2015-01-01
The STRIDE study assessed whether a lifestyle intervention, tailored for individuals with serious mental illnesses, reduced weight and diabetes risk. The authors hypothesized that the STRIDE intervention would be more effective than usual care in reducing weight and improving glucose metabolism. The study design was a multisite, parallel two-arm randomized controlled trial in community settings and an integrated health plan. Participants who met inclusion criteria were ≥18 years old, were taking antipsychotic agents for ≥30 days, and had a body mass index ≥27. Exclusions were significant cognitive impairment, pregnancy/breastfeeding, recent psychiatric hospitalization, bariatric surgery, cancer, heart attack, or stroke. The intervention emphasized moderate caloric reduction, the DASH (Dietary Approaches to Stop Hypertension) diet, and physical activity. Blinded staff collected data at baseline, 6 months, and 12 months. Participants (men, N=56; women, N=144; mean age=47.2 years [SD=10.6]) were randomly assigned to usual care (N=96) or a 6-month weekly group intervention plus six monthly maintenance sessions (N=104). A total of 181 participants (90.5%) completed 6-month assessments, and 170 (85%) completed 12-month assessments, without differential attrition. Participants attended 14.5 of 24 sessions over 6 months. Intent-to-treat analyses revealed that intervention participants lost 4.4 kg more than control participants from baseline to 6 months (95% CI=-6.96 kg to -1.78 kg) and 2.6 kg more than control participants from baseline to 12 months (95% CI=-5.14 kg to -0.07 kg). At 12 months, fasting glucose levels in the control group had increased from 106.0 mg/dL to 109.5 mg/dL and decreased in the intervention group from 106.3 mg/dL to 100.4 mg/dL. No serious adverse events were study-related; medical hospitalizations were reduced in the intervention group (6.7%) compared with the control group (18.8%). Individuals taking antipsychotic medications can lose
Directory of Open Access Journals (Sweden)
Morris David W
2011-09-01
Full Text Available Abstract Background There is a need for novel approaches to the treatment of stimulant abuse and dependence. Clinical data examining the use of exercise as a treatment for the abuse of nicotine, alcohol, and other substances suggest that exercise may be a beneficial treatment for stimulant abuse, with direct effects on decreased use and craving. In addition, exercise has the potential to improve other health domains that may be adversely affected by stimulant use or its treatment, such as sleep disturbance, cognitive function, mood, weight gain, quality of life, and anhedonia, since it has been shown to improve many of these domains in a number of other clinical disorders. Furthermore, neurobiological evidence provides plausible mechanisms by which exercise could positively affect treatment outcomes. The current manuscript presents the rationale, design considerations, and study design of the National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) CTN-0037 Stimulant Reduction Intervention using Dosed Exercise (STRIDE) study. Methods/Design STRIDE is a multisite randomized clinical trial that compares exercise to health education as potential treatments for stimulant abuse or dependence. This study will evaluate individuals diagnosed with stimulant abuse or dependence who are receiving treatment in a residential setting. Three hundred and thirty eligible and interested participants who provide informed consent will be randomized to one of two treatment arms: Vigorous Intensity High Dose Exercise Augmentation (DEI) or Health Education Intervention Augmentation (HEI). Both groups will receive TAU (i.e., usual care). The treatment arms are structured such that the quantity of visits is similar to allow for equivalent contact between groups. In both arms, participants will begin with supervised sessions 3 times per week during the 12-week acute phase of the study. Supervised sessions will be conducted as one-on-one (i.e., individual) sessions
Water Safety, Disinfection and Sanitation in Industry
Directory of Open Access Journals (Sweden)
Ayla ÜNVER
2011-01-01
Full Text Available In developing countries, millions of people die and billions more fall ill every year from waterborne diseases. With the growth of the world's population, the need for drinking and utility water has increased rapidly. In addition, environmental pollution has contaminated water sources. Water is one of the most widely used raw materials in industry. Water quality standards are set by national governments and by international standards. Water purification is the process of removing undesirable chemicals, other materials and biological contaminants from water. Water purification, safe distribution systems and water disinfection processes are essential requirements for medical, food-sector, chemical and industrial applications.
Lieberman, Daniel E; Warrener, Anna G; Wang, Justin; Castillo, Eric R
2015-11-01
Endurance runners are often advised to use 90 strides min(-1), but how optimal is this stride frequency and why? Endurance runners are also often advised to maintain short strides and avoid landing with the feet too far in front of their hips or knees (colloquially termed 'overstriding'), but how do different kinematic strategies for varying stride length at the same stride frequency affect economy and impact peaks? Linear mixed models were used to analyze repeated measures of stride frequency, the anteroposterior position of the foot at landing, V̇O2, lower extremity kinematics and vertical ground reaction forces in 14 runners who varied substantially in height and body mass and who were asked to run at 75, 80, 85, 90 and 95 strides min(-1) at 3.0 m s(-1). For every increase of 5 strides min(-1), maximum hip flexor moments in the sagittal plane increased by 5.8%; braking forces were associated with increases in foot landing position relative to the hip (P=0.0005) but not the knee (P=0.54); increases in foot landing position relative to the knee were associated with higher magnitudes of braking forces versus maximum hip flexor moments during swing. The results suggest that runners may benefit from a stride frequency of approximately 85 strides min(-1) and by landing at the end of swing phase with a relatively vertical tibia. © 2015. Published by The Company of Biologists Ltd.
Estimating the Modified Allan Variance
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
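The third-difference estimator discussed above can be sketched for phase data as follows (a minimal illustration with unit estimation stride and illustrative variable names, not Greenhall's edf analysis itself):

```python
import numpy as np

def mod_avar(x, m, tau0=1.0):
    """Modified Allan variance at averaging factor m from phase data x.

    Standard third-difference (phase) estimator:
    Mod sigma_y^2(m*tau0) =
        mean_j [ sum_{i=j}^{j+m-1} (x[i+2m] - 2 x[i+m] + x[i]) ]^2
        / (2 m^4 tau0^2)
    """
    x = np.asarray(x, dtype=float)
    N = x.size
    if N < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]     # second differences at lag m
    s = np.convolve(d2, np.ones(m), mode="valid")   # inner moving sum of length m
    return np.mean(s ** 2) / (2.0 * m ** 2 * (m * tau0) ** 2)
```

For a pure frequency offset (a linear phase ramp) the second differences vanish, so the estimator correctly returns zero.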
Cable, Ritchard G; Birch, Rebecca J; Spencer, Bryan R; Wright, David J; Bialkowski, Walter; Kiss, Joseph E; Rios, Jorge; Bryant, Barbara J; Mast, Alan E
2017-10-01
Donor behaviors in STRIDE (Strategies to Reduce Iron Deficiency), a trial to reduce iron deficiency, were examined. Six hundred ninety-two frequent donors were randomized to receive either 19 or 38 mg iron for 60 days or an educational letter based on their predonation ferritin. Compliance with assigned pills, response to written recommendations, change in donation frequency, and future willingness to take iron supplements were examined. Donors who were randomized to receive iron pills had increased red blood cell donations and decreased hemoglobin deferrals compared with controls or with pre-STRIDE donations. Donors who were randomized to receive educational letters had fewer hemoglobin deferrals compared with controls. Of those who received a letter advising of low ferritin levels with recommendations to take iron supplements or delay future donations, 57% reported that they initiated iron supplementation, which was five times as many as those who received letters lacking a specific recommendation. The proportion reporting delayed donation was not statistically different (32% vs. 20%). Of donors who were assigned pills, 58% reported taking them "frequently," and forgetting was the primary reason for non-compliance. Approximately 80% of participants indicated that they would take iron supplements if provided by the center. Donors who were assigned iron pills had acceptable compliance, producing increased red blood cell donations and decreased low hemoglobin deferrals compared with controls or with pre-STRIDE rates. The majority of donors assigned to an educational letter took action after receiving a low ferritin result, with more donors choosing to take iron than delay donation. Providing donors with information on iron status with personalized recommendations was an effective alternative to directly providing iron supplements. © 2017 AABB.
Nascimento, Lucas R; de Oliveira, Camila Quel; Ada, Louise; Michaelsen, Stella M; Teixeira-Salmela, Luci F
2015-01-01
After stroke, is walking training with cueing of cadence superior to walking training alone in improving walking speed, stride length, cadence and symmetry? Systematic review with meta-analysis of randomised or controlled trials. Adults who have had a stroke. Walking training with cueing of cadence. Four walking outcomes were of interest: walking speed, stride length, cadence and symmetry. This review included seven trials involving 211 participants. Because one trial caused substantial statistical heterogeneity, meta-analyses were conducted with and without this trial. Walking training with cueing of cadence improved walking speed by 0.23 m/s (95% CI 0.18 to 0.27, I(2)=0%), stride length by 0.21 m (95% CI 0.14 to 0.28, I(2)=18%), cadence by 19 steps/minute (95% CI 14 to 23, I(2)=40%), and symmetry by 15% (95% CI 3 to 26, random effects) more than walking training alone. This review provides evidence that walking training with cueing of cadence improves walking speed and stride length more than walking training alone. It may also produce benefits in terms of cadence and symmetry of walking. The evidence appears strong enough to recommend the addition of 30 minutes of cueing of cadence to walking training, four times a week for 4 weeks, in order to improve walking in moderately disabled individuals with stroke. PROSPERO (CRD42013005873). Copyright © 2014 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
Bragança, F M; Bosch, S; Voskamp, J P; Marin-Perianu, M; Van der Zwaag, B J; Vernooij, J C M; van Weeren, P R; Back, W
2017-07-01
Inertial measurement unit (IMU) sensor-based techniques are becoming more popular in horses as a tool for objective locomotor assessment. To describe, evaluate and validate a method of stride detection and quantification at walk and trot using distal limb mounted IMU sensors. Prospective validation study comparing IMU sensors and motion capture with force plate data. A total of seven Warmblood horses equipped with metacarpal/metatarsal IMU sensors and reflective markers for motion capture were hand walked and trotted over a force plate. Using four custom-built algorithms, hoof-on/hoof-off timings over the force plate were calculated for each trial from the IMU data. Accuracy of the computed parameters was calculated as the mean difference in milliseconds between the IMU or motion capture generated data and the data from the force plate, precision as the s.d. of these differences, and percentage of error as the accuracy of the calculated parameter expressed as a percentage of the force plate stance duration. Accuracy, precision and percentage of error of the best performing IMU algorithm for stance duration at walk were 28.5 ms, 31.6 ms and 3.7% for the forelimbs and -5.5 ms, 20.1 ms and -0.8% for the hindlimbs, respectively. At trot the best performing algorithm achieved accuracy, precision and percentage of error of -27.6 ms/8.8 ms/-8.4% for the forelimbs and 6.3 ms/33.5 ms/9.1% for the hindlimbs. The described algorithms have not been assessed on different surfaces. Inertial measurement unit technology can be used to determine temporal kinematic stride variables at walk and trot, justifying its use in gait and performance analysis. However, precision of the method may not be sufficient to detect all possible lameness-related changes. These data seem promising enough to warrant further research to evaluate whether this approach will be useful for appraising the majority of clinically relevant gait changes encountered in practice. © 2016 The Authors. Equine Veterinary Journal published by
Directory of Open Access Journals (Sweden)
Christian Mitschke
2018-01-01
Full Text Available Previous studies have used accelerometers with various operating ranges (ORs when measuring biomechanical parameters. However, it is still unclear whether ORs influence the accuracy of running parameters, and whether the different stiffnesses of footwear midsoles influence this accuracy. The purpose of the present study was to systematically investigate the influence of OR on the accuracy of stride length, running velocity, and on peak tibial acceleration. Twenty-one recreational heel strike runners ran on a 15-m indoor track at self-selected running speeds in three footwear conditions (low to high midsole stiffness. Runners were equipped with an inertial measurement unit (IMU affixed to the heel cup of the right shoe and with a uniaxial accelerometer at the right tibia. Accelerometers (at the tibia and included in the IMU with a high OR of ±70 g were used as the reference and the data were cut at ±32, ±16, and at ±8 g in post-processing, before calculating parameters. The results show that the OR influenced the outcomes of all investigated parameters, which were not influenced by tested footwear conditions. The lower ORs were associated with an underestimation error for all biomechanical parameters, which increased noticeably with a decreasing OR. It can be concluded that accelerometers with a minimum OR of ±32 g should be used to avoid inaccurate measurements.
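The post-processing described above, cutting a ±70 g reference recording at ±32, ±16 and ±8 g, amounts to saturating the trace at the smaller full scale. A minimal sketch (the impact pulse and its ~45 g peak are synthetic, illustrative values, not the study's data):

```python
import numpy as np

# Synthetic heel-strike impact transient peaking at 45 g (illustrative only)
t = np.linspace(0.0, 0.1, 201)
impact = 45.0 * np.exp(-((t - 0.05) / 0.01) ** 2)

def peak_at_operating_range(signal_g, or_g):
    """Peak acceleration after saturating the trace to +/- or_g,
    mimicking a sensor whose operating range is +/- or_g."""
    return float(np.clip(signal_g, -or_g, or_g).max())

# Peaks recovered at each simulated operating range: the +/-70 g
# sensor captures the full 45 g spike; lower ORs clip it.
peaks = {org: peak_at_operating_range(impact, org) for org in (70, 32, 16, 8)}
```

This reproduces the qualitative finding that a too-small operating range systematically underestimates peak accelerations.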
Otsuki, Risa; Matsumoto, Hiromi; Ueki, Masaru; Uehara, Kazutake; Nozawa, Nobuko; Osaki, Mari; Hagino, Hiroshi
2016-01-01
[Purpose] The aim of this study was to clarify the effects of an automated stride assistance device on gait parameters and energy cost during walking performed by healthy middle-aged and young females. [Subjects and Methods] Ten middle-aged females and 10 young females were recruited as case and control participants, respectively. The participants walked for 3 minutes continuously under two different experimental conditions: with the device and without the device. Walking distance, mean walking speed, mean step length, cadence, walk ratio and the physiological cost index during the 3-minute walk were measured. [Results] When walking with the stride assistance device, the step length and walk ratio of the middle-aged group were significantly higher than without it. Also, during walking without assistance from the device, the physiological cost index of the middle-aged group significantly increased; whereas during walking with assistance, there was no change. The intergroup comparison in the middle-aged group showed the physiological cost index was lower under the experimental condition with assistance provided, as opposed to the condition without the provision of assistance. [Conclusion] The results of this study show that the stride assistance device improved the gait parameters of the middle-aged group but not those of young controls. PMID:28174452
Burnfield, Judith M; Buster, Thad W; Goldman, Amy J; Corbridge, Laura M; Harper-Hanigan, Kellee
2016-06-01
Intensive task-specific training is promoted as one approach for facilitating neural plastic brain changes and associated motor behavior gains following neurologic injury. Partial body weight support treadmill training (PBWSTT) is one task-specific approach frequently used to improve walking during the acute period of stroke recovery, yet little is known about training parameters and physiologic demands during this early recovery phase. The aim was to examine the impact of four walking speeds on stride characteristics, lower extremity muscle demands (both paretic and non-paretic), Borg ratings of perceived exertion (RPE), and blood pressure. A prospective, repeated measures design was used. Ten inpatients post unilateral stroke participated. Following three familiarization sessions, participants engaged in PBWSTT at four predetermined speeds (0.5, 1.0, 1.5 and 2.0 mph) while bilateral electromyographic and stride characteristic data were recorded. RPE was evaluated immediately following each trial. Stride length, cadence, and paretic single limb support increased with faster walking speeds (p⩽0.001), while non-paretic single limb support remained nearly constant. Faster walking resulted in greater peak and mean muscle activation in the paretic medial hamstrings, vastus lateralis and medial gastrocnemius, and non-paretic medial gastrocnemius (p⩽0.001). RPE also was greatest at the fastest compared with the two slowest speeds. Copyright © 2016 Elsevier B.V. All rights reserved.
Estimating Stair Running Performance Using Inertial Sensors
Directory of Open Access Journals (Sweden)
Lauro V. Ojeda
2017-11-01
Full Text Available Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs as a solution as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.
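The key idea above, exploiting the stationary stance periods to correct integration drift (a zero-velocity update, or ZUPT), can be sketched in one dimension. This is a simplified assumption-laden illustration, not the authors' full 3D trajectory estimator:

```python
import numpy as np

def zupt_velocity(acc, dt, stance):
    """Drift-corrected foot velocity from world-frame acceleration
    (gravity already removed) using zero-velocity updates.

    acc    : (N,) acceleration along one axis, m/s^2
    dt     : sample interval, s
    stance : (N,) bool, True while the foot is stationary on the step
    """
    v = np.cumsum(acc) * dt                      # naive integration drifts
    idx = np.flatnonzero(stance)
    # True velocity is zero at stance samples; subtract the drift,
    # linearly interpolated between stance periods.
    drift = np.interp(np.arange(acc.size), idx, v[idx])
    return v - drift
```

Integrating the corrected velocity once more yields the per-stride foot trajectory from which timing and distance metrics follow.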
Ko, Linda K; Rillamas-Sun, Eileen; Bishop, Sonia; Cisneros, Oralia; Holte, Sarah; Thompson, Beti
2018-04-01
Hispanic children are disproportionately overweight and obese compared to their non-Hispanic white counterparts in the US. Community-wide, multi-level interventions have been successful in promoting healthier nutrition, increased physical activity (PA), and weight loss. Using a community-based participatory research (CBPR) approach that engages community members in rural Hispanic communities is a promising way to promote behavior change, and ultimately weight loss, among Hispanic children. Led by a community-academic partnership, the Together We STRIDE (Strategizing Together Relevant Interventions for Diet and Exercise) aims to test the effectiveness of a community-wide, multi-level intervention to promote healthier diets, increased PA, and weight loss among Hispanic children. The Together We STRIDE is a parallel quasi-experimental trial with a goal of recruiting 900 children aged 8-12 years nested within two communities (one intervention and one comparison). Children will be recruited from their respective elementary schools. Components of the 2-year multi-level intervention include comic books (individual level), multi-generational nutrition and PA classes (family level), teacher-led PA breaks and media literacy education (school level), and family nights, a farmers' market and a community PA event (known as a ciclovía) at the community level. Children from the comparison community will receive two newsletters. Height and weight measures will be collected from children in both communities at three time points (baseline, 6 months, and 18 months). The Together We STRIDE study aims to promote a healthier diet and increased PA to produce healthy weight among Hispanic children. The use of the CBPR approach and the engagement of the community will springboard strategies for the intervention's sustainability. Clinical Trials Registration Number: NCT02982759 Retrospectively registered. Copyright © 2018 Elsevier Inc. All rights reserved.
Pemu, Priscilla E; Quarshie, Alexander Q; Josiah-Willock, R; Ojutalayo, Folake O; Alema-Mensah, Ernest; Ofili, Elizabeth O
2011-01-01
Diabetes self-management (DSM) training helps prevent diabetic complications. eHealth approaches may improve its optimal use. The aims were to determine a) acceptability of e-HealthyStrides© (an interactive, Internet-based, patient-driven, diabetes self-management support and social networking program) among Morehouse Community Physicians' Network diabetics; b) efficacy for DSM behavior change c) success factors for use of e-HealthyStrides©. Baseline characteristics of pilot study participants are reported. Of those approached, 13.8% agreed to participate. Among participants, 96% were Black, 77% female; age 56±9.2 years; education: 44% college or higher and 15% less than 12th grade; 92.5% with home computers. Over half (51%) failed the Diabetes Knowledge Test. Nearly half (47%) were at goal A1C; 24% at goal blood pressure; 3% at goal LDL cholesterol level. Median (SD) Diabetes Empowerment Scale score = 3.93 (0.72) but managing psychosocial aspects = 3.89 (0.89) scored lower than other domains. There was low overall confidence for DSM behaviors. Assistance with healthy eating was the most frequently requested service. Requestors were more obese with worse A1C than others. Chronic care delivery scored average with high scores for counseling and problem solving but low scores for care coordination and follow up.
Walker, Robrina; Morris, David W; Greer, Tracy L; Trivedi, Madhukar H
2014-01-01
Descriptions of and recommendations for meeting the challenges of training research staff for multisite studies are limited despite the recognized importance of training on trial outcomes. The STRIDE (STimulant Reduction Intervention using Dosed Exercise) study is a multisite randomized clinical trial that was conducted at nine addiction treatment programs across the United States within the National Drug Abuse Treatment Clinical Trials Network (CTN) and evaluated the addition of exercise to addiction treatment as usual (TAU), compared to health education added to TAU, for individuals with stimulant abuse or dependence. Research staff administered a variety of measures that required a range of interviewing, technical, and clinical skills. In order to address the absence of information on how research staff are trained for multisite clinical studies, the current manuscript describes the conceptual process of training and certifying research assistants for STRIDE. Training was conducted using a three-stage process to allow staff sufficient time for distributive learning, practice, and calibration leading up to implementation of this complex study. Training was successfully implemented with staff across nine sites. Staff demonstrated evidence of study and procedural knowledge via quizzes and skill demonstration on six measures requiring certification. Overall, while the majority of staff had little to no experience in the six measures, all research assistants demonstrated ability to correctly and reliably administer the measures throughout the study. Practical recommendations are provided for training research staff and are particularly applicable to the challenges encountered with large, multisite trials.
Yarborough, Bobbi Jo H; Leo, Michael C; Yarborough, Micah T; Stumbo, Scott; Janoff, Shannon L; Perrin, Nancy A; Green, Carla A
2016-03-01
The authors examined secondary outcomes of STRIDE, a randomized controlled trial that tested a weight-loss and lifestyle intervention for individuals taking antipsychotic medications. Hierarchical linear regression was used to explore the effects of the intervention and weight change at follow-up (six, 12, and 24 months) on body image, perceived health, and health-related self-efficacy. Participants were 200 adults who were overweight and taking antipsychotic agents. Weight change × study arm interaction was associated with significant improvement in body image from baseline to six months. From baseline to 12 months, body image scores of intervention participants improved by 1.7 points more compared with scores of control participants; greater weight loss was associated with more improvement. Between baseline and 24 months, greater weight loss was associated with improvements in body image, perceived health, and health-related self-efficacy. Participation in STRIDE improved body image, and losing weight improved perceived health and health-related self-efficacy.
Directory of Open Access Journals (Sweden)
Brian J Gow
Full Text Available To determine if Tai Chi (TC) has an impact on long-range correlations and fractal-like scaling in gait stride time dynamics, previously shown to be associated with aging, neurodegenerative disease, and fall risk. Using Detrended Fluctuation Analysis (DFA), this study evaluated the impact of TC mind-body exercise training on stride time dynamics assessed during 10-minute bouts of overground walking. A hybrid study design investigated long-term effects of TC via a cross-sectional comparison of 27 TC experts (24.5 ± 11.8 yrs experience) and 60 age- and gender-matched TC-naïve older adults (50-70 yrs). Shorter-term effects of TC were assessed by randomly allocating TC-naïve participants to either 6 months of TC training or to a waitlist control. The alpha (α) long-range scaling coefficient derived from DFA and gait speed were evaluated as outcomes. Cross-sectional comparisons using confounder-adjusted linear models suggest that TC experts exhibited significantly greater long-range scaling of gait stride time dynamics compared with TC-naïve adults. Longitudinal random-slopes with shared baseline models accounting for multiple confounders suggest that the effects of shorter-term TC training on gait dynamics were not statistically significant, but trended in the same direction as longer-term effects, although effect sizes were very small. In contrast, gait speed was unaffected in both cross-sectional and longitudinal comparisons. These preliminary findings suggest that fractal-like measures of gait health may be sufficiently precise to capture the positive effects of exercise in the form of Tai Chi, thus warranting further investigation. These results motivate larger and longer-duration trials, in both healthy and health-challenged populations, to further evaluate the potential of Tai Chi to restore age-related declines in gait dynamics. The randomized trial component of this study was registered at ClinicalTrials.gov (NCT01340365).
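The DFA procedure used above can be sketched as follows (a minimal first-order DFA with illustrative scale choices, not the study's exact analysis pipeline): integrate the mean-centered stride-time series, detrend it in windows of size n, and take the slope of log F(n) versus log n as the scaling exponent α.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order Detrended Fluctuation Analysis scaling exponent.

    alpha ~ 0.5 for white noise, ~ 1 for 1/f noise, ~ 1.5 for
    Brownian noise (integrated white noise).
    """
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        nseg = y.size // n
        segs = y[: nseg * n].reshape(nseg, n)
        t = np.arange(n)
        # RMS of residuals after removing a linear trend from each window
        resid = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
        F.append(np.sqrt(np.mean(np.square(resid))))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope
```

Higher α (closer to 1) in stride-time series is the "greater long-range scaling" the study reports for TC experts.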
Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea
2015-11-23
In biomechanical studies Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed the Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm and detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken for reconstructing the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects that walked on a treadmill and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait and performing time integration (numerically and analytically). The widths of the limits of agreement were computed between each tested method and the OMCS reference method for each anatomical direction: Medio-Lateral (ML), VerTical (VT) and Antero-Posterior (AP); using the proposed method, it was observed that the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviation) of OMCS data and each component of horizontal displacement (ML and AP) was within ±9 mm of OMCS data. Fourier harmonic analysis was applied to model stride-by-stride linear
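The analytic time integration described above exploits the fact that, for a periodic signal, each Fourier harmonic integrates in closed form: dividing the coefficient A_k by (iω_k)² = -ω_k² integrates twice. A minimal one-axis sketch under the assumption of a zero-mean, exactly periodic stride (function name illustrative, not the authors' code):

```python
import numpy as np

def fourier_double_integral(acc, dt):
    """Displacement over one cycle by analytic double integration of a
    zero-mean, periodic acceleration trace in the Fourier domain."""
    A = np.fft.fft(acc)
    w = 2.0 * np.pi * np.fft.fftfreq(acc.size, d=dt)  # angular frequencies
    X = np.zeros_like(A)
    nz = w != 0
    X[nz] = -A[nz] / w[nz] ** 2     # divide each harmonic by (i*w)^2
    return np.real(np.fft.ifft(X))  # zero-mean displacement trace
```

The DC bin is left at zero, which fixes the integration constants by forcing zero-mean displacement over the cycle; per-stride traces can then be concatenated as in the paper.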
DEFF Research Database (Denmark)
Svendsen, Morten Bo Søndergaard; Domenici, Paolo; Marras, Stefano
2016-01-01
Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s(-1) but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish......, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s(-1)), followed by barracuda (6.2±1.0 m s(-1)), little tunny (5.6±0.2 m s(-1)) and dorado (4...
Directory of Open Access Journals (Sweden)
Morten B. S. Svendsen
2016-10-01
Full Text Available Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1, followed by barracuda (6.2±1.0 m s−1, little tunny (5.6±0.2 m s−1 and dorado (4.0±0.9 m s−1; although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues.
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....
IMU-based ambulatory walking speed estimation in constrained treadmill and overground walking.
Yang, Shuozhi; Li, Qingguo
2012-01-01
This study evaluated the performance of a walking speed estimation system based on an inertial measurement unit (IMU), a combination of accelerometers and gyroscopes. The walking speed estimation algorithm segments the walking sequence into individual stride cycles (two steps) based on the inverted pendulum-like behaviour of the stance leg during walking, and it integrates the angular velocity and linear accelerations of the shank to determine the displacement of each stride. The evaluation was performed in both treadmill and overground walking experiments with various constraints on walking speed, step length and step frequency to provide a relatively comprehensive assessment of the system. Promising results were obtained in providing accurate and consistent walking speed/step length estimation in different walking conditions. An overall percentage root mean squared error (%RMSE) of 4.2% and 4.0% was achieved in treadmill and overground walking experiments, respectively. With an increasing interest in understanding human walking biomechanics, the IMU-based ambulatory system could provide a useful walking speed/step length measurement/control tool for constrained walking studies.
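The accuracy figure quoted above, percentage root-mean-squared error, can be computed as below; normalising by the mean of the reference is one common convention and an assumption here, since the abstract does not spell out the exact formula, and the speed values are hypothetical.

```python
import numpy as np

def pct_rmse(estimate, reference):
    """Percentage RMSE of estimated speeds against reference speeds,
    normalised by the mean reference value (assumed convention)."""
    e = np.asarray(estimate, dtype=float)
    r = np.asarray(reference, dtype=float)
    return 100.0 * np.sqrt(np.mean((e - r) ** 2)) / np.mean(r)

# hypothetical per-trial walking speeds in m/s
reference = np.array([1.20, 1.30, 1.10, 1.40])
estimated = np.array([1.25, 1.28, 1.15, 1.35])
error_pct = pct_rmse(estimated, reference)
```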
Strides in Preservation of Malawi's Natural Stone
Kamanga, Tamara; Chisenga, Chikondi; Katonda, Vincent
2017-04-01
The geology of Malawi is broadly grouped into four main lithological units, that is, the Basement Complex, the Karoo Supergroup, Tertiary to Quaternary sedimentary deposits and the Chilwa Alkaline Province. The basement complex rocks cover much of the country and range in age from late Precambrian to early Paleozoic. They have been affected by three major phases of deformation and metamorphism, that is, the Irumide, the Ubendian and the Pan-African. These rocks comprise gneisses, granulites and schists with associated mafic, ultramafic, syenite and granite rocks. The Karoo System sedimentary rocks range in age from Permian to lower Jurassic and are mainly restricted to two areas in the extreme North and extreme South of the country. Rocks of the Chilwa Alkaline Province - late Jurassic to Cretaceous in age, preceded by upper Karoo dolerite dyke swarms and basaltic lavas - have been intruded into the Basement Complex gneisses of southern Malawi. Malawi is endowed with different types of natural stone deposits, most of which remain unexploited and unexplored. Over twenty quarry operators supply quarry stone for road and building construction in Malawi. Hundreds of artisanal workers continue to supply aggregate stones within and on the outskirts of urban areas. Ornamental stones and granitic dimension stones are also quarried, but in insignificant volumes. In Northern Malawi, there are several granite deposits, including the Nyika, which is the largest single outcrop, occupying approximately 260.5 km2; Mtwalo Amazonite, an opaque to translucent bluish-green variety of microcline feldspar that occurs in alkali granites and pegmatite; and the Ilomba granite (sodalite), occurring in small areas with biotite, apatite, plagioclase and calcite. In the Center, there are the Dzalanyama granites and the Sani granites. In the South, there are the Mangochi granites. Dolerite and gabbroic rocks spread across the country, trading as black granites. Malawi is also endowed with many deposits of marble.
A variety of other igneous, metamorphic and sedimentary rocks are also used as dimension stones. Discovery and preservation of more natural stone deposits through research is essential in the country. Natural stone preservation has not only the potential to generate significant direct and indirect economic benefits for Malawi but also to preserve its heritage.
Personalized medicine: Striding from genes to medicines.
Nair, Sunita R
2010-10-01
Personalized medicine has the potential of revolutionizing patient care. This treatment modality prescribes therapies specific to individual patients based on pharmacogenetic and pharmacogenomic information. The mapping of the human genome has been an important milestone in understanding the interindividual differences in response to therapy. These differences are attributed to genotypic differences, with consequent phenotypic expression. It is important to note that targeted therapies should ideally be accompanied by a diagnostic marker. However, most efforts are being directed toward developing both of these separately; the former by pharmaceutical companies and the latter by diagnostic companies. Further, this companion strategy will be successful only when the biomarkers assayed are differentiated on a value-based approach rather than a cost-based approach, especially in countries that reimburse disease management costs. The advantages of using personalized therapies are manifold: a targeted patient population; avoidance of drug-related toxicities and optimization of costs in nonresponder patients; reduction in drug development costs; and fewer patients to be tested in clinical trials. The success of personalized therapy in the future will depend on a better understanding of pharmacogenomics and the extension of these scientific advances to all countries.
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes...... an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially......, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones....
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Directory of Open Access Journals (Sweden)
Julius Hannink
2017-08-01
Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
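A minimal double-integration scheme of the kind benchmarked here, with the common zero-velocity assumption at stride boundaries used to remove integration drift, might look like the following sketch. It covers one axis only; orientation estimation and gravity removal are assumed to have happened upstream, and this is a generic textbook scheme rather than any specific method from the paper.

```python
import numpy as np

def integrate_stride(acc_world, fs):
    """Position over one stride from world-frame, gravity-free acceleration:
    integrate once to velocity, remove the linear drift so that velocity is
    zero at both stride boundaries (flat-foot assumption), integrate again.
    Generic sketch, not any specific method evaluated in the paper."""
    v = np.cumsum(acc_world) / fs
    v -= np.linspace(0.0, v[-1], len(v))   # zero-velocity update at both ends
    return np.cumsum(v) / fs               # position relative to stride start

# synthetic 1 s stride sampled at 1 kHz whose true velocity is sin^2(pi*t),
# i.e. zero at both boundaries, with a net displacement of exactly 0.5 m
fs = 1000.0
t = np.arange(1000) / fs
acc = np.pi * np.sin(2 * np.pi * t)        # d/dt of sin^2(pi*t)
pos = integrate_stride(acc, fs)
```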
Zhang, Wenqing; Qiu, Lu; Xiao, Qin; Yang, Huijie; Zhang, Qingjun; Wang, Jianyong
2012-11-01
By means of the concept of the balanced estimation of diffusion entropy, we evaluate the reliable scale invariance embedded in different sleep stages and stride records. Segments corresponding to waking, light sleep, rapid eye movement (REM) sleep, and deep sleep stages are extracted from long-term electroencephalogram signals. For each stage the scaling exponent value is distributed over a considerably wide range, which tells us that the scaling behavior is subject and sleep cycle dependent. The average of the scaling exponent values for waking segments is almost the same as that for REM segments (~0.8). The waking and REM stages have a significantly higher average scaling exponent than the light sleep stages (~0.7). For the stride series, the original diffusion entropy (DE) and the balanced estimation of diffusion entropy (BEDE) give almost the same results for detrended series. The evolutions of local scaling invariance show that the physiological states change abruptly, although in the experiments great efforts were made to keep conditions unchanged. The global behavior of a single physiological signal may therefore lose rich information on physiological states. Methodologically, the BEDE can evaluate with considerable precision the scale invariance in very short time series (~10^2 points), while the original DE method sometimes underestimates scale-invariance exponents or even fails to detect scale-invariant behavior. The BEDE method is sensitive to trends in time series. The existence of trends may lead to an unreasonably high value of the scaling exponent and consequent mistaken conclusions.
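The plain (unbalanced) diffusion-entropy procedure referred to above can be sketched as follows: build diffusion trajectories from overlapping windows of the series, estimate the Shannon entropy of their distribution at each window length, and read the scaling exponent off the slope of entropy versus log length. The balanced estimator (BEDE) adds a small-sample bias correction that this sketch omits, and the function name and bin count are assumptions.

```python
import numpy as np

def diffusion_entropy_exponent(x, lengths, bins=30):
    """Plain diffusion-entropy estimate of the scaling exponent: the slope
    of the Shannon entropy S(l) of window sums versus ln(l). Sketch of the
    standard DE method; BEDE's bias correction is not reproduced here."""
    x = np.asarray(x, dtype=float)
    c = np.concatenate(([0.0], np.cumsum(x)))
    S = []
    for l in lengths:
        d = c[l:] - c[:-l]                 # sums over all overlapping windows
        p, edges = np.histogram(d, bins=bins, density=True)
        w = np.diff(edges)
        nz = p > 0
        S.append(-np.sum(p[nz] * np.log(p[nz]) * w[nz]))
    slope, _ = np.polyfit(np.log(lengths), S, 1)
    return slope

rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)         # uncorrelated noise
delta = diffusion_entropy_exponent(noise, [10, 20, 40, 80, 160])
```

For uncorrelated Gaussian noise the window sums spread as sqrt(l), so the recovered exponent should be close to the diffusive value 0.5.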
Joint Estimation Using Quadratic Estimating Function
Directory of Open Access Journals (Sweden)
Y. Liang
2011-01-01
of the observed process becomes available, the quadratic estimating functions are more informative. In this paper, a general framework for joint estimation of conditional mean and variance parameters in time series models using quadratic estimating functions is developed. Superiority of the approach is demonstrated by comparing the information associated with the optimal quadratic estimating function with the information associated with other estimating functions. The method is used to study the optimal quadratic estimating functions of the parameters of autoregressive conditional duration (ACD) models, random coefficient autoregressive (RCA) models, doubly stochastic models and regression models with ARCH errors. Closed-form expressions for the information gain are also discussed in some detail.
A Comparison between Different Methods of Estimating Anaerobic Energy Production
Andersson, Erik P.; McGawley, Kerry
2018-01-01
Purpose: The present study aimed to compare four methods of estimating anaerobic energy production during supramaximal exercise. Methods: Twenty-one junior cross-country skiers competing at a national and/or international level were tested on a treadmill during uphill (7°) diagonal-stride (DS) roller-skiing. After a 4-minute warm-up, a 4 × 4-min continuous submaximal protocol was performed, followed by a 600-m time trial (TT). For the maximal accumulated O2 deficit (MAOD) method the VO2-speed regression relationship was used to estimate the VO2 demand during the TT, either including (4+Y, method 1) or excluding (4-Y, method 2) a fixed Y-intercept for baseline VO2. The gross efficiency (GE) method (method 3) involved calculating metabolic rate during the TT by dividing power output by submaximal GE, which was then converted to a VO2 demand. An alternative method based on submaximal energy cost (EC, method 4) was also used to estimate VO2 demand during the TT. Results: The GE/EC remained constant across the submaximal stages and the supramaximal TT was performed in 185 ± 24 s. The GE and EC methods produced identical VO2 demands and O2 deficits. The VO2 demand was ~3% lower for the 4+Y method compared with the 4-Y and GE/EC methods, with corresponding O2 deficits of 56 ± 10, 62 ± 10, and 63 ± 10 mL·kg−1, respectively. The difference in estimated O2 deficits was −6 ± 5 mL·kg−1 (4+Y vs. 4-Y). The difference in the O2 deficit estimated with GE/EC based on the average of four submaximal stages compared with the last stage was 1 ± 2 mL·kg−1, with a typical error of 3.2%. Conclusions: These findings demonstrate a disagreement in the O2 deficits estimated using current methods. In addition, the findings suggest that a valid estimate of the O2 deficit may be possible using data from only one submaximal stage in combination with the GE/EC method. PMID:29472871
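The gross-efficiency calculation described for method 3 reduces to a few lines of arithmetic. The numbers below are hypothetical stand-ins chosen to land near the reported range of deficits, and the energy equivalent of ~20.9 J per mL O2 is an assumed constant, not a value given in the abstract.

```python
def o2_deficit_ge(power_w, gross_efficiency, uptake_ml, duration_s, mass_kg,
                  j_per_ml_o2=20.9):
    """O2 deficit via the gross-efficiency method: metabolic demand is
    mechanical power divided by GE, converted to an O2 equivalent
    (~20.9 J per mL O2, an assumed constant), minus the measured
    accumulated uptake, expressed per kg of body mass."""
    demand_ml = power_w / gross_efficiency * duration_s / j_per_ml_o2
    return (demand_ml - uptake_ml) / mass_kg

# hypothetical 600-m TT: 310 W for 185 s at GE = 0.16, a 70 kg skier
# with an accumulated O2 uptake of 185 mL/kg over the trial
deficit = o2_deficit_ge(310.0, 0.16, 185.0 * 70, 185.0, 70.0)
```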
On nonparametric hazard estimation.
Hobbs, Brian P
The Nelson-Aalen estimator provides the basis for the ubiquitous Kaplan-Meier estimator, and therefore is an essential tool for nonparametric survival analysis. This article reviews martingale theory and its role in demonstrating that the Nelson-Aalen estimator is uniformly consistent for estimating the cumulative hazard function for right-censored continuous time-to-failure data.
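For reference, the estimator itself is simple: at each distinct failure time it adds the number of failures divided by the number of subjects still at risk. A minimal dependency-free sketch, not tied to any particular survival library:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative-hazard estimate for right-censored data:
    at each distinct failure time, add deaths / number still at risk.
    `events` holds 1 for an observed failure, 0 for censoring.
    Returns (failure_times, H) as parallel lists."""
    data = sorted(zip(times, events))
    n = len(data)
    out_t, out_h = [], []
    H, at_risk, i = 0.0, n, 0
    while i < n:
        t = data[i][0]
        deaths, j = 0, i
        while j < n and data[j][0] == t:   # group tied event times
            deaths += data[j][1]
            j += 1
        if deaths:
            H += deaths / at_risk
            out_t.append(t)
            out_h.append(H)
        at_risk -= j - i
        i = j
    return out_t, out_h

t, H = nelson_aalen([2, 3, 3, 5, 7], [1, 1, 0, 1, 0])
```

Exponentiating the negative cumulative hazard, exp(-H), gives the Breslow/Fleming-Harrington survival estimate, which is closely related to the Kaplan-Meier estimator mentioned above.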
Canter, David; Tagg, Stephen K.
1975-01-01
The results of eleven distance estimation studies made in seven cities and five countries are reported. Distances were estimated between various points within the cities in which the subjects were resident. In general, undergraduate residents' distance estimates correlated highly with actual distance, but the nonundergraduate group's did not.
DEFF Research Database (Denmark)
2015-01-01
the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including...
Optimal fault signal estimation
Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.
2002-01-01
We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware efficient neural flow estimator is described. The system...... is implemented using switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1-algorithm. An experimental chip has been designed that operates at 5 V...
Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors
Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J
2016-01-01
In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548
Unravelling Copenhagen's stride into the Anthropocene using lake sediments
Schreiber, Norman; Andersen, Thorbjørn J.; Frei, Robert; Ilsøe, Peter; Louchouarn, Patrick; Andersen, Kenneth; Funder, Svend; Rasmussen, Peter; Andresen, Camilla S.; Odgaard, Bent; Kjær, Kurt H.
2014-05-01
Industrialization, including the effects of expanding energy consumption and metallurgy production as well as population growth and demographic pressure, has increased heavy-metal pollution loads progressively since the Industrial Revolution. Especially the burning of fossil fuels mobilizes heavy metals like lead and zinc on a large scale. By wet and dry deposition, these loads end up in the aquatic environment, where sediments serve as sinks for these contaminants. In this study, we examine the pollution history of Copenhagen, Denmark. A sediment core was retrieved from the lake in the Botanical Gardens in central Copenhagen using a rod-operated piston corer. The water body used to be part of the old town's defence-wall system and was turned into a lake by terrain levelling in the mid-17th century. After initial X-ray fluorescence core scanning, element concentrations were determined using emission spectroscopy. The onset of gyttja accumulation in the lake is assumed to start immediately after the construction of the fortification in approximately AD 1645. An age model representing the last approximately 135 years for the uppermost 60 cm was established by lead-210 and cesium-137 dating. The older part was dated via recognition of markedly increased levels of levoglucosan, which are interpreted to be linked with recorded fires in Copenhagen. Similarly, two distinct layers interstratify the sediment column and mark pronounced increases of minerogenic material inflow which can be linked to known historical events. Significant pollution load increases are evident from the 1700s along with urban growth and extended combustion of carbon carriers such as wood and coal. However, a more pronounced increase in lead and zinc deposition only begins by the mid-19th century. Maxima for the latter two pollutants are reached in the late 1970s, followed by a reduction of emissions in accordance with stricter environmental regulations.
Here, especially the phasing-out of tetraethyl lead from gasoline and increased cleaning of the emissions from local power plants have had an effect. Also a change of fuel from coal to natural gas in the power plants has been very important. The present study shows how a detailed record of past levels of air pollution in large cities may be achieved by analyzing the sediment accumulated in urban lakes provided that a reliable chronology can be established.
The Worker Rights Consortium Makes Strides toward Legitimacy.
Van der Werf, Martin
2000-01-01
Discusses the rapid growth of the Workers Rights Consortium, a student-originated group with 44 member institutions which opposes sweatshop labor conditions, especially in the apparel industry. Notes disagreements about the number of administrators on the board of directors and about the role of industry representatives. Compares this group with the…
Stride length asymmetry in split-belt locomotion
Hoogkamer, W.; Bruijn, S.M.; Duysens, J.
2013-01-01
The number of studies utilizing a split-belt treadmill is rapidly increasing in recent years. This has led to some confusion regarding the definitions of reported gait parameters. The purpose of this paper is to clearly present the definitions of the gait parameters that are commonly used in
TAKING MULTI-MODE RESEARCH STRIDES DURING THE INNOVATION OF A CRICKET COMPETITIVE INTELLIGENCE FRAMEWORK
Directory of Open Access Journals (Sweden)
Liandi van den Berg
2017-01-01
Full Text Available This paper describes the multi-mode research methodological steps during the development of a competitive intelligence (CI) framework for cricket coaches. Currently no framework exists to guide coaches to gain a competitive advantage through competitor analysis. A systematic literature review (SLR) ascertained the similarities and differences between the business CI and the sport coaching and performance analysis (PA) domains. The qualitative document analysis performed in ATLAS.TI rendered a reputable inter- and intra-document analysis validity with κ = 0.79 and 0.78, respectively. The document analysis contributed towards the compilation of a semi-structured interview schedule to investigate the occurrence of the business-related CI process within the sport coaching context. The interview schedule was finalised after interviews with university peers provided input on the proposed schedule. Thereafter, data collection entailed semi-structured interviews with high-level cricket coaches and support staff on CI activities in their coaching practices. The coach interviews were transcribed verbatim and analysed with ATLAS.TI, and a codebook of the codes created in the analysis was compiled. The researcher established the inter- and intra-reliability with a Cohen's Kappa of 0.8. A constant comparative method of data analysis guided the analysis, which was performed until data saturation was reached. The 4338 interview code incidences were quantitized, that is, converted from qualitative to numerical data. A coefficient cluster analysis on all indices, with the linkage distance set at four, detected clusters from which five themes emerged. The 71 codes were conceptually concatenated into 28 categories, linked to the five themes. The multi-method research design rendered a conceptual and applicable CI framework for cricket coaches.
Hope and major strides for genetic diseases of the eye
Indian Academy of Sciences (India)
2009-12-31
Dec 31, 2009 ... versible vision loss. This therapeutic model will undoubtedly be used in other retinal disorders. With the current detailed delineation of the clinical manifestations of genetic eye diseases, and with the availability of precise gene testing, sophisticated retinal imaging and electrophysiologic testing modalities ...
Striding Toward Social Justice: The Ecologic Milieu of Physical Activity
Lee, Rebecca E.; Cubbin, Catherine
2009-01-01
Disparities in physical activity should be investigated in light of social justice principles. This manuscript critically evaluates evidence and trends in disparities research within an ecologic framework, focusing on multi-level factors such as neighborhood and racial discrimination that influence physical activity. Discussion focuses on strategies for integrating social justice into physical activity promotion and intervention programming within an ecologic framework.
Feel your stride and find your preferred running speed
Directory of Open Access Journals (Sweden)
Thibault Lussiana
2016-01-01
Full Text Available There is considerable inter-individual variability in self-selected intensity or running speed. Metabolic cost per distance has been recognized as a determinant of this personal choice. As biomechanical parameters have been connected to metabolic cost, and as different running patterns exist, we can question their possible determinant roles in self-selected speed. We examined the self-selected speed of 15 terrestrial and 16 aerial runners, with comparable characteristics, on a 400 m track and assessed biomechanical parameters and ratings of pleasure/displeasure. The results revealed that aerial runners choose greater speeds associated with shorter contact time, longer flight time, and higher leg stiffness than terrestrial runners. Pleasure was negatively correlated with contact time and positively with leg stiffness in aerial runners and was negatively correlated with flight time in terrestrial runners. We propose the existence of an optimization system allowing the connection of running patterns at running speeds, and feelings of pleasure or displeasure.
Hope and major strides for genetic diseases of the eye
Indian Academy of Sciences (India)
2009-12-31
Dec 31, 2009 ... There have been dramatic advances in the elucidation of the genetic etiology of inherited eye diseases and their underlying pathophysiology in the last two to three decades. This was made possible by the exponential development of powerful molecular biology instrumentation and techniques, the.
Eradication of campus cultism: a giant stride toward restoration of ...
African Journals Online (AJOL)
In view of this, this paper focuses on campus cult and proposed that its eradication will lead to a restoration of confidence in education in Nigeria. The paper started by looking at the meaning of cultism, the history of campus cult, psychological assumptions for joining cult gang, depicting characteristics of cult groups, the ...
Finding Our Stride: Young Women Professors of Educational Leadership
Newcomb, Whitney Sherman; Beaty, Danna M.; Sanzo, Karen; Peters-Hawkins, April
2013-01-01
This work is grounded in the literature on women in the academy and offers glimpses into four young women professors' experiences in the field of educational leadership. We utilized reflective practice and interpersonal communication to create a dialogue centered on three qualitative research questions that allows a window into our lives. We…
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Dingwall, Heather L; Hatala, Kevin G; Wunderlich, Roshna E; Richmond, Brian G
2013-06-01
The early Pleistocene marks a period of major transition in hominin body form, including increases in body mass and stature relative to earlier hominins. However, because complete postcranial fossils with reliable taxonomic attributions are rare, efforts to estimate hominin mass and stature are complicated by the frequent albeit necessary use of isolated, and often fragmentary, skeletal elements. The recent discovery of 1.52-million-year-old hominin footprints from multiple horizons in Ileret, Kenya, provides new data on the complete foot size of early Pleistocene hominins as well as stride lengths and other characteristics of their gaits. This study reports the results of controlled experiments with habitually unshod Daasanach adults from Ileret to examine the relationships between stride length and speed, and also those between footprint size, body mass, and stature. Based on significant relationships among these variables, we estimate travel speeds ranging between 0.45 m/s and 2.2 m/s from the fossil hominin footprint trails at Ileret. The fossil footprints of seven individuals show evidence of heavy (mean = 50.0 kg; range: 41.5-60.3 kg) and tall (mean = 169.5 cm; range: 152.6-185.8 cm) individuals, suggesting that these prints were most likely made by Homo erectus and/or male Paranthropus boisei. The large sizes of these footprints provide strong evidence that hominin body size increased during the early Pleistocene. Copyright © 2013 Elsevier Ltd. All rights reserved.
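The speed estimates above rest on a regression between experimentally measured stride lengths and speeds. That final step can be sketched as below, with hypothetical calibration pairs standing in for the Daasanach data, which are not reproduced in the abstract.

```python
import numpy as np

# hypothetical calibration pairs (stride length in m, speed in m/s),
# standing in for the habitually unshod walkers' experimental data
stride = np.array([1.0, 1.3, 1.6, 2.0, 2.5])
speed = np.array([0.6, 1.0, 1.4, 1.9, 2.6])

# least-squares fit: speed ~ intercept + slope * stride
slope, intercept = np.polyfit(stride, speed, 1)

# apply the fitted relationship to a (hypothetical) fossil stride length
fossil_stride = 1.5
predicted = intercept + slope * fossil_stride
```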
Ambulatory estimation of foot placement during walking using inertial sensors
Schepers, H. Martin; van Asseldonk, Edwin H.F.; Baten, Christian T.M.; Veltink, Petrus H.
This study proposes a method to assess foot placement during walking using an ambulatory measurement system consisting of orthopaedic sandals equipped with force/moment sensors and inertial sensors (accelerometers and gyroscopes). Two parameters, lateral foot placement (LFP) and stride length (SL),
Heemstra, F.J.; Heemstra, F.J.
1993-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and...
Transverse Spectral Velocity Estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2014-01-01
array probe is used along with two different estimators based on the correlation of the received signal. They can estimate the velocity spectrum as a function of time as for ordinary spectrograms, but they also work at a beam-to-flow angle of 90°. The approach is validated using simulations of pulsatile...... flow using the Womersley–Evans flow model. The relative bias of the mean estimated frequency is 13.6% and the mean relative standard deviation is 14.3% at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with an experimental scanner and a convex array transducer.... A pump generated artificial femoral and carotid artery flow in the phantom. The estimated spectra degrade when the angle is different from 90°, but are usable down to 60° to 70°. Below this angle the traditional spectrum is best and should be used. The conventional approach can automatically be corrected...
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...
Trawsfynydd Plutonium Estimate
International Nuclear Information System (INIS)
Reid, Bruce D.; Gerlach, David C.; Heasler, Patrick G.; Livingston, J.
2009-01-01
This report serves to document an estimate of the cumulative plutonium production of the Trawsfynydd Unit II reactor (Traws II) over its operating life made using the Graphite Isotope Ratio Method (GIRM). The estimate of the plutonium production in Traws II provided in this report has been generated under blind conditions. In other words, the estimate of the Traws II plutonium production has been generated without knowledge of the plutonium production declared by the reactor operator (Nuclear Electric). The objective of this report is to demonstrate that the GIRM can be employed as an accurate tool to verify weapons materials production declarations.
Multidimensional kernel estimation
Milosevic, Vukasin
2015-01-01
Kernel estimation is one of the non-parametric methods used for estimation of a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option, for the 2D case, with the help of the new Delaunay2D class.
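The kernel estimation idea itself is compact enough to sketch. The following is a minimal 1-D Gaussian kernel density estimate in Python, not the ROOT/C++ implementation the abstract describes; the bandwidth rule and test data are illustrative assumptions.

```python
import numpy as np

def gaussian_kde(samples, x_grid, bandwidth=None):
    """1-D Gaussian kernel density estimate: average a Gaussian bump
    of width `bandwidth` centered on every sample."""
    samples = np.asarray(samples, float)
    if bandwidth is None:
        # Silverman's rule of thumb for a default bandwidth.
        bandwidth = 1.06 * samples.std(ddof=1) * len(samples) ** (-1 / 5)
    z = (x_grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 2000)
grid = np.linspace(-4, 4, 161)
density = gaussian_kde(samples, grid)
area = density.sum() * (grid[1] - grid[0])   # should be close to 1
```

The naive double loop implicit in the broadcasting above is exactly the O(n·m) cost that tree-based storage (as in TKNDTree) is meant to reduce.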
APLIKASI SPLINE ESTIMATOR TERBOBOT
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2001-01-01
Full Text Available We considered the nonparametric regression model: Zj = X(tj) + ej, j = 1,2,…,n, where X(tj) is the regression curve. The random errors ej are independently normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of X is obtained by minimizing a weighted least squares criterion. The solution of this optimization is a weighted polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1,2,…,n, with X(tj) the regression curve and ej random errors assumed to be normally distributed with zero mean and variance s2/bj, bj > 0. The estimate of the regression curve X that minimizes a weighted penalized least squares criterion is a weighted natural polynomial spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: weighted spline, nonparametric regression, penalized least squares.
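A rough numpy sketch of weighted least-squares spline regression under the model in the abstract: each observation is weighted by sqrt(bj), reflecting Var(ej) = s²/bj. The truncated-power basis, knot placement, and test data are illustrative assumptions; the paper's estimator is a (penalized) natural spline, which this unpenalized sketch only approximates.

```python
import numpy as np

def weighted_spline_fit(t, z, b, knots, degree=3):
    """Weighted least-squares spline regression with a truncated-power basis.

    Observations z_j = X(t_j) + e_j with Var(e_j) = sigma^2 / b_j, so each
    row of the design matrix is weighted by sqrt(b_j)."""
    t = np.asarray(t, float)
    z = np.asarray(z, float)
    w = np.sqrt(np.asarray(b, float))
    # Polynomial part plus truncated powers (t - k)_+^degree at each knot.
    cols = [t**p for p in range(degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    B = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(B * w[:, None], z * w, rcond=None)
    return B @ coef

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
b = np.full(200, 4.0)                       # equal precisions in this toy example
z = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, 200)
fit = weighted_spline_fit(t, z, b, knots=[0.25, 0.5, 0.75])
rmse = np.sqrt(np.mean((fit - np.sin(2 * np.pi * t)) ** 2))
```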
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variances to characterize the measurement error structure with biases varying over time is presented
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...
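For orientation, here is the simpler independent-error version of the idea in Python: expand the mean quadrat estimate to a population total and add the two variance components (sampling variance with a finite population correction, plus the per-quadrat estimation variances). The paper's contribution is extending this to correlated quadrat estimates, which this sketch does not handle; the numbers are illustrative.

```python
import numpy as np

def expansion_estimate(quadrat_estimates, quadrat_variances, n_sampled, n_total):
    """Simple-random-sampling expansion estimator of a population total.

    Each quadrat contributes an abundance estimate with its own estimation
    variance (e.g. from mark-recapture or distance sampling); errors are
    assumed independent here."""
    y = np.asarray(quadrat_estimates, float)
    v = np.asarray(quadrat_variances, float)
    total = n_total * y.mean()
    fpc = 1 - n_sampled / n_total                       # finite population correction
    var_sampling = n_total**2 * fpc * y.var(ddof=1) / n_sampled
    var_estimation = (n_total / n_sampled) ** 2 * v.sum()
    return total, var_sampling + var_estimation

total, var = expansion_estimate([12.0, 8.0, 15.0, 10.0], [1.2, 0.8, 1.5, 1.0],
                                n_sampled=4, n_total=40)   # total = 40 * 11.25 = 450
```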
Thermodynamic estimation: Ionic materials
International Nuclear Information System (INIS)
Glasser, Leslie
2013-01-01
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
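The single-ion additivity scheme is simple enough to sketch directly. The ionic contribution values below are hypothetical placeholders for illustration only; real estimates must use tabulated contributions such as those in the paper.

```python
# Hypothetical single-ion entropy contributions (J K^-1 mol^-1), for
# illustration only -- NOT values from the paper or any published table.
ION_ENTROPY = {"Na+": 37.0, "K+": 46.0, "Cl-": 36.0, "SO4^2-": 72.0}

def estimate_entropy(formula_ions):
    """Additive single-ion estimate: sum each ion's contribution times its
    stoichiometric count in the formula unit."""
    return sum(ION_ENTROPY[ion] * count for ion, count in formula_ions)

s_nacl = estimate_entropy([("Na+", 1), ("Cl-", 1)])     # 37 + 36 = 73
s_k2so4 = estimate_entropy([("K+", 2), ("SO4^2-", 1)])  # 2*46 + 72 = 164
```

The same dictionary-lookup-and-sum pattern carries over to the double-salt and volume-correlation variants, with the table keyed on salt fragments or on formula unit volume instead of single ions.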
Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo
2017-11-01
Quantum metrology calculates the ultimate precision of all estimation strategies, measured by their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine the "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine the...
Foundations of estimation theory
Kubacek, L
1988-01-01
The application of estimation theory renders the processing of experimental results both rational and effective, and thus helps not only to make our knowledge more precise but to determine the measure of its reliability. As a consequence, estimation theory is indispensable in the analysis of the measuring processes and of experiments in general.The knowledge necessary for studying this book encompasses the disciplines of probability and mathematical statistics as studied in the third or fourth year at university. For readers interested in applications, comparatively detailed chapters
On Stein's unbiased risk estimate for reduced rank estimators
DEFF Research Database (Denmark)
Hansen, Niels Richard
2018-01-01
Stein's unbiased risk estimate (SURE) is considered for matrix valued observables with low rank means. It is shown that SURE is applicable to a class of spectral function estimators including the reduced rank estimator.
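As background, SURE is easiest to see in the classical vector case rather than the paper's reduced-rank matrix setting. The sketch below computes the standard SURE formula for soft-thresholding of a Gaussian mean (the SureShrink setting): SURE(λ) = −nσ² + Σ min(yᵢ², λ²) + 2σ²·#{|yᵢ| > λ}. The data and threshold grid are illustrative assumptions.

```python
import numpy as np

def sure_soft_threshold(y, lam, sigma=1.0):
    """Stein's unbiased risk estimate for the soft-threshold estimator
    mu_hat_i = sign(y_i) * max(|y_i| - lam, 0), with y_i ~ N(mu_i, sigma^2)."""
    y = np.asarray(y, float)
    n = len(y)
    return (-n * sigma**2
            + np.minimum(y**2, lam**2).sum()
            + 2 * sigma**2 * (np.abs(y) > lam).sum())

rng = np.random.default_rng(2)
mu = np.concatenate([np.full(20, 5.0), np.zeros(480)])   # sparse true mean
y = mu + rng.normal(size=500)
lams = np.linspace(0.0, 4.0, 41)
sures = [sure_soft_threshold(y, lam) for lam in lams]
best_lam = lams[int(np.argmin(sures))]                   # data-driven threshold
```

Minimizing SURE over λ selects the threshold without knowing the true mean; the paper shows an analogous unbiased risk identity holds for spectral (singular-value) shrinkage of low-rank matrix means.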
Gerhard K. Raile
1982-01-01
Equations are presented that predict diameter inside and outside bark for tree boles below 4.5 feet given d.b.h. These equations are modified and integrated to estimate stump volume. Parameters are presented for 22 Lake States species groups.
African Journals Online (AJOL)
Biological dose estimation involving low-dose radiation exposure. S. JANSEN, G. J. VAN HUYSSTEEN. Summary: Blood specimens were collected from 8 people 18 days after they had been accidentally exposed to a 947.2 GBq iridium-192 source during industrial application. The equivalent whole-body dose ...
McDonald, Judith A.; Thornton, Robert J.
2011-01-01
Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…
Simulating grain size estimation
Czech Academy of Sciences Publication Activity Database
Saxl, Ivan; Sülleiová, K.; Ponížil, P.
2001-01-01
Roč. 39, č. 6 (2001), s. 396-409 ISSN 0023-432X R&D Projects: GA ČR GA201/99/0269 Keywords : grain size estimation * ASTM standards * Voronoi tessellations Subject RIV: BE - Theoretical Physics Impact factor: 0.343, year: 2001
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
Numerical Estimation in Preschoolers
Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco
2010-01-01
Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
Burke, Gary; Nesheiwat, Jeffrey; Su, Ling
1994-01-01
Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). A design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2017-01-01
Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707
Fast fundamental frequency estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2017-01-01
Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the fundamental frequency...
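One classical estimator of this kind is harmonic summation: pick the candidate f0 whose first few harmonics collect the most spectral power. The sketch below is a plain illustration of that idea, not the fast algorithms the paper proposes; the signal, grid, and harmonic count are assumptions.

```python
import numpy as np

def estimate_f0(x, fs, f0_grid, n_harmonics=5):
    """Fundamental frequency by harmonic summation: for each candidate f0,
    sum the magnitude spectrum at its first n_harmonics harmonics and
    return the candidate with the largest sum."""
    spec = np.abs(np.fft.rfft(x))
    freq_res = fs / len(x)                      # FFT bin spacing in Hz
    scores = []
    for f0 in f0_grid:
        bins = np.round(np.arange(1, n_harmonics + 1) * f0 / freq_res).astype(int)
        bins = bins[bins < len(spec)]           # drop harmonics above Nyquist
        scores.append(spec[bins].sum())
    return f0_grid[int(np.argmax(scores))]

fs = 8000.0
t = np.arange(4096) / fs
# Periodic test signal: 200 Hz fundamental plus two weaker harmonics.
x = (np.sin(2 * np.pi * 200 * t)
     + 0.5 * np.sin(2 * np.pi * 400 * t)
     + 0.25 * np.sin(2 * np.pi * 600 * t))
f0 = estimate_f0(x, fs, f0_grid=np.arange(80.0, 400.0, 1.0))
```

Evaluating the score over a dense f0 grid is the expensive part; fast methods exploit recursions across candidates to cut this cost.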
Quantifying IT estimation risks
Kulk, G.P.; Peters, R.J.; Verhoef, C.
2009-01-01
A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be
Estimation of morbidity effects
International Nuclear Information System (INIS)
Ostro, B.
1994-01-01
Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by either stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was that the study had to include a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as possible examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates that were judged as most representative of the overall findings were those that were summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10 μm in aerodynamic diameter.
Asymptotic Optimality of Estimating Function Estimator for CHARN Model
Directory of Open Access Journals (Sweden)
Tomoyuki Amano
2012-01-01
Full Text Available CHARN model is a famous and important model in finance, which includes many financial time series models and can be used to model the return processes of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. However, recently, it was shown that the optimal estimating function estimator (G estimator) is better than the CL estimator for some time series models in the sense of efficiency. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.
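For orientation, the CHARN (conditional heteroscedastic autoregressive nonlinear) model and the conditional least squares criterion can be sketched as follows; the notation is assumed here for illustration and is not taken verbatim from the paper.

```latex
X_t = F_\theta(X_{t-1},\dots,X_{t-p})
      + H_\theta(X_{t-1},\dots,X_{t-q})\,\varepsilon_t ,
\qquad
\hat\theta_{\mathrm{CL}}
  = \arg\min_\theta \sum_t
    \bigl(X_t - F_\theta(X_{t-1},\dots,X_{t-p})\bigr)^2 .
```

The G estimator replaces the plain squared-error criterion with an optimally weighted estimating equation, which is where the efficiency gain over CL can arise.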
Directory of Open Access Journals (Sweden)
A. Christian
1999-01-01
Full Text Available Speeds of walking dinosaurs that left fossil trackways have been estimated as the stride length times the natural pendulum frequency of the limbs. In a detailed analysis of limb movements in walking Asian elephants and giraffes, however, distinct differences between actual limb movements and the limb movements predicted using only gravity as the driving force were observed. Additionally, stride frequency was highly variable. Swing time was fairly constant but, especially at high walking speeds, much shorter than half the natural pendulum period. An analysis of hip and shoulder movements during walking showed that limb swinging was influenced by accelerations of the hip and shoulder joints, especially at high walking speeds. These results suggest an economical fast walking mechanism that could have been utilised by large dinosaurs to increase maximum speeds of locomotion. These findings throw new light on the dynamics of large vertebrates and can be used to improve speed estimates for large dinosaurs.
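The classic trackway estimate that the abstract critiques can be sketched in a few lines: stride length times the natural frequency of the limb modelled as a simple pendulum. Conventions for stride vs step frequency vary, and the limb lengths below are hypothetical; the paper's point is precisely that real limbs swing faster than this gravity-only model predicts.

```python
import math

def pendulum_speed(stride_length, leg_length, g=9.81):
    """Speed estimate: stride length times the natural pendulum frequency of
    the limb (simple pendulum of length leg_length, gravity-only swing)."""
    period = 2 * math.pi * math.sqrt(leg_length / g)  # full pendulum period (s)
    stride_frequency = 1.0 / period                   # one stride per period
    return stride_length * stride_frequency           # m/s

# Hypothetical large-quadruped numbers, for illustration only.
v = pendulum_speed(stride_length=3.0, leg_length=3.0)
```

Because the pendulum period grows with the square root of leg length, this model predicts quite slow walking for very large animals, which is why a faster-than-pendulum swing mechanism raises the speed estimates.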
Vamoș, Călin
2013-01-01
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
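The Monte Carlo evaluation method the book describes can be sketched generically: add artificial noise to a known trend many times and measure a trend estimator's RMSE against the truth. The moving-average estimator, noise model, and parameters below are illustrative assumptions, not the book's algorithms.

```python
import numpy as np

def moving_average_trend(x, window):
    """Simple centered moving-average trend estimator (edge-truncated)."""
    half = window // 2
    return np.array([x[max(0, i - half): i + half + 1].mean()
                     for i in range(len(x))])

def mc_trend_rmse(trend, noise_sigma, n_runs=200, window=21, seed=0):
    """Monte Carlo accuracy evaluation: repeatedly corrupt a known trend with
    artificial noise and average the estimator's squared error."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_runs):
        series = trend + rng.normal(0, noise_sigma, len(trend))
        errs.append(np.mean((moving_average_trend(series, window) - trend) ** 2))
    return np.sqrt(np.mean(errs))

t = np.linspace(0, 1, 500)
trend = 2.0 * t**2                       # known artificial trend
rmse = mc_trend_rmse(trend, noise_sigma=0.5)
```

Because the true trend is known by construction, the RMSE here measures the estimator itself, which is the point of evaluating algorithms on numerically generated series.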
PESTO: Parameter EStimation TOolbox.
Stapor, Paul; Weindl, Daniel; Ballnus, Benjamin; Hug, Sabine; Loos, Carolin; Fiedler, Anna; Krause, Sabrina; Hroß, Sabrina; Fröhlich, Fabian; Hasenauer, Jan; Wren, Jonathan
2018-02-15
PESTO is a widely applicable and highly customizable toolbox for parameter estimation in MathWorks MATLAB. It offers scalable algorithms for optimization, uncertainty and identifiability analysis, which work in a very generic manner, treating the objective function as a black box. Hence, PESTO can be used for any parameter estimation problem, for which the user can provide a deterministic objective function in MATLAB. PESTO is a MATLAB toolbox, freely available under the BSD license. The source code, along with extensive documentation and example code, can be downloaded from https://github.com/ICB-DCM/PESTO/. jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
... a regression model based on the quality of its predictions. In exploring the econometric issues, the objective of this study was to estimate a cost function in order to estimate the annual health care cost of dementia. Using different models, health care costs were regressed on the degree of dementia, sex, age, marital status and presence of any co-morbidity other than dementia. Models with a log-transformed dependent variable, where predicted health care costs were re-transformed to the unlogged original scale by multiplying the exponential of the expected response on the log-scale with the average of the exponentiated residuals, were part of the considered models. The root mean square error (RMSE), the mean absolute error (MAE) and the Theil U-statistic criteria were used to assess which model best predicted the health care cost. Large values on each criterion indicate that the model performs poorly. Based...
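The retransformation described in the abstract (exponentiate the fitted log-cost and multiply by the mean of the exponentiated residuals) is Duan's smearing estimator, sketched below on synthetic data. The data-generating process and variable names are illustrative assumptions, not the study's dementia data.

```python
import numpy as np

def log_ols_with_smearing(X, y):
    """OLS on log(y) with smearing retransformation: predictions on the
    original scale are exp(fitted log-value) times the average of the
    exponentiated residuals."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, np.log(y), rcond=None)
    smear = np.exp(np.log(y) - X1 @ beta).mean()   # smearing factor

    def predict(Xnew):
        X1n = np.column_stack([np.ones(len(Xnew)), Xnew])
        return smear * np.exp(X1n @ beta)
    return predict

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, (400, 1))
y = np.exp(1.0 + 2.0 * x[:, 0] + rng.normal(0, 0.5, 400))  # log-normal "costs"
predict = log_ols_with_smearing(x, y)
mean_pred = predict(x).mean()    # close to y.mean(); naive exp() would understate it
```

Without the smearing factor, exp(fitted log-cost) estimates the median rather than the mean cost, biasing predicted totals downward.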
Distribution load estimation (DLE)
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A.; Lehtonen, M. [VTT Energy, Espoo (Finland)
1998-08-01
The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
Estimating Venezuelas Latent Inflation
Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo
2011-01-01
Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the beta...
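The regression beta described here is just the slope of asset returns on market returns, i.e. Cov(rₐ, rₘ)/Var(rₘ). A minimal sketch on simulated returns (the return series and true beta are made up for illustration):

```python
import numpy as np

def estimate_beta(asset_returns, market_returns):
    """Regression beta: slope of asset returns on market index returns,
    equal to Cov(r_a, r_m) / Var(r_m)."""
    r_a = np.asarray(asset_returns, float)
    r_m = np.asarray(market_returns, float)
    return np.cov(r_a, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)

rng = np.random.default_rng(4)
r_m = rng.normal(0.005, 0.04, 250)                   # simulated market returns
r_a = 0.001 + 1.3 * r_m + rng.normal(0, 0.02, 250)   # asset with true beta 1.3
beta = estimate_beta(r_a, r_m)
```

The choice of index, return interval, and estimation window all move the resulting beta, which is one source of the controversy the passage alludes to.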
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
Estimating Spectra from Photometry
Kalmbach, J. Bryce; Connolly, Andrew J.
2017-12-01
Measuring the physical properties of galaxies such as redshift frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique before demonstrating our ability to reconstruct spectra based upon colors and then compare our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier-rejected bias by over 86.2% compared to the original set for z ≤ 3.
Injury Risk Estimation Expertise
Petushek, Erich J.; Ward, Paul; Cokely, Edward T.; Myer, Gregory D.
2015-01-01
Background: Simple observational assessment of movement is a potentially low-cost method for anterior cruciate ligament (ACL) injury screening and prevention. Although many individuals utilize some form of observational assessment of movement, there are currently no substantial data on group skill differences in observational screening of ACL injury risk. Purpose/Hypothesis: The purpose of this study was to compare various groups’ abilities to visually assess ACL injury risk as well as the associated strategies and ACL knowledge levels. The hypothesis was that sports medicine professionals would perform better than coaches and exercise science academics/students and that these subgroups would all perform better than parents and other general population members. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A total of 428 individuals, including physicians, physical therapists, athletic trainers, strength and conditioning coaches, exercise science researchers/students, athletes, parents, and members of the general public participated in the study. Participants completed the ACL Injury Risk Estimation Quiz (ACL-IQ) and answered questions related to assessment strategy and ACL knowledge. Results: Strength and conditioning coaches, athletic trainers, physical therapists, and exercise science students exhibited consistently superior ACL injury risk estimation ability (+2 SD) as compared with sport coaches, parents of athletes, and members of the general public. The performance of a substantial number of individuals in the exercise sciences/sports medicines (approximately 40%) was similar to or exceeded clinical instrument-based biomechanical assessment methods (eg, ACL nomogram). Parents, sport coaches, and the general public had lower ACL-IQ, likely due to their lower ACL knowledge and to rating the importance of knee/thigh motion lower and weight and jump height higher. Conclusion: Substantial cross-professional/group differences in visual ACL
Best estimate containment analysis
International Nuclear Information System (INIS)
Smith, L.C.; Gresham, J.A.
1993-01-01
Primary reactor coolant system pipe ruptures are postulated as part of the design basis for containment integrity and equipment qualification validation for nuclear power plants. Current licensing analysis uses bounding conditions and assumptions, outside the range of actual operation, to determine a conservative measure of the performance requirements. Although this method has been adequate in the past, it does often involve the inclusion of excessive conservatism. A new licensing approach is under development that considers the performance of realistic analysis which quantifies the true plant response. A licensing limit is then quantified above the realistic requirements by applying the appropriate plant data and methodology uncertainties. This best estimate approach allows a true measure of the conservative margin, above the plant performance requirements, to be quantified. By utilizing a portion of this margin, the operation, surveillance and maintenance burden can be reduced by transferring the theoretical margin inherent in the licensing analysis to real margin applied at the plant. Relaxation of surveillance and maintenance intervals, relaxation of diesel loading and containment cooling requirements, increased quantity of necessary equipment allowed to be out of service, and allowances for equipment degradation are all potential benefits of applying this approach. Significant margins exist in current calculations due to the bounding nature of the evaluations. Scoping studies, which help quantify the potential margin available through best estimate mass and energy release analysis, demonstrate this. Also discussed in this paper is the approach for best estimate loss-of-coolant accident mass and energy release and containment analysis, the computer programs, the projected benefits, and the expected future directions
Mixtures Estimation and Applications
Mengersen, Kerrie; Titterington, Mike
2011-01-01
This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject
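The EM iteration the book is built around can be sketched for the simplest case, a two-component 1-D Gaussian mixture: the E-step computes component responsibilities, the M-step re-estimates weights, means, and variances. This is a generic textbook sketch, not code from the book; initialisation and data are assumptions.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture."""
    x = np.asarray(x, float)
    w = np.array([0.5, 0.5])                 # mixing weights
    mu = np.array([x.min(), x.max()])        # crude initialisation from the data
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of the parameters.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(3, 1.0, 600)])
w, mu, var = em_two_gaussians(x)
```

Each EM iteration cannot decrease the observed-data likelihood, which is why the simple alternation converges; MCMC methods, also covered in the book, instead sample the posterior over the same missing component labels.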
Estimations of actual availability
International Nuclear Information System (INIS)
Molan, M.; Molan, G.
2001-01-01
Adaptation of the working environment (social, organizational and physical) should assure a higher level of workers' availability and consequently a higher level of workers' performance. A special theoretical model for describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of human actual availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2000) of workers, standardized values and critical limits for an availability questionnaire were defined. The standardized method was used in identifying the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers possibilities to keep an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the level of actual fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated. Evaluations of health status effects and ageing are possible on the individual level. (author)
Earthquake Loss Estimation Uncertainties
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Aleksander
2013-04-01
The paper addresses the reliability of loss assessment following strong earthquakes with worldwide systems applied in emergency mode. Timely and correct action just after an event can result in significant benefits in saving lives; in this case, information about possible damage and the expected number of casualties is critical for decisions about search and rescue operations and offers of humanitarian assistance. Such rough information may be provided, first of all, by global systems operating in emergency mode. The experience of earthquake disasters in different earthquake-prone countries shows that the officials in charge of emergency response at national and international levels often lack prompt and reliable information on the scope of a disaster. Uncertainties in the parameters used in the estimation process are numerous and large: knowledge about the physical phenomena and uncertainties in the parameters used to describe them; the overall adequacy of the modeling techniques to the actual physical phenomena; the actual distribution of the population at risk at the very time of the shaking (with respect to the immediate threat: buildings or the like); knowledge about the source of the shaking; etc. One need not be a specialist to understand, for example, that the way a given building responds to a given shaking obeys mechanical laws that are poorly known (if not beyond the reach of engineers for a large portion of the building stock); while a carefully engineered modern building is approximately predictable, this is far from the case for older buildings, which make up the bulk of inhabited buildings. How the population inside the buildings at the time of shaking is affected by the physical damage to the buildings is, by far, not precisely known. The paper analyzes the influence of uncertainties in the determination of strong event parameters by alert seismological surveys and in the simulation models used at all stages, from estimating shaking intensity
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
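Replication methods such as the bootstrap are among the variance estimation techniques covered by books in this area; a minimal sketch for the variance of a sample mean (illustrative only, with an arbitrary replicate count):

```python
import random

def bootstrap_variance(sample, n_reps=1000, seed=42):
    """Estimate the variance of the sample mean by the bootstrap:
    resample with replacement, recompute the mean on each replicate,
    and take the variance of the replicate means."""
    rng = random.Random(seed)
    n = len(sample)
    means = []
    for _ in range(n_reps):
        rep = [sample[rng.randrange(n)] for _ in range(n)]
        means.append(sum(rep) / n)
    mbar = sum(means) / n_reps
    return sum((m - mbar) ** 2 for m in means) / (n_reps - 1)
```

For a simple random sample this should land near the textbook value s²/n; for complex survey designs the resampling must respect strata and clusters, which this sketch ignores.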
DEFF Research Database (Denmark)
Knudsen, Torben
2014-01-01
Dynamic inflow is an effect which is normally not included in the models used for wind turbine control design; therefore, potential improvement exists from including this effect. The objective of this project is to improve the methods previously developed for this and, especially, to verify the results using full-scale wind turbine data. The previously developed methods were based on extended Kalman filtering. This method has several drawbacks compared to unscented Kalman filtering, which has therefore been developed. The unscented Kalman filter was first tested on linear and non-linear test cases, which was successful. Then the estimation of a wind turbine state including dynamic inflow was tested on a simulated NREL 5 MW turbine. This worked perfectly at wind speeds from low to nominal, as the output prediction errors were white. In high wind, where the pitch actuator...
Robust Wave Resource Estimation
DEFF Research Database (Denmark)
Lavelle, John; Kofoed, Jens Peter
2013-01-01
An assessment of the wave energy resource at the location of the Danish Wave Energy test Centre (DanWEC) is presented in this paper. The Wave Energy Converter (WEC) test centre is located at Hanstholm in the north west of Denmark. Information about the long-term wave statistics of the resource is necessary for WEC developers, both to optimise the WEC for the site and to estimate its average yearly power production using a power matrix. The wave height and wave period sea state parameters are commonly characterized with a bivariate histogram. This paper presents bivariate histograms and kernel... An overview is given of the methods used to do this, and a method for identifying outliers in the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0...
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
Flexible and efficient estimating equations for variogram estimation
Sun, Ying
2018-01-11
Variogram estimation plays a vital role in spatial modeling. Methods for variogram estimation can be broadly classified into least squares methods and likelihood-based methods. A general framework for estimating the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as special cases. The proposed method is highly efficient, as a low-dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
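The least squares special case starts from the method-of-moments (Matheron) empirical semivariogram; a minimal binned version is sketched below (illustration only, not the estimating-equations machinery of the paper; bin width and maximum lag are arbitrary):

```python
import math

def empirical_variogram(coords, values, bin_width=1.0, max_lag=5.0):
    """Matheron's method-of-moments semivariogram estimate:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs at lag ~h.
    Least squares variogram fitting then matches a parametric
    model (e.g. exponential) to these binned estimates."""
    n_bins = int(max_lag / bin_width)
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(coords[i], coords[j])
            b = int(h / bin_width)
            if b < n_bins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    # None marks bins with no pairs
    return [(s / c if c else None) for s, c in zip(sums, counts)]
```

For a deterministic linear field z(x) = x on a transect, the semivariogram is exactly 0.5·h², which makes a convenient sanity check.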
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
... to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from the two estimation methods without noise correction is studied. Second, a noise-robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods' ability...
Estimating the diversity of dinosaurs
Wang, Steve C.; Dodson, Peter
2006-01-01
Despite current interest in estimating the diversity of fossil and extant groups, little effort has been devoted to estimating the diversity of dinosaurs. Here we estimate the diversity of nonavian dinosaurs at ≈1,850 genera, including those that remain to be discovered. With 527 genera currently described, at least 71% of dinosaur genera thus remain unknown. Although known diversity declined in the last stage of the Cretaceous, estimated diversity was steady, suggesting that dinosaurs as a w...
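The paper uses its own statistical estimator; a commonly used alternative for estimating richness "including those that remain to be discovered" is the Chao1 lower bound, sketched here purely for illustration (it is not the estimator of the study above):

```python
def chao1(abundances):
    """Chao1 lower-bound estimate of total richness from abundance
    counts.  f1 = singletons, f2 = doubletons; the correction term
    estimates how many taxa remain unobserved."""
    s_obs = sum(1 for a in abundances if a > 0)
    f1 = sum(1 for a in abundances if a == 1)
    f2 = sum(1 for a in abundances if a == 2)
    if f2 == 0:
        # bias-corrected form, avoids division by zero
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)
```

For example, 8 observed taxa with 4 singletons and 2 doubletons give an estimated total of 12.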
A new estimator for vector velocity estimation [medical ultrasonics
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new estimator automatically compensates for the axial velocity when determining the transverse velocity. The estimation is optimized by using a lag different from one in the estimation process, and noise artifacts are reduced by averaging RF samples. Further, compensation for the axial velocity can be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...
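The conventional autocorrelation approach that this work builds on can be sketched for the axial component; the transverse-oscillation extension is the paper's contribution and is not reproduced here, and the parameter values below are arbitrary:

```python
import cmath
import math

def kasai_axial_velocity(iq, f0=5e6, prf=5e3, c=1540.0):
    """Conventional autocorrelation (Kasai) axial velocity estimate
    from complex baseband (IQ) samples acquired at the pulse
    repetition frequency prf, centre frequency f0, sound speed c.
    The phase of the lag-one autocorrelation is proportional to the
    axial velocity."""
    r1 = sum(iq[i + 1] * iq[i].conjugate() for i in range(len(iq) - 1))
    return c * prf * cmath.phase(r1) / (4.0 * math.pi * f0)
```

Synthesising IQ data with a known per-emission phase shift recovers the velocity exactly, as long as the phase shift stays below π (the aliasing limit).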
An improved estimation and focusing scheme for vector velocity estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1999-01-01
The full blood velocity vector must be estimated in medical ultrasound to give a correct depiction of the blood flow. This can be done by introducing a transversely oscillating pulse-echo ultrasound field, which makes the received signal influenced by a transverse motion. Such an approach was suggested in [1]. Here the conventional autocorrelation approach was used for estimating the transverse velocity, and a compensation for the axial motion was necessary in the estimation procedure. This paper introduces a new estimator for determining the two-dimensional velocity vector and a new dynamic beamforming method. A modified autocorrelation approach employing fourth order moments of the input data is used for velocity estimation. The new estimator calculates the axial and lateral velocity component independently of each other. The estimation is optimized for differences in axial and lateral...
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Mapped Plot Patch Size Estimates
Paul C. Van Deusen
2005-01-01
This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...
Uveal melanoma: estimating prognosis.
Kaliki, Swathi; Shields, Carol L; Shields, Jerry A
2015-02-01
Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, and tumors classified as Class II by gene expression, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.
Estimation of soil permeability
Directory of Open Access Journals (Sweden)
Amr F. Elhakim
2016-09-01
Full Text Available Soils are permeable materials because of the existence of interconnected voids that allow the flow of fluids when a difference in energy head exists. A good knowledge of soil permeability is needed for estimating the quantity of seepage under dams and for dewatering to facilitate underground construction. Soil permeability, also termed hydraulic conductivity, is measured using several methods, including constant and falling head laboratory tests on intact or reconstituted specimens. Alternatively, permeability may be measured in the field using in situ borehole permeability testing (e.g. [2]) and field pumping tests. A less attractive method is to empirically deduce the coefficient of permeability from the results of simple laboratory tests such as the grain size distribution. Otherwise, soil permeability has been assessed from cone/piezocone penetration tests (e.g. [13,14]). In this paper, the coefficient of permeability was measured using field falling head tests at different depths. Furthermore, the field coefficient of permeability was measured using pumping tests at the same site. The measured permeability values are compared to the values empirically deduced from the cone penetration test for the same location. Likewise, the coefficients of permeability are empirically obtained using correlations based on the index soil properties of the tested sand for comparison with the measured values.
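The falling head computation mentioned above is a one-liner, k = (aL/At)·ln(h1/h2); a sketch with hypothetical dimensions (symbols follow the usual soil-mechanics convention):

```python
import math

def falling_head_k(a, L, A, t, h1, h2):
    """Coefficient of permeability from a falling head test:
    k = (a*L / (A*t)) * ln(h1/h2), with standpipe area a, specimen
    length L and cross-sectional area A, elapsed time t, and head
    falling from h1 to h2.  Units must be consistent (e.g. cm and
    seconds give k in cm/s)."""
    return (a * L) / (A * t) * math.log(h1 / h2)
```

For example, a 1 cm² standpipe on a 10 cm specimen of 50 cm² area, with the head halving in 100 s, gives k ≈ 1.39e-3 cm/s.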
Approaches to estimating decommissioning costs
International Nuclear Information System (INIS)
Smith, R.I.
1990-07-01
The chronological development of methodology for estimating the cost of nuclear power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs
Direct volume estimation without segmentation
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step, performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective, and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods without segmentation, leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable the diagnosis of cardiac disease to be conducted in a more efficient and reliable way.
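The "regression instead of segmentation" idea can be caricatured with an ordinary least squares map from a scalar image feature straight to volume; the actual work uses far richer features and learners, so everything below is a hypothetical stand-in:

```python
def fit_direct_volume(features, volumes):
    """Toy 'direct estimation' sketch: learn a linear map from a
    single image feature straight to volume, skipping segmentation
    entirely.  Closed-form simple linear regression."""
    n = len(features)
    mx = sum(features) / n
    my = sum(volumes) / n
    sxx = sum((x - mx) ** 2 for x in features)
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, volumes))
    slope = sxy / sxx
    return lambda x: my + slope * (x - mx)
```

On training data with an exact linear feature-to-volume relation, the learned predictor reproduces it; real direct estimation replaces this with multi-output, nonlinear regressors.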
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In many scenarios, a periodic signal of interest is often contaminated by different types of noise that may render many existing pitch estimation methods suboptimal, e.g., due to an incorrect white Gaussian noise assumption. In this paper, a method is established to estimate the pitch of such signals ... against different noise situations. The simulation results confirm that the proposed MVDR method outperforms the state-of-the-art weighted least squares (WLS) pitch estimator in colored noise and has robust pitch estimates against missing harmonics in some time-frames.
A priori SNR estimation and noise estimation for speech enhancement
Yao, Rui; Zeng, ZeQing; Zhu, Ping
2016-12-01
A priori signal-to-noise ratio (SNR) estimation and noise estimation are important for speech enhancement. In this paper, a novel modified decision-directed (DD) a priori SNR estimation approach based on single-frequency entropy, named DDBSE, is proposed. DDBSE replaces the fixed weighting factor in the DD approach with an adaptive one calculated according to change of single-frequency entropy. Simultaneously, a new noise power estimation approach based on unbiased minimum mean square error (MMSE) and voice activity detection (VAD), named UMVAD, is proposed. UMVAD adopts different strategies to estimate noise in order to reduce over-estimation and under-estimation of noise. UMVAD improves the classical statistical model-based VAD by utilizing an adaptive threshold to replace the original fixed one and modifies the unbiased MMSE-based noise estimation approach using an adaptive a priori speech presence probability calculated by entropy instead of the original fixed one. Experimental results show that DDBSE can provide greater noise suppression than DD and UMVAD can improve the accuracy of noise estimation. Compared to existing approaches, speech enhancement based on UMVAD and DDBSE can obtain a better segment SNR score and composite measure Covl score, especially in adverse environments such as non-stationary noise and low-SNR.
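The classical decision-directed update that DDBSE modifies combines the previous frame's clean-speech estimate with the current maximum-likelihood term under a fixed weight α; a sketch of that classical rule per frequency bin (not the entropy-adaptive variant of the paper):

```python
def dd_a_priori_snr(post_snr, prev_ampl_sq, noise_power, alpha=0.98):
    """Decision-directed a priori SNR estimate (Ephraim-Malah style):
    a weighted sum of the previous frame's estimated clean-speech
    power (normalised by the noise power) and the current
    maximum-likelihood term max(gamma - 1, 0), where gamma is the
    a posteriori SNR.  DDBSE replaces the fixed alpha with one
    adapted from single-frequency entropy."""
    ml_term = max(post_snr - 1.0, 0.0)
    return alpha * prev_ampl_sq / noise_power + (1.0 - alpha) * ml_term
```

With α = 0.9, a posteriori SNR 4, previous clean power 2, and unit noise power, the estimate is 0.9·2 + 0.1·3 = 2.1.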
DEFF Research Database (Denmark)
Larsen, Sara Tangmose; Thevissen, Patrick; Lynnerup, Niels
2015-01-01
A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate, including a 95% prediction interval. The aim of this study was to evaluate the performance of TA ... unbiased age estimates which minimize the risk of wrongly estimating minors as adults. Furthermore, when corrected ad hoc, TA produces appropriate prediction intervals. As TA allows expansion with additional traits, i.e. stages of development of the left hand-wrist and the clavicle, it has a great...
WAYS HIERARCHY OF ACCOUNTING ESTIMATES
Directory of Open Access Journals (Sweden)
ŞERBAN CLAUDIU VALENTIN
2015-03-01
Full Text Available Starting, on the one hand, from the premise that an estimate is an approximate evaluation, together with the fact that the term estimate is increasingly common and used in a variety of both theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, in our case with accounting estimates. Adding, on the other hand, the phrase "estimated value", which implies a value obtained from an evaluation process whose magnitude is not exact but approximate, that is, close to the actual magnitude, the necessity becomes obvious of delimiting the hierarchical relationship between evaluation and estimation while considering the context in which the evaluation activity is carried out at the entity level.
International Nuclear Information System (INIS)
Vetrinskaya, N.I.; Manasbayeva, A.B.
1998-01-01
Water has a particular ecological function and is an indicator of the general state of the biosphere. In this connection, the toxicological evaluation of water by biological testing methods is highly relevant. The peculiarity of biological testing information is that it integrally reflects all the properties of the examined environment as perceived by living objects. Rapid integral evaluation of the anthropogenic situation is the basic aim of biological testing; if this evaluation shows deviations from the normal state, detailed analysis and identification of dangerous components can be conducted later. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus luteus, Candida krusei, Pseudomonas alcaligenes) and the water plant elodea (Elodea canadensis Rich) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are the first to be disrupted under extreme conditions of various kinds. Therefore, the ion permeability of elodea and micro-organism cells kept in the examined water with toxicants was used as the test function. Alteration of membrane permeability was estimated by measuring the electrical conductivity of electrolytes released from the cells of the living objects into distilled water. The index of water toxicity is the ratio of the electrical conductivity in the experiment to that in the control. Observations of the general state of plants incubated in toxic water were also made (the chronic experiment was conducted for 60 days). The plants were incubated in water samples collected from the gallery in 1996 and 1997, with incubation times of 1-10 days. The results of the investigation showed that the ion permeability of elodea and micro-organism cells changed greatly under the influence of the radionuclides contained in the tested water. Changes take place even in
Estimated Blood Loss in Craniotomy
Sitohang, Diana; AM, Rachmawati; Arif, Mansyur
2016-01-01
Introduction: Estimated blood loss is an estimate of how much blood is lost during surgery. A surgical procedure requires a preparation of blood stock, but the demand for blood is often larger than the actual blood used. This predicament happens because there is no blood requirement protocol in use. This study aims to determine the estimated blood loss during craniotomy procedures and its conformity to the blood units ordered for the procedure. Methods: This study is a retrospective study...
Parameter estimation in plasmonic QED
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimate of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that, although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product state. We also find that decreasing the inter-qubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit case has also been analysed in detail; in particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
Methods of statistical model estimation
Hilbe, Joseph
2013-01-01
Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th
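One of the book's core algorithms, iteratively reweighted least squares for a GLM, fits a Poisson log-linear model in a few lines. The book develops this in R; below is an equivalent Python sketch for a single covariate (the closed-form 2×2 weighted least squares solve stands in for a general matrix solver):

```python
import math

def irls_poisson(x, y, n_iter=25):
    """Iteratively reweighted least squares for a Poisson GLM with
    log link, mu = exp(b0 + b1*x).  Each iteration solves a weighted
    least squares problem on the working response; this is the
    Fisher-scoring loop used by standard GLM fitters."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0  # start from the null model
    for _ in range(n_iter):
        w, z = [], []
        for xi, yi in zip(x, y):
            eta = b0 + b1 * xi
            mu = math.exp(eta)
            w.append(mu)                      # working weight
            z.append(eta + (yi - mu) / mu)    # working response
        # closed-form weighted least squares for two coefficients
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swz = sum(wi * zi for wi, zi in zip(w, z))
        swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
        det = sw * swxx - swx * swx
        b0 = (swxx * swz - swx * swxz) / det
        b1 = (sw * swxz - swx * swz) / det
    return b0, b1
```

When the responses sit exactly on exp(1.0 + 0.5·x), the loop converges to those coefficients, since the Poisson score equations are then satisfied exactly.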
Error estimation for pattern recognition
Braga Neto, U
2015-01-01
This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include: * The latest results on the accuracy of error estimation * Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches * Highly interactive computer-based exercises and end-of-chapter problems
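A k-fold cross-validation error estimator, one of the resampling schemes analysed in the book, in sketch form (the deterministic fold assignment and the toy classifier in the test are arbitrary illustrative choices):

```python
def kfold_error(xs, ys, fit, predict, k=5):
    """k-fold cross-validation estimate of classification error:
    hold out each fold in turn, train on the rest, and average the
    held-out error counts over the whole sample.

    fit(train) -> model, where train is a list of (x, y) pairs;
    predict(model, x) -> predicted label.
    """
    n = len(xs)
    fold = [i % k for i in range(n)]  # deterministic fold assignment
    errs = 0
    for f in range(k):
        train = [(x, y) for x, y, g in zip(xs, ys, fold) if g != f]
        held = [(x, y) for x, y, g in zip(xs, ys, fold) if g == f]
        model = fit(train)
        errs += sum(predict(model, x) != y for x, y in held)
    return errs / n
```

On perfectly separable data with a midpoint-threshold classifier, the estimated error is zero; on harder data the estimate approximates the true error with the bias/variance trade-offs the book quantifies.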
Time estimation predicts mathematical intelligence.
Directory of Open Access Journals (Sweden)
Peter Kramer
Full Text Available BACKGROUND: Performing mental subtractions affects time (duration) estimates, and making time estimates disrupts mental subtractions. This interaction has been attributed to the concurrent involvement of time estimation and arithmetic with general intelligence and working memory. Given the extant evidence of a relationship between time and number, here we test the stronger hypothesis that time estimation correlates specifically with mathematical intelligence, and not with general intelligence or working-memory capacity. METHODOLOGY/PRINCIPAL FINDINGS: Participants performed a (prospective) time estimation experiment, completed several subtests of the WAIS intelligence test, and self-rated their mathematical skill. For five different durations, we found that time estimation correlated with both arithmetic ability and self-rated mathematical skill. Controlling for non-mathematical intelligence (including working memory capacity) did not change the results. Conversely, correlations between time estimation and non-mathematical intelligence either were nonsignificant, or disappeared after controlling for mathematical intelligence. CONCLUSIONS/SIGNIFICANCE: We conclude that time estimation specifically predicts mathematical intelligence. On the basis of the relevant literature, we furthermore conclude that the relationship between time estimation and mathematical intelligence is likely due to a common reliance on spatial ability.
Thomas, Hoben
1981-01-01
Psychophysicists neglect to consider how error should be characterized in applications of the power law. Failures of the power law to agree with certain theoretical predictions are examined. A power law with lognormal product structure is proposed and approximately unbiased parameter estimates given for several common estimation situations.…
Density estimation from local structure
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2009-11-01
Full Text Available The authors propose a hyper-ellipsoid clustering algorithm that grows clusters from local structures in a dataset and estimates the underlying geometrical structure of data with a set of hyper-ellipsoids. The clusters are used to estimate a Gaussian...
Optimization of Barron density estimates
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; van der Meulen, E. C.
2001-01-01
Roč. 47, č. 5 (2001), s. 1867-1883 ISSN 0018-9448 R&D Projects: GA ČR GA102/99/1137 Grant - others:Copernicus(XE) 579 Institutional research plan: AV0Z1075907 Keywords : Barron estimator * chi-square criterion * density estimation Subject RIV: BD - Theory of Information Impact factor: 2.077, year: 2001
Estimating Bottleneck Bandwidth using TCP
Allman, Mark
1998-01-01
Various issues associated with estimating bottleneck bandwidth using TCP are presented in viewgraph form. Specific topics include: 1) Why TCP is wanted to estimate the bottleneck bandwidth; 2) Setting ssthresh to an appropriate value to reduce loss; 3) Possible packet-pair solutions; and 4) Preliminary results: ACTS and the Internet.
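The core packet-pair idea is easy to sketch: the bottleneck link spaces back-to-back packets, so size divided by inter-arrival gap estimates its bandwidth. Taking the median over several pairs (a common robustness choice, not necessarily the one in the viewgraphs) filters samples distorted by cross traffic:

```python
def packet_pair_bandwidth(packet_size_bytes, arrival_gaps_s):
    """Packet-pair bottleneck bandwidth estimate in bits per second:
    bandwidth ~= packet_size / inter-arrival gap.  The median gap
    over several pairs gives some robustness against gaps stretched
    (or compressed) by cross traffic."""
    gaps = sorted(arrival_gaps_s)
    mid = len(gaps) // 2
    median = gaps[mid] if len(gaps) % 2 else 0.5 * (gaps[mid - 1] + gaps[mid])
    return 8.0 * packet_size_bytes / median
```

For 1500-byte packets arriving mostly 1.2 ms apart, the estimate is 10 Mbit/s, even with one compressed and one stretched outlier gap in the sample.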
Multisensor estimation: New distributed algorithms
Directory of Open Access Journals (Sweden)
Plataniotis K. N.
1997-01-01
Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver results identical to those generated by their centralized counterparts, are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.
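The claim that local processing can deliver results identical to the centralized solution can be illustrated with a minimal sketch (my own scalar Gaussian example using inverse-variance fusion, not the paper's algorithms):

```python
# Each sensor fuses its own measurements by inverse-variance weighting; fusing
# the local estimates the same way reproduces the centralized estimate exactly.

def inv_var_fuse(values, variances):
    """Inverse-variance weighted fusion of independent Gaussian estimates."""
    w = [1.0 / v for v in variances]
    est = sum(x * wi for x, wi in zip(values, w)) / sum(w)
    var = 1.0 / sum(w)
    return est, var

# Two sensors, each observing the same scalar quantity with known noise.
s1 = ([10.2, 9.8], [0.5, 0.5])
s2 = ([10.6], [1.0])

# Centralized: fuse all raw measurements at once.
central = inv_var_fuse(s1[0] + s2[0], s1[1] + s2[1])

# Distributed: each sensor fuses locally, then the local estimates are fused.
e1, v1 = inv_var_fuse(*s1)
e2, v2 = inv_var_fuse(*s2)
distributed = inv_var_fuse([e1, e2], [v1, v2])

print(central, distributed)  # identical estimate and variance
```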
EFFECTIVE TOOL WEAR ESTIMATION THROUGH ...
African Journals Online (AJOL)
Though a number of researchers have used MLP for fusing the information and estimating the tool status, not enough literature is available on the influence of neural network parameters that may affect the accurate estimation of the tool status. This point has also been emphasized in [4], where the authors stated ...
CONDITIONS FOR EXACT CAVALIERI ESTIMATION
Directory of Open Access Journals (Sweden)
Mónica Tinajero-Bravo
2014-03-01
Full Text Available Exact Cavalieri estimation amounts to zero variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.
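Zero-variance (exact) Cavalieri sampling is easy to demonstrate numerically; the toy example below (mine, not from the paper) uses a smooth periodic integrand for which systematic sampling with n >= 3 points is exact for every random offset:

```python
import math, random

def cavalieri(f, a, b, n, offset):
    """Systematic (Cavalieri) estimate of the integral of f over [a, b]."""
    t = (b - a) / n                      # sampling period
    x0 = a + offset * t                  # random start, offset in [0, 1)
    return t * sum(f(x0 + k * t) for k in range(n))

f = lambda x: math.sin(x) ** 2           # true integral over [0, 2*pi] is pi
estimates = [cavalieri(f, 0.0, 2 * math.pi, 5, random.random())
             for _ in range(10)]
print(estimates)  # every estimate equals pi up to rounding: zero variance
```

The estimator is exact here because all Fourier components of the integrand at nonzero multiples of the sampling frequency vanish, which is the flavor of sufficient condition the paper formalizes.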
Age estimation in competitive sports.
Timme, Maximilian; Steinacker, Jürgen Michael; Schmeling, Andreas
2017-01-01
To maintain the principle of sporting fairness and to protect the health of athletes, it is essential that age limits for youth sporting competitions are complied with. Forensic scientists have developed validated procedures for age estimation in living individuals. Methods have also been published for age estimation in competitive sports. These methods make use of the ossification stage of an epiphyseal plate to draw conclusions about an athlete's age. This article presents published work on the use of magnetic resonance imaging for age estimation in competitive sports. In addition, it looks at the effect on age estimation of factors such as an athlete's socioeconomic status, the use of hormones and anabolic substances as well as chronic overuse of the growth plates. Finally, recommendations on the components required for a valid age estimation procedure in competitive sports are suggested.
Laser cost experience and estimation
International Nuclear Information System (INIS)
Shofner, F.M.; Hoglund, R.L.
1977-01-01
This report addresses the question of estimating the capital and operating costs for LIS (Laser Isotope Separation) lasers, which have performance requirements well beyond the state of mature art. This question is seen with different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice, on a timely basis. Accordingly, this report attempts to provide "ballpark" estimators for capital and operating costs and useful design and operating information for lasers based on mature technology, and their LIS analogs. It is written very basically and is intended to respond about equally to the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the current, mature, industrialized laser track record (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, the evolution of generalized estimating procedures for capital and operating costs for new laser designs.
UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY
Directory of Open Access Journals (Sweden)
Jean-Paul Jernot
2011-05-01
Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IR^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally, an example from quantitative metallography (the specific connectivity of a population of sintered bronze particles) is given.
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
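The probability machines above are built on random forests; as a dependency-free stand-in, the sketch below uses a k-nearest-neighbour estimate, another nonparametric route to conditional probabilities P(Y=1|x) (hypothetical data, illustrative only):

```python
# A "probability machine" in miniature: estimate P(Y=1 | x) directly from
# labelled data without assuming a logistic (or any other) model form.

def knn_probability(train_x, train_y, query, k=3):
    """Nonparametric estimate of P(Y=1 | query) from labelled 1-D data."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - query))
    nearest = [train_y[i] for i in order[:k]]
    return sum(nearest) / k

xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]   # a single observed feature
ys = [0,   0,   0,   1,   1,   1]     # binary outcome
print(knn_probability(xs, ys, 0.15))  # 0.0 -> low estimated risk
print(knn_probability(xs, ys, 0.85))  # 1.0 -> high estimated risk
```

Effect sizes then follow counterfactually, e.g. as the difference in estimated probability when one predictor is changed while the others are held fixed.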
SDR Input Power Estimation Algorithms
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
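The simplest of the three algorithms, the linear straight-line estimator, amounts to an ordinary least-squares fit; the sketch below uses hypothetical calibration numbers at a single temperature (the flight version also folds in the temperature measurement):

```python
# Fit input_power = a * agc + b by ordinary least squares over calibration
# data, then invert a new AGC reading into an input power estimate.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

agc_counts = [10.0, 20.0, 30.0, 40.0]      # digital AGC readings (made up)
power_dbm  = [-90.0, -80.0, -70.0, -60.0]  # known SDR input power (made up)
a, b = fit_line(agc_counts, power_dbm)
estimate = a * 25.0 + b                    # power estimate for a new reading
print(a, b, estimate)  # 1.0 -100.0 -75.0
```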
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit with commonly-used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework to design MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
Likelihood estimators for multivariate extremes
Huser, Raphaël
2015-11-17
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Book review: Mineral resource estimation
Mihalasky, Mark J.
2016-01-01
Mineral Resource Estimation is about estimating mineral resources at the scale of an ore deposit and is not to be mistaken with mineral resource assessment, which is undertaken at a significantly broader scale, even if similar data and geospatial/geostatistical methods are used. The book describes geological, statistical, and geostatistical tools and methodologies used in resource estimation and modeling, and presents case studies for illustration. The target audience is the expert, which includes professional mining geologists and engineers, as well as graduate-level and advanced undergraduate students.
Phase estimation in optical interferometry
Rastogi, Pramod
2014-01-01
Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...
LPS Catch and Effort Estimation
National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomalies...
50th Percentile Rent Estimates
Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...
An Analytical Cost Estimation Procedure
National Research Council Canada - National Science Library
Jayachandran, Toke
1999-01-01
Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...
Estimating Emissions from Railway Traffic
DEFF Research Database (Denmark)
Jørgensen, Morten W.; Sorenson, Spencer C.
1998-01-01
Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emission factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources illustrated...
Random Decrement Based FRF Estimation
DEFF Research Database (Denmark)
Brincker, Rune; Asmussen, J. C.
The problem of estimating frequency response functions and extracting modal parameters is the topic of this paper. A new method based on the Random Decrement technique combined with Fourier transformation is compared with the traditional pure Fourier transformation based approach with regard to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions, which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3-span bridge. By applying both methods to these measurements the estimation time of the frequency response functions can be compared. The modal parameters estimated by the methods are compared. It is expected that the Random Decrement technique is faster than the traditional method based on pure Fourier transformations. This is due to the fact...
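The Random Decrement step itself can be sketched compactly (toy data, not the bridge measurements): segments starting at an upward trigger-level crossing are averaged so that random content cancels, leaving a free-decay-like signature that can then be Fourier transformed:

```python
import math, random

def random_decrement(signal, trigger, seg_len):
    """Average all segments that start at an upward trigger-level crossing."""
    segs = [signal[i:i + seg_len]
            for i in range(1, len(signal) - seg_len)
            if signal[i - 1] < trigger <= signal[i]]
    return [sum(seg[j] for seg in segs) / len(segs) for j in range(seg_len)]

random.seed(1)
# Toy response: a sinusoid buried in noise, standing in for measured data.
y = [math.sin(0.2 * k) + 0.4 * random.uniform(-1, 1) for k in range(5000)]
sig = random_decrement(y, trigger=0.5, seg_len=64)
print(sig[:4])  # starts at or above the trigger level, noise averaged down
```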
Travel time estimation using Bluetooth.
2015-06-01
The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the ...
Estimating emissions from railway traffic
Energy Technology Data Exchange (ETDEWEB)
Joergensen, M.W.; Sorenson, C.
1997-07-01
The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
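A back-of-the-envelope version of the energy-based approach might look as follows (illustrative resistance coefficients and emission factor, not the report's values):

```python
# Estimate traction energy from a Davis-type running resistance curve, then
# multiply by a per-kWh emission factor for the generation mix.

def traction_energy_kwh(speed_kmh, distance_km, a=0.6, b=0.02, c=0.0003):
    """Energy [kWh] to overcome running resistance R = a + b*v + c*v^2 [kN]."""
    v = speed_kmh
    resistance_kn = a + b * v + c * v * v     # total train resistance in kN
    return resistance_kn * distance_km / 3.6  # 1 kN*km = 1 MJ; 3.6 MJ = 1 kWh

def co2_kg(energy_kwh, grid_factor_kg_per_kwh=0.5):
    """Electric traction: emissions follow the national generation mix."""
    return energy_kwh * grid_factor_kg_per_kwh

e = traction_energy_kwh(speed_kmh=100, distance_km=100)
print(e, co2_kg(e))  # ~155.6 kWh and ~77.8 kg CO2 for this toy train
```

For a diesel train the same energy figure would instead be multiplied by the engine's specific fuel consumption and per-litre emission factors, which is what makes the electric/diesel comparison possible.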
The Psychology of Cost Estimating
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases to our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.
Estimating uncertainty in resolution tests
CSIR Research Space (South Africa)
Goncalves, DP
2006-05-01
Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.
Estimating COCOM Natural Background Dormancy
2015-04-01
ERDC/CRREL TR-15-7, April 2015. Phase IV Army Camouflage Development Effort: Estimating COCOM Natural Background Dormancy. Alexis L. Coplin and Charles C. Ryerson, Cold Regions Research and Engineering Laboratory. ... phenological stage, controls color and texture of natural vegetation as it cycles through greenup, verdancy, senescence, and dormancy. For the Army
Estimates of Green potentials. Applications
International Nuclear Information System (INIS)
Danchenko, V I
2003-01-01
Optimal Cartan-type covers by hyperbolic discs of carriers of Green α-potentials are obtained in a simply connected domain in the complex plane and estimates of the potentials outside the carriers are presented. These results are applied to problems on the separation of singularities of analytic and harmonic functions. For instance, uniform and integral estimates in terms of Green capacities of components of meromorphic functions are obtained
Hyperparameter estimation in image restoration
Energy Technology Data Exchange (ETDEWEB)
Kiwata, Hirohito [Division of Natural Science, Osaka Kyoiku University, Kashiwara, Osaka 582-8582 (Japan)], E-mail: kiwata@cc.osaka-kyoiku.ac.jp
2008-08-22
The hyperparameter in image restoration by the Bayes formula is an important quantity. This communication shows a physical method for the estimation of the hyperparameter without approximation. For artificially generated images by prior probability, the hyperparameter is computed accurately. For practical images, accuracy of the estimated hyperparameter depends on the magnetization and energy of the images. We discuss the validity of prior probability for an original image. (fast track communication)
Learning the MMSE Channel Estimator
Neumann, David; Wiese, Thomas; Utschick, Wolfgang
2017-01-01
We present a method for estimating conditionally Gaussian random vectors with random covariance matrices, which uses techniques from the field of machine learning. Such models are typical in communication systems, where the covariance matrix of the channel vector depends on random parameters, e.g., angles of propagation paths. If the covariance matrices exhibit certain Toeplitz and shift-invariance structures, the complexity of the MMSE channel estimator can be reduced to O(M log M) floating ...
Using convolutional neural networks to estimate time-of-flight from PET detector waveforms
Berg, Eric; Cherry, Simon R.
2018-01-01
Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS/s using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional
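The leading edge discrimination baseline that the CNN is compared against is straightforward to sketch (toy waveform and threshold, not the study's data):

```python
# Leading edge discrimination: the timing pick-off is the interpolated
# instant at which the pulse first crosses a fixed threshold.

def leading_edge_time(samples, threshold, dt):
    """Return crossing time, linearly interpolated between samples."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    raise ValueError("pulse never crosses threshold")

pulse = [0.0, 0.1, 0.4, 0.9, 1.0, 0.8, 0.5]  # digitized pulse (arb. units)
print(leading_edge_time(pulse, 0.5, dt=0.1)) # crossing at ~0.22 time units
```

Constant fraction discrimination replaces the fixed threshold with a fraction of the pulse amplitude, which removes the amplitude dependence of the pick-off; the CNN instead consumes both raw waveforms at once.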
Parameter estimation in food science.
Dolan, Kirk D; Mishra, Dharmendra K
2013-01-01
Modeling includes two distinct parts, the forward problem and the inverse problem. The forward problem (computing y(t) given known parameters) has received much attention, especially with the explosion of commercial simulation software. What is rarely made clear is that the forward results can be no better than the accuracy of the parameters. Therefore, the inverse problem (estimation of parameters given measured y(t)) is at least as important as the forward problem. However, in the food science literature there has been little attention paid to the accuracy of parameters. The purpose of this article is to summarize the state of the art of parameter estimation in food science, to review some of the common food science models used for parameter estimation (for microbial inactivation and growth, thermal properties, and kinetics), and to suggest a generic method to standardize parameter estimation, thereby making research results more useful. Scaled sensitivity coefficients are introduced and shown to be important in parameter identifiability. Sequential estimation and optimal experimental design are also reviewed as powerful parameter estimation methods that are beginning to be used in the food science literature.
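Scaled sensitivity coefficients can be illustrated with a first-order inactivation model N(t) = N0·exp(-k·t) (my example, not one of the article's case studies); the coefficient k·∂N/∂k has the units of N itself, so the sensitivities of different parameters can be compared on one plot:

```python
import math

def model(t, n0, k):
    """First-order inactivation: survivors after time t."""
    return n0 * math.exp(-k * t)

def scaled_sensitivity(t, n0, k, rel_step=1e-6):
    """k * dN/dk by central finite differences; same units as N."""
    h = k * rel_step
    return k * (model(t, n0, k + h) - model(t, n0, k - h)) / (2 * h)

# The sensitivity is zero at t = 0 and grows (in magnitude) with time,
# telling us late observations carry the information about k.
for t in [0.0, 1.0, 2.0, 4.0]:
    print(t, scaled_sensitivity(t, n0=1e6, k=0.5))
```

Parameters whose scaled sensitivity curves are large and not proportional to one another are identifiable from the data; near-proportional curves signal that the parameters cannot be estimated independently.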
Deriving confidence in paleointensity estimates
Paterson, Greig A.; Heslop, David; Muxworthy, Adrian R.
2010-07-01
Determining the strength of the ancient geomagnetic field (paleointensity) can be time consuming and can result in high data rejection rates. The current paleointensity database is therefore dominated by studies that contain only a small number of paleomagnetic samples (n). It is desirable to estimate how many samples are required to obtain a reliable estimate of the true paleointensity and the uncertainty associated with that estimate. Assuming that real paleointensity data are normally distributed, an assumption adopted by most workers when they employ the arithmetic mean and standard deviation to characterize their data, we can use distribution theory to address this question. Our calculations indicate that if we wish to have 95% confidence that an estimated mean falls within a ±10% interval about the true mean, as many as 24 paleomagnetic samples are required. This is an unfeasibly high number for typical paleointensity studies. Given that most paleointensity studies have small n, this requires that we have adequately defined confidence intervals around estimated means. We demonstrate that the estimated standard deviation is a poor method for defining confidence intervals when n is small; to maintain confidence levels, within-site consistency criteria must depend on n. Defining such a criterion using the 95% confidence level results in the rejection of ~56% of all currently available paleointensity data entries.
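The sample-size figure quoted above can be sanity-checked with a normal approximation (my calculation, assuming between-sample scatter of roughly 25% of the mean):

```python
import math
from statistics import NormalDist

# Number of samples n so the sample mean lies within +/-10% of the true mean
# with 95% confidence, under a normal model: n >= (z * cv / half_width)^2.

def samples_needed(cv, half_width_frac, confidence=0.95):
    """cv is the coefficient of variation (std dev / mean) of the data."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return math.ceil((z * cv / half_width_frac) ** 2)

print(samples_needed(cv=0.25, half_width_frac=0.10))  # 25, close to the ~24 above
```

A rigorous treatment uses the t distribution (since the standard deviation is itself estimated), which only raises the required n further; the normal approximation is the optimistic floor.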
Weldon Spring historical dose estimate
Energy Technology Data Exchange (ETDEWEB)
Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.
1986-07-01
This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr.
Grazing capacity estimates: why include biomass estimates from ...
African Journals Online (AJOL)
Forage for ruminants in the dry season was assessed and matched with feed requirements in three villages in Zimbabwe, namely Chiweshe, Makande and Mudzimu. Stocking rates were compared with grazing capacity to determine grazing intensities. Grazing capacities were estimated with and without crop residues to ...
Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach
Stevenson, Glenn A.
2012-01-01
For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…
Estimation and valuation in accounting
Directory of Open Access Journals (Sweden)
Cicilia Ionescu
2014-03-01
Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of accounting information, providing, among other things, guidance on accounting estimates and changes in accounting estimates. Valuation is a continually used process whose purpose is to assign values to the elements to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in contracts with third parties, in supporting documents, etc. However, the uncertainties in which a reporting entity operates mean that, sometimes, the values assigned or attributable to some items composing the financial statements must be determined by the use of estimates.
Integral Criticality Estimators in MCATK
Energy Technology Data Exchange (ETDEWEB)
Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory
2016-06-14
The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK's usefulness in estimating these important quantities.
Moving Horizon Estimation and Control
DEFF Research Database (Denmark)
Jørgensen, John Bagterp
…a successful and applied methodology beyond PID control for the control of industrial processes. The main contribution of this thesis is the introduction and definition of the extended linear quadratic optimal control problem for the solution of numerical problems arising in moving horizon estimation and control problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. … It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discusses prediction error methods for identification of linear models tailored for model predictive control.
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co…
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
unrecorded alcohol; methods of estimation. In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimate of that level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.
Estimation of potential uranium resources
International Nuclear Information System (INIS)
Curry, D.L.
1977-09-01
Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates
ATR Performance Estimation Seed Program
2015-09-28
…to produce simulated MCM sonar data and demonstrate the impact of system, environmental, and target scattering effects on ATR detection… settings and achieving a better understanding of the relative impact of the factors influencing ATR performance. Keywords: sonar, mine countermeasures, MCM, automatic target recognition (ATR)
Depth estimation via stage classification
Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.
2008-01-01
We identify scene categorization as the first step towards efficient and robust depth estimation from single images. Categorizing the scene into one of the geometric classes greatly reduces the possibilities in subsequent phases. To that end, we introduce 15 typical 3D scene geometries, called
Equating accelerometer estimates among youth
DEFF Research Database (Denmark)
Brazendale, Keith; Beets, Michael W; Bornstein, Daniel B
2016-01-01
OBJECTIVES: Different accelerometer cutpoints used by different researchers often yield vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across...
An Improved Cluster Richness Estimator
Energy Technology Data Exchange (ETDEWEB)
Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.
2009-08-03
Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched-filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_{ln L_X} = (0.86 ± 0.02)² to σ²_{ln L_X} = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
Estimating Swedish biomass energy supply
International Nuclear Information System (INIS)
Johansson, J.; Lundqvist, U.
1999-01-01
Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected to be not needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase its competitiveness. Economic instruments, including carbon taxes and subsidies, and allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy.
Reliable Function Approximation and Estimation
2016-08-16
…compressive sensing with norm estimation. K. Knudson, R. Saab, and R. Ward. IEEE Transactions on Information Theory 62 (5), 2016, 2748-2758. (O2) An arithmetic …
Estimation of Motion Vector Fields
DEFF Research Database (Denmark)
Larsen, Rasmus
1993-01-01
This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...
Correlation Dimension Estimation for Classification
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2006-01-01
Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass-efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra-wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity-based planetary exploration robotic prototype. In particular, we conduct tests evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers and its local behavior due to small-amplitude deformations induced by cable actuation.
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines these two velocity resolutions, acquired from numerical differentiation and integration of the position and acceleration sensors respectively, by feeding a fixed moving-horizon window into the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. With this method, the numerical velocity is estimated without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. This method allows us to make velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time velocity integration to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
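As an illustration of the complementary idea in the abstract above, here is a minimal first-order complementary velocity filter in Python. It omits the wavelet filter banks that are the paper's actual contribution, and the signal, gain, and time step are illustrative assumptions:

```python
import numpy as np

def complementary_velocity(pos, acc, dt, alpha=0.98):
    """Fuse velocity from differentiated position (drift-free but noisy)
    with velocity from integrated acceleration (smooth but drifting).
    alpha close to 1 trusts the acceleration integral between corrections."""
    v = np.zeros(len(pos))
    v_diff = np.gradient(pos, dt)          # numerical differentiation
    for k in range(1, len(pos)):
        v[k] = alpha * (v[k - 1] + acc[k] * dt) + (1 - alpha) * v_diff[k]
    return v

# synthetic shaking-table-like motion with known derivatives
dt = 0.001
t = np.arange(0.0, 2.0, dt)
pos = np.sin(2 * np.pi * t)
acc = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)
v_true = 2 * np.pi * np.cos(2 * np.pi * t)
v_est = complementary_velocity(pos, acc, dt)
```

After the initial transient decays, the fused estimate tracks the true velocity without the drift a pure integrator would accumulate.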
Estimating pure component vapor pressures
Energy Technology Data Exchange (ETDEWEB)
Myrdal, P.B.; Yalkowsky, S.H. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Pharmaceutical Sciences
1994-12-31
The hazard of exposure to volatile organic compounds is an increasing concern for all chemical industries. Recent EPA and FDA guidelines require the environmental assessment of new chemical entities, which includes the determination or estimation of vapor pressure. This work presents a means for the reliable estimation of pure-component vapor pressures. The method presented is an extension of the work proposed by Mishra and Yalkowsky. New equations are presented for the heat capacity change upon boiling and the entropy of boiling. The final vapor pressure equation requires only the knowledge of transition temperatures and molecular structure. The equation developed has been successfully applied to 296 organic compounds, giving an overall average absolute error of 0.12 log units (in atm at 25 C). In addition, the equation is shown to be very accurate in predicting vapor pressure as a function of temperature. However, for many compounds of environmental interest, the boiling point is not known or cannot be determined due to decomposition. In light of this, a new technique has been developed which can be used to estimate "hypothetical" boiling points. This enables the estimation of vapor pressure from as little as one transition temperature and molecular structure.
Nonparametric estimation of ultrasound pulses
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Leeman, Sidney
1994-01-01
An algorithm for nonparametric estimation of 1D ultrasound pulses in echo sequences from human tissues is derived. The technique is a variation of the homomorphic filtering technique using the real cepstrum, and the underlying basis of the method is explained. The algorithm exploits a priori...
Multispacecraft current estimates at swarm
DEFF Research Database (Denmark)
Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.
2015-01-01
During the first several months of the three-spacecraft Swarm mission, all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...
Runtime Verification with State Estimation
Stoller, Scott D.; Bartocci, Ezio; Seyster, Justin; Grosu, Radu; Havelund, Klaus; Smolka, Scott A.; Zadok, Erez
2011-01-01
We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to "fill in" sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.
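The classic forward algorithm that the paper extends computes the probability of an observation sequence by marginalizing over hidden state sequences. A minimal sketch follows; the two-state toy model is an illustrative assumption, not the rover HMM:

```python
import numpy as np

def forward(pi, A, B, obs):
    """HMM forward algorithm: P(obs) under initial distribution pi,
    transition matrix A, and emission matrix B (states x symbols)."""
    alpha = pi * B[:, obs[0]]          # joint prob. of state and first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return float(alpha.sum())

# two-state, two-symbol toy model
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p = forward(pi, A, B, [0, 1, 0])       # probability of observing 0, 1, 0
```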
Fuel Estimation Using Dynamic Response
National Research Council Canada - National Science Library
Hines, Michael S
2007-01-01
...'s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...
Collider Scaling and Cost Estimation
International Nuclear Information System (INIS)
Palmer, R.B.
1986-01-01
This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs
Illuminance Flow Estimation by Regression
Karlsson, S.M.; Pont, S.C.; Koenderink, J.J.; Zisserman, A.
2010-01-01
We investigate the estimation of illuminance flow using Histograms of Oriented Gradient features (HOGs). In a regression setting, we found for both ridge regression and support vector machines, that the optimal solution shows close resemblance to the gradient based structure tensor (also known as
Estimation of Bridge Reliability Distributions
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper...
Fast computation of distance estimators
Directory of Open Access Journals (Sweden)
Lagergren Jens
2007-03-01
Full Text Available Abstract Background Some distance methods are among the most commonly used methods for reconstructing phylogenetic trees from sequence data. The input to a distance method is a distance matrix, containing estimated pairwise distances between all pairs of taxa. Distance methods themselves are often fast, e.g., the famous and popular Neighbor Joining (NJ) algorithm reconstructs a phylogeny of n taxa in time O(n³). Unfortunately, the fastest practical algorithms known for computing the distance matrix, from n sequences of length l, take time proportional to l·n². Since the sequence length typically is much larger than the number of taxa, the distance estimation is the bottleneck in phylogeny reconstruction. This bottleneck is especially apparent in reconstruction of large phylogenies or in applications where many trees have to be reconstructed, e.g., bootstrapping and genome-wide applications. Results We give an advanced algorithm for computing the number of mutational events between DNA sequences which is significantly faster than both Phylip and Paup. Moreover, we give a new method for estimating pairwise distances between sequences which contain ambiguity symbols. This new method is shown to be more accurate as well as faster than earlier methods. Conclusion Our novel algorithm for computing distance estimators provides a valuable tool in phylogeny reconstruction. Since the running time of our distance estimation algorithm is comparable to that of most distance methods, the previous bottleneck is removed. All distance methods, such as NJ, require a distance matrix as input and, hence, our novel algorithm significantly improves the overall running time of all distance methods. In particular, we show for real-world biological applications how the running time of phylogeny reconstruction using NJ is improved from a matter of hours to a matter of seconds.
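For context, the pairwise distances in such a matrix are typically model-corrected mismatch fractions. Below is a minimal Jukes-Cantor estimator, the standard textbook formula rather than the paper's accelerated algorithm (which also handles ambiguity symbols); the sequences are illustrative:

```python
import math

def jukes_cantor(seq_a, seq_b):
    """Jukes-Cantor distance: d = -(3/4) ln(1 - 4p/3), where p is the
    fraction of mismatched sites between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    p = sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# one mismatch in ten aligned sites
d = jukes_cantor("ACGTACGTAC", "ACGTACGAAC")
```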
Techniques for estimating allometric equations.
Manaster, B J; Manaster, S
1975-11-01
Morphologists have long been aware that differential size relationships of variables can be of great value when studying shape. Allometric patterns have been the basis of many interpretations of adaptations, biomechanisms, and taxonomies. It is important that the parameters of the allometric equation be estimated as accurately as possible, since they are so commonly used in such interpretations. Since the error term may enter the allometric relation either exponentially or additively, there are at least two methods of estimating the parameters of the allometric equation. The most commonly used assumes exponentiality of the error term, and operates by forming a linear function through a logarithmic transformation and then solving by the method of ordinary least squares. On the other hand, if the error term enters the equation in an additive way, a nonlinear method may be used, searching the parameter space for those parameters which minimize the sum of squared residuals. A study of data on body weight and metabolism in birds explores the issues involved in discriminating between the two models by working through a specific example, and shows that these two methods of estimation can yield highly different results. Not only the minimized sum of squared residuals, but also the distribution and randomness of the residuals must be considered in determining which model more precisely estimates the parameters. In general there is no a priori way to tell which model will be best. Given the importance often attached to the parameter estimates, it may be well worth considerable effort to find which method of solution is appropriate for a given set of data.
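The two fitting routes contrasted above can be sketched as follows. The data are synthetic, and the additive-error fit uses SciPy's generic nonlinear least squares rather than the authors' own search procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_loglog(x, y):
    """OLS on log-transformed data: appropriate if the error enters
    the allometric relation y = a * x**b multiplicatively."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

def fit_additive(x, y):
    """Nonlinear least squares on the raw data: appropriate if the
    error enters additively."""
    popt, _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=(1.0, 1.0))
    return tuple(popt)

# noise-free allometric data y = 2 * x**0.75; with real, noisy data the
# two estimators can diverge, which is the point of the study
x = np.linspace(1.0, 10.0, 30)
y = 2.0 * x**0.75
a1, b1 = fit_loglog(x, y)
a2, b2 = fit_additive(x, y)
```

On clean data both recover a = 2, b = 0.75; the choice between them matters only once the error structure does.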
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with
Directory of Open Access Journals (Sweden)
Yoon Jungwon
2012-08-01
Full Text Available Abstract Background Virtual reality (VR technology along with treadmill training (TT can effectively provide goal-oriented practice and promote improved motor learning in patients with neurological disorders. Moreover, the VR + TT scheme may enhance cognitive engagement for more effective gait rehabilitation and greater transfer to over ground walking. For this purpose, we developed an individualized treadmill controller with a novel speed estimation scheme using swing foot velocity, which can enable user-driven treadmill walking (UDW to more closely simulate over ground walking (OGW during treadmill training. OGW involves a cyclic acceleration-deceleration profile of pelvic velocity that contrasts with typical treadmill-driven walking (TDW, which constrains a person to walk at a preset constant speed. In this study, we investigated the effects of the proposed speed adaptation controller by analyzing the gait kinematics of UDW and TDW, which were compared to those of OGW at three pre-determined velocities. Methods Ten healthy subjects were asked to walk in each mode (TDW, UDW, and OGW at three pre-determined speeds (0.5 m/s, 1.0 m/s, and 1.5 m/s with real time feedback provided through visual displays. Temporal-spatial gait data and 3D pelvic kinematics were analyzed and comparisons were made between UDW on a treadmill, TDW, and OGW. Results The observed step length, cadence, and walk ratio defined as the ratio of stride length to cadence were not significantly different between UDW and TDW. Additionally, the average magnitude of pelvic acceleration peak values along the anterior-posterior direction for each step and the associated standard deviations (variability were not significantly different between the two modalities. The differences between OGW and UDW and TDW were mainly in swing time and cadence, as have been reported previously. Also, step lengths between OGW and TDW were different for 0.5 m/s and 1.5 m/s gait velocities
Comments on mutagenesis risk estimation
International Nuclear Information System (INIS)
Russell, W.L.
1976-01-01
Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Estimating Foreign Exchange Reserve Adequacy
Directory of Open Access Journals (Sweden)
Abdul Hakim
2013-04-01
Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for imports, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis periods. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that the short- and long-run volatilities are evident, with additional evidence of asymmetric effects of negative and positive past shocks. The VaR values, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
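The VaR step reduces to reading a lower quantile off the assumed return distribution. A minimal sketch under normality follows; in the paper the volatility input would be the GJR conditional-volatility forecast, which is not reproduced here, and the numbers below are illustrative:

```python
from scipy.stats import norm

def gaussian_var(mu, sigma, alpha=0.05):
    """Lower alpha-quantile of a normal distribution: the level that the
    variable falls below with probability alpha."""
    return mu + norm.ppf(alpha) * sigma

# 5% VaR for a variable with mean 0.2 and standard deviation 0.1
var_5 = gaussian_var(0.2, 0.1)
```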
Organ volume estimation using SPECT
Zaidi, H
1996-01-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are performed...
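The fixed-threshold voxel-summation step can be sketched as follows on a synthetic phantom. The attenuation and scatter corrections discussed in the abstract are omitted, and the phantom geometry and voxel size are assumptions:

```python
import numpy as np

def volume_by_threshold(image, voxel_volume_ml, threshold_fraction=0.5):
    """Estimate object volume by counting voxels whose intensity exceeds
    a fixed fraction of the image maximum, then scaling by voxel volume."""
    mask = image >= threshold_fraction * image.max()
    return float(mask.sum()) * voxel_volume_ml

# synthetic phantom: a uniform 10x10x10-voxel 'organ' in a cold background
img = np.zeros((32, 32, 32))
img[10:20, 10:20, 10:20] = 100.0                        # 1000 hot voxels
vol_ml = volume_by_threshold(img, voxel_volume_ml=0.02)  # 0.02 ml per voxel
```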
Estimation of unitary quantum operations
International Nuclear Information System (INIS)
Ballester, Manuel A.
2004-01-01
The problem of optimally estimating an unknown unitary quantum operation with the aid of entanglement is addressed. The idea is to prepare an entangled pair, apply the unknown unitary to one of the two parts, and then measure the joint output state. This measurement could be an entangled one or it could be separable (e.g., measurements which can be implemented with local operations and classical communication, or LOCC). A comparison is made between these possibilities, and it is shown that by using nonseparable measurements one can improve the accuracy of the estimation by a factor of 2(d+1)/d, where d is the dimension of the Hilbert space on which U acts.
Cost Estimates and Investment Decisions
International Nuclear Information System (INIS)
Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter
2001-08-01
When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
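The article's central point can be made concrete with a right-skewed cost distribution. Assuming a lognormal CAPEX (an illustrative choice, not the authors' data), the median sits below the expected value, so valuing the project with the median overstates NPV:

```python
import math

# Lognormal CAPEX: median = exp(mu), expected value = exp(mu + s**2 / 2) > median.
mu, s = math.log(100.0), 0.5
median_capex = math.exp(mu)                 # 100.0 (the reported "50/50" figure)
expected_capex = math.exp(mu + s * s / 2)   # about 113.3

revenue_pv = 110.0                          # assumed PV of expected cash inflows
npv_with_median = revenue_pv - median_capex   # positive: project looks acceptable
npv_with_mean = revenue_pv - expected_capex   # negative: true expected NPV
```

With these illustrative numbers the project would be accepted on the median CAPEX yet has a negative expected NPV, which is exactly the valuation error the article warns about.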
Prior information in structure estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka
2003-01-01
Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf
Multiset Estimates and Combinatorial Synthesis
Levin, Mark Sh.
2012-01-01
The paper addresses an approach to ordinal assessment of alternatives based on assignment of elements into an ordinal scale. Basic versions of the assessment problems are formulated while taking into account the number of levels at a basic ordinal scale [1,2,...,l] and the number of assigned elements (e.g., 1,2,3). The obtained estimates are multisets (or bags) (cardinality of the multiset equals a constant). Scale-posets for the examined assessment problems are presented. 'Interval multiset ...
Invariant Bayesian estimation on manifolds
Jermyn, Ian H.
2005-01-01
A frequent and well-founded criticism of the maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimates of a continuous parameter \gamma taking values in a differentiable manifold \Gamma is that they are not invariant to arbitrary "reparameterizations" of \Gamma. This paper clarifies the issues surrounding this problem, by pointing out the difference between coordinate invariance, which is a sine qua non for a mathematically well-defined problem, and diffeomorphism invarianc...
On Bayesian Estimation in Manifolds
Jermyn, Ian
2002-01-01
It is frequently stated that the maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimates of a continuous parameter are not invariant to arbitrary "reparametrizations" of the parameter space. This report clarifies the issues surrounding this problem, by pointing out the difference between coordinate invariance, which is a sine qua non for a mathematically well-defined problem, and diffeomorphism invariance, which is a substantial issue, and provides a solution. We first sho...
Position Estimation Using Image Derivative
Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato
2015-01-01
This paper describes an image processing algorithm to process Moon and/or Earth images. The theory presented is based on the fact that Moon hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. Moon center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing has been applied and validated using real and synthetic Moon images.
Architecture-Centric Project Estimation
Henry, Troy Steven
2007-01-01
In recent years, studies have been conducted which suggest that taking an architecture-first approach to managing large software projects can remove a significant amount of the uncertainty present in project estimates. As the project progresses, more concrete information is known about the planned system and less risk is present. However, the rate at which risk is alleviated varies across the life-cycle. Research suggests that there exists a significant drop-off in risk...
Toward Realistic Acquisition Schedule Estimates
2016-04-30
through, for example, Program Evaluation and Review Technique, Critical Path Method and Gantt Charts (Blanchard & Fabrycky, 2006, esp. Chap. 11...million lines in the F-15E. Lines of code in the F-35 vary with source and date. A 2014 CRS report estimates F- 35 software as containing...set of tasks (e.g., PERT, Gantt Charts) 4 SCHEDULE EST. RELATIONSHIPS Some Candidate Explanatory Variables • Risk Reduction Efforts • Contract
Reactor core performance estimating device
International Nuclear Information System (INIS)
Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.
1995-01-01
The present invention can autonomously simplify a neural net model, making it possible to estimate conveniently, by a simple calculation and in a short period of time, the various quantities that represent reactor core performance. Namely, a reactor core performance estimation device comprises a neural network which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals to the input neurons, and outputs estimates of each quantity representing reactor core performance as output signals of the output neurons. In this case, the neural network (1) has the structure of an extended multi-layered model with direct coupling from an upstream layer to each of the downstream layers, (2) has a forgetting constant q in the update equation for a connection weight ω using the back-propagation (inverse error propagation) method, (3) learns, as teacher signals, the various quantities representing reactor core performance determined using physical models, (4) sets the connection weight ω to 0 when it decreases below a predetermined value during the learning described above, and (5) eliminates elements of the neural network whose connection weights have all decreased to 0. As a result, the neural net model comprises an autonomously simplifying means. (I.S.)
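Items (2), (4) and (5) above can be sketched as a weight update with a forgetting (decay) constant followed by pruning of negligible weights. The function name, constants, and data are illustrative assumptions, not the patent's formulation.

```python
def train_step_with_forgetting(weights, grads, lr=0.1, q=0.01, prune_below=1e-3):
    """Back-propagation-style update with a forgetting constant q that decays
    each weight toward zero; weights whose magnitude falls below a threshold
    are set to 0, so units with all-zero weights can later be removed."""
    new_w = []
    for w, g in zip(weights, grads):
        w = (1 - q) * w - lr * g      # decayed gradient step
        if abs(w) < prune_below:
            w = 0.0                   # prune a negligible connection
        new_w.append(w)
    return new_w

w = train_step_with_forgetting([0.5, 1e-4, -0.3], [0.1, 0.0, -0.05])
# the tiny middle weight decays below the threshold and is pruned to 0.0
```

Repeating such steps drives unused connections to exactly zero, which is what lets the network simplify itself autonomously.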
Xu, Ru-Gang; Koga, Dennis (Technical Monitor)
2001-01-01
The goal of 'Estimate' is to take advantage of attitude information to produce better pose while staying flexible and robust. Currently there are several instruments that are used for attitude: gyros, inclinometers, and compasses. However, precise and useful attitude information cannot come from one instrument. Integration of rotational rates, from gyro data for example, would result in drift. Therefore, although gyros are accurate in the short-term, accuracy in the long term is unlikely. Using absolute instruments such as compasses and inclinometers can result in an accurate measurement of attitude in the long term. However, in the short term, the physical nature of compasses and inclinometers, and the dynamic nature of a mobile platform result in highly volatile and therefore useless data. The solution then is to use both absolute and relative data. Kalman Filtering is known to be able to combine gyro and compass/inclinometer data to produce stable and accurate attitude information. Since the model of motion is linear and the data comes in as discrete samples, a Discrete Kalman Filter was selected as the core of the new estimator. Therefore, 'Estimate' can be divided into two parts: the Discrete Kalman Filter and the code framework.
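The fusion described above can be sketched as a one-dimensional discrete Kalman filter for a single attitude angle: predict with the integrated gyro rate (accurate short-term, drifts long-term), correct with the absolute inclinometer reading (noisy short-term, stable long-term). The noise variances and data are invented for illustration.

```python
def kalman_attitude(gyro_rates, incl_measurements, dt, q=1e-4, r=0.05):
    """1-D discrete Kalman filter fusing relative (gyro) and absolute
    (inclinometer/compass) attitude information. q and r are illustrative
    process and measurement noise variances."""
    angle, p = incl_measurements[0], 1.0
    estimates = []
    for rate, z in zip(gyro_rates, incl_measurements):
        angle += rate * dt          # predict by integrating the gyro rate
        p += q
        k = p / (p + r)             # Kalman gain
        angle += k * (z - angle)    # correct with the absolute measurement
        p *= (1 - k)
        estimates.append(angle)
    return estimates

# Constant true angle of 1.0 rad; gyro with a small bias, steady inclinometer.
est = kalman_attitude([0.01] * 50, [1.0] * 50, dt=0.1)
```

Despite the gyro bias that would make pure integration drift without bound, the absolute measurements keep the fused estimate near the true angle.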
Surface Estimation for Microwave Imaging
Directory of Open Access Journals (Sweden)
Douglas Kurrant
2017-07-01
Full Text Available Biomedical imaging and sensing applications in many scenarios demand accurate surface estimation from a sparse set of noisy measurements. These measurements may arise from a variety of sensing modalities, including laser or electromagnetic samples of an object’s surface. We describe a state-of-the-art microwave imaging prototype that has sensors to acquire both microwave and laser measurements. The approach developed to translate sparse samples of the breast surface into an accurate estimate of the region of interest is detailed. To evaluate the efficacy of the method, laser and electromagnetic samples are acquired by sensors from three realistic breast models with varying sizes and shapes. A set of metrics is developed to assist with the investigation and demonstrate that the algorithm is able to accurately estimate the shape of a realistic breast phantom when only a sparse set of data are available. Moreover, the algorithm is robust to the presence of measurement noise, and is effective when applied to measurement scans of patients acquired with the prototype.
Surface Estimation for Microwave Imaging.
Kurrant, Douglas; Bourqui, Jeremie; Fear, Elise
2017-07-19
Biomedical imaging and sensing applications in many scenarios demand accurate surface estimation from a sparse set of noisy measurements. These measurements may arise from a variety of sensing modalities, including laser or electromagnetic samples of an object's surface. We describe a state-of-the-art microwave imaging prototype that has sensors to acquire both microwave and laser measurements. The approach developed to translate sparse samples of the breast surface into an accurate estimate of the region of interest is detailed. To evaluate the efficacy of the method, laser and electromagnetic samples are acquired by sensors from three realistic breast models with varying sizes and shapes. A set of metrics is developed to assist with the investigation and demonstrate that the algorithm is able to accurately estimate the shape of a realistic breast phantom when only a sparse set of data are available. Moreover, the algorithm is robust to the presence of measurement noise, and is effective when applied to measurement scans of patients acquired with the prototype.
Estimating the coherence of noise
Wallman, Joel
To harness the advantages of quantum information processing, quantum systems have to be controlled to within some maximum threshold error. Certifying whether the error is below the threshold is possible by performing full quantum process tomography, however, quantum process tomography is inefficient in the number of qubits and is sensitive to state-preparation and measurement errors (SPAM). Randomized benchmarking has been developed as an efficient method for estimating the average infidelity of noise to the identity. However, the worst-case error, as quantified by the diamond distance from the identity, can be more relevant to determining whether an experimental implementation is at the threshold for fault-tolerant quantum computation. The best possible bound on the worst-case error (without further assumptions on the noise) scales as the square root of the infidelity and can be orders of magnitude greater than the reported average error. We define a new quantification of the coherence of a general noise channel, the unitarity, and show that it can be estimated using an efficient protocol that is robust to SPAM. Furthermore, we also show how the unitarity can be used with the infidelity obtained from randomized benchmarking to obtain improved estimates of the diamond distance and to efficiently determine whether experimental noise is close to stochastic Pauli noise.
Palila abundance estimates and trend
Camp, Richard; Banko, Paul C.
2012-01-01
The Palila (Loxioides bailleui) is an endangered, seed-eating, finch-billed honeycreeper found only on Hawai`i Island. Once occurring on the islands of Kaua`i and O`ahu and Mauna Loa and Hualālai volcanoes of Hawai`i, Palila are now found only in subalpine, dry-forest habitats on Mauna Kea (Banko et al. 2002). Previous analyses showed that Palila numbers fluctuated throughout the 1980s and 1990s but declined rapidly and steadily since 2003 (Jacobi et al. 1996, Leonard et al. 2008, Banko et al. 2009, Gorresen et al. 2009, Banko et al. in press). The aim of this report is to update abundance estimates for the Palila based on the 2012 surveys. We assess Palila trends over two periods: 1) the long-term trend during 1998–2012 and 2) the short-term trajectory between 2003 and 2012. The first period evaluates the population trend for the entire time series since additional transects were established (Johnson et al. 2006). These additional transects were established to produce a more precise population estimate and provide more complete coverage of the Palila range. The initial year for short-term trajectory was chosen subjectively to coincide with the recent decline in the Palila population. Additionally, stations in the core Palila habitat were surveyed on two occasions in 2012, thus allowing us to address the question of how repeat samples improve estimate precision.
Learning headway estimation in driving.
Taieb-Maimon, Meirav
2007-08-01
The main purpose of the present study was to examine to what extent the ability to attain a required headway of 1 or 2 s can be improved through practical driving instruction under real traffic conditions and whether the learning is sustained after a period during which there has been no controlled training. The failure of drivers to estimate headways correctly has been demonstrated in previous studies. Two methods of training were used: time based (in seconds) and distance based (in a combination of meters and car lengths). For each method, learning curves were examined for 18 participants at speeds of 50, 80, and 100 km/hr. The results indicated that drivers were weak in estimating headway prior to training using both methods. The learning process was rapid for both methods and similar for all speeds; thus, after one trial with feedback, there was already a significant improvement. The learning was retained over time, for at least the 1 month examined in this study. Both the time and distance training of headway improved drivers' ability to attain required headways, with the learning being maintained over a retention interval. The learning process was based on perceptual cues from the driving scene and feedback from the experimenter, regardless of the formal training method. The implications of these results are that all drivers should be trained in headway estimation using an objective distance measuring device, which can be installed on driver instruction vehicles.
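The arithmetic underlying both training methods is a single conversion between time headway and following distance; the 4.5 m car length below is an assumption for illustration.

```python
def headway_seconds(gap_m, speed_kmh):
    """Time headway (s) implied by a following distance (m) at a speed (km/h)."""
    return gap_m / (speed_kmh / 3.6)

def required_gap_m(headway_s, speed_kmh):
    """Following distance (m) needed to hold a time headway (s) at a speed (km/h)."""
    return headway_s * speed_kmh / 3.6

# At 100 km/h, a 2 s headway needs about 55.6 m -- roughly 12 car lengths
# if a 4.5 m car length is assumed (the distance-based training used
# a combination of meters and car lengths).
gap = required_gap_m(2.0, 100.0)
```

The distance-based method teaches the right-hand form of this conversion; the time-based method teaches the left-hand form directly.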
Statistical estimation of process holdup
International Nuclear Information System (INIS)
Harris, S.P.
1988-01-01
Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report on Estimation Methods for Process Holdup of Special Nuclear Materials were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. They had also taken steps to improve the quality of the data through controlled, larger scale experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment-component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs
Pose estimation with correspondences determination
Dong, Hang; Sun, Changku; Zhu, Ruizhe; Wang, Peng
2018-01-01
Pose estimation with a monocular camera finds the pose of an object from a single image of feature points on the object, which requires detecting all the feature points and matching them in the image. But it is difficult to obtain the correct pose if some of the feature points are occluded while the object undergoes large-scale motion. We propose a method for finding the pose when the correspondences between the object points and the image points are unknown. The method combines two algorithms: one is SoftAssign, which constructs a weight matrix of feature points and image points and determines the correspondences by an iterative loop; the other is OI (orthogonal iteration), an iterative algorithm which directly computes orthogonal, globally convergent rotation matrices. We nest the two algorithms into one iteration loop. An appropriate pose is chosen from a set of reference poses as the initial pose of the object at the beginning of the loop; then we process the weight matrix to confirm the correspondences and calculate the optimal rotation matrices alternately until the object-space collinearity error falls below the threshold. Each estimate is closer to the true pose than the last through every iteration of the loop. Experimentally, the method proved to be efficient and to deliver high-precision pose estimation of 3D objects with large-scale motion.
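The rotation-update half of such a loop can be sketched as an SVD-based orthogonal Procrustes step, the kind of globally convergent rotation computation that OI alternates with SoftAssign's correspondence weighting. This is a sketch of that one step only; the full method also iterates the weight matrix and the translation.

```python
import numpy as np

def best_rotation(P, Q):
    """Orthogonal Procrustes step: the rotation R minimizing ||R @ P - Q||_F
    for 3xN point sets P (object) and Q (rotated observations)."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against an improper (reflection) solution
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Recover a known z-axis rotation from noiseless correspondences.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 5))                      # 5 object points
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
R_est = best_rotation(P, Rz @ P)                     # recovers Rz
```

When correspondences are uncertain, each point pair is weighted by the SoftAssign matrix before this step, and the two computations alternate until convergence.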
Contact Estimation in Robot Interaction
Directory of Open Access Journals (Sweden)
Filippo D'Ippolito
2014-07-01
Full Text Available In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.
Population entropies estimates of proteins
Low, Wai Yee
2017-05-01
The Shannon entropy equation provides a way to estimate the variability of amino acid sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented, and an empirical study of how well this new algorithm performs on simulated datasets with various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
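The per-column Shannon entropy that the package corrects for sample size can be sketched as follows (the regression/subsampling correction itself is not reproduced here; the toy alignment is invented):

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy H = -sum(p_i * log2(p_i)) of one alignment column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def alignment_entropies(sequences):
    """Entropy of every column of an aligned, equal-length set of sequences."""
    return [column_entropy(col) for col in zip(*sequences)]

# Toy alignment: 3 sequences, 4 columns; the first two columns are conserved.
aln = ["ACDE", "ACDF", "ACEF"]
ents = alignment_entropies(aln)
```

Fully conserved columns give an entropy of 0; variable columns give positive values, with the maximum set by the number of distinct residues observed.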
Abundance estimation and conservation biology
Nichols, J.D.; MacKenzie, D.I.
2004-01-01
Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention
Abundance estimation and Conservation Biology
Directory of Open Access Journals (Sweden)
Nichols, J. D.
2004-06-01
Full Text Available Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al., 1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that
Parameter Estimation Using VLA Data
Venter, Willem C.
The main objective of this dissertation is to extract parameters from multiple-wavelength images, on a pixel-by-pixel basis, when the images are corrupted with noise and a point spread function. The data used are from the field of radio astronomy. The Very Large Array (VLA) at Socorro in New Mexico was used to observe the planetary nebula NGC 7027 at three different wavelengths: 2 cm, 6 cm and 20 cm. A temperature model, describing the temperature variation in the nebula as a function of optical depth, is postulated. Mathematical expressions for the brightness distribution (flux density) of the nebula, at the three observed wavelengths, are obtained. Using these three equations and the three data values available, one from the observed flux density map at each wavelength, it is possible to solve for two temperature parameters and one optical depth parameter at each pixel location. Because the number of unknowns equals the number of equations available, estimation theory cannot be used to smooth any noise present in the data values. It was found that a direct solution of the three highly nonlinear flux density equations is very sensitive to noise in the data. Results obtained from solving for the three unknown parameters directly, as discussed above, were not physically realizable. This was partly due to the effect of incomplete sampling at the time the data were gathered and to noise in the system. The application of rigorous digital parameter estimation techniques results in estimated parameters that are also not physically realizable. The estimated values for the temperature parameters are, for example, either too high or negative, which is not physically possible. Simulation studies have shown that a "double smoothing" technique improves the results by a large margin. This technique consists of two parts: in the first part the original observed data are smoothed using a running window, and in the second part a similar smoothing is applied to the estimated parameters
Improving exposure estimates by combining exposure information.
Neitzel, Richard L; Daniell, William E; Sheppard, Lianne; Davies, Hugh W; Seixas, Noah S
2011-06-01
Any exposure estimation technique has inherent strengths and limitations. In an effort to improve exposure estimates, this study developed and evaluated the performance of several hybrid exposure estimates created by combining information from individual assessment techniques. Construction workers (n = 68) each completed three full-shift noise measurements over 4 months. Three single exposure assessment techniques [trade mean (TM), task-based (TB), and subjective rating (SR)] were used to estimate exposures for each subject. Hybrid techniques were then developed which incorporated the TM, SR, and TB noise exposure estimates via arithmetic mean combination, linear regression combination, and modification of TM and TB estimates using SR information. Exposure estimates from the single and hybrid techniques were compared to subjects' measured exposures to evaluate accuracy. Hybrid estimates generally were more accurate than estimates from single techniques. The best-performing hybrid techniques combined TB and SR estimates and resulted in improvements in estimated exposures compared to single techniques. Hybrid estimates were not improved by the inclusion of TM information in this study. Hybrid noise exposure estimates performed better than individual estimates, and in this study, combination of TB and SR estimates using linear regression performed best. The application of hybrid approaches in other contexts will depend upon the exposure of interest and the nature of the individual exposure estimates available.
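The best-performing combination, linear regression over individual estimates, can be sketched as fitting least-squares weights for a hybrid of the task-based (TB) and subjective-rating (SR) estimates. All dBA values below are invented for illustration; the study's coefficients are not reproduced.

```python
import numpy as np

def fit_hybrid(tb, sr, measured):
    """Least-squares weights for a hybrid estimate a + b*TB + c*SR."""
    X = np.column_stack([np.ones(len(tb)), tb, sr])
    coef, *_ = np.linalg.lstsq(X, np.asarray(measured), rcond=None)
    return coef

def predict_hybrid(coef, tb, sr):
    return coef[0] + coef[1] * np.asarray(tb) + coef[2] * np.asarray(sr)

tb = [88.0, 92.0, 85.0, 95.0]        # task-based estimates (dBA)
sr = [86.0, 93.0, 84.0, 96.0]        # subjective-rating estimates (dBA)
measured = [87.0, 92.5, 84.5, 95.5]  # full-shift measurements (dBA)
coef = fit_hybrid(tb, sr, measured)
pred = predict_hybrid(coef, tb, sr)
```

The arithmetic-mean hybrid the study also evaluated is the special case with fixed weights a = 0, b = c = 0.5; the regression instead lets the measured data choose the weights.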
On semiautomatic estimation of surface area
DEFF Research Database (Denmark)
Dvorak, J.; Jensen, Eva B. Vedel
2013-01-01
In this paper, we propose a semiautomatic procedure for estimation of particle surface area. It uses automatic segmentation of the boundaries of the particle sections and applies different estimators depending on whether the segmentation was judged by a supervising expert to be satisfactory. If the segmentation is correct, the estimate is computed automatically; otherwise the expert performs the necessary measurements manually. In the case of convex particles, we suggest basing the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area. For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator...
Nonparametric e-Mixture Estimation.
Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru
2016-12-01
This study considers the common situation in data analysis when there are few observations of the distribution of interest or the target distribution, while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions—in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for a nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.
Thermodynamics and life span estimation
International Nuclear Information System (INIS)
Kuddusi, Lütfullah
2015-01-01
In this study, the life span of people living in seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. The people living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to the food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. The two entropy generations, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in seven regions of Turkey with different food habits. In order to estimate the life span, some statistics of Turkish Statistical Institute regarding the food habits of the people living in seven regions of Turkey are used. The life spans of people that live in Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding the life span of people living in seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have effect on his life span
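The estimation step reduces to dividing a lifetime entropy budget by a yearly entropy generation rate. The numbers below are illustrative assumptions, not the study's regional values.

```python
def estimated_life_span_years(lifetime_entropy, yearly_entropy_rate):
    """Life span as (lifetime entropy generation per unit mass) divided by
    (yearly entropy generation rate implied by a region's food habits).
    Both inputs are in kJ/(kg K)."""
    return lifetime_entropy / yearly_entropy_rate

# A diet that generates entropy faster implies a shorter estimated life span,
# which is the mechanism behind the regional ordering reported above.
span_low_rate = estimated_life_span_years(11404.0, 150.0)   # about 76 years
span_high_rate = estimated_life_span_years(11404.0, 160.0)  # about 71 years
```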
Estimation of the energy needs
International Nuclear Information System (INIS)
Ailleret
1955-01-01
The present report draws up a balance of present energy consumption and estimates consumption for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydraulic electric energy. Market development stems essentially from the growth of industrial activity and from new applications that depend on the cost and distribution of electric energy. In this respect, atomic energy offers good industrial prospects as a complement to present energy resources in order to meet the new needs. (M.B.) [fr
Dose estimation by biological methods
International Nuclear Information System (INIS)
Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.
1997-01-01
Human beings are exposed to strong artificial radiation sources in mainly two ways: the first concerns occupationally exposed personnel (POE), and the second persons who require radiological treatment. A third, less common, way is through accidents. In all these situations it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows the analysis of chromosome aberrations. (Author)
Location Estimation of Mobile Devices
Directory of Open Access Journals (Sweden)
Kamil ŽIDEK
2009-06-01
Full Text Available This contribution describes a mathematical model (kinematics) for a mobile robot carriage. The model is fully parametric and is designed universally for three- or four-wheeled carriages of any dimensions, under the following conditions: the back wheels are the driving wheels, and the front wheels set the robot's turning angle. The position of the front wheel gives the actual position of the robot, which is described by coordinates x and y and by the angle α of the front wheel relative to a reference position. The main reason for implementing the model is indoor navigation, where an estimate of the robot's position is needed, especially after turning. A further use is outdoor navigation, in particular to refine GPS information.
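A minimal sketch of such a front-steered carriage model, assuming a standard bicycle approximation (the wheelbase L, state names and update equations here are illustrative, not the author's exact parametrisation):

```python
import math

# Dead-reckoning sketch for a rear-driven, front-steered carriage.
# The front wheel angle alpha steers; pose is (x, y, heading) of the
# front axle. All symbols are assumptions for illustration.

def step(x, y, heading, v, alpha, L, dt):
    """Advance the pose by one time step of dead reckoning."""
    x += v * math.cos(heading + alpha) * dt
    y += v * math.sin(heading + alpha) * dt
    heading += (v / L) * math.sin(alpha) * dt  # turn rate set by steering angle
    return x, y, heading

# Driving straight (alpha = 0) for 10 s at 1 m/s moves 10 m along x.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step(*pose, v=1.0, alpha=0.0, L=0.5, dt=0.1)
```

Integrating this model between GPS fixes is one way the abstract's "refine GPS information" use case can be realised.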
Stochastic estimation of electricity consumption
International Nuclear Information System (INIS)
Electricity consumption forecasting contributes to the stable functioning of the power system. It is very important for the rational and efficient control of the system and for development planning in all aspects of society. Forecasting offers a scientifically grounded way to approach these problems. Among the different models that have been used for forecasting, stochastic quantitative models take a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, are treated together here for electricity consumption forecasting. The main aim of this paper is therefore to present the stochastic forecasting approach using short time series. (author)
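As an illustration of the stochastic-estimator half of the approach, a scalar random-walk Kalman filter can smooth a short consumption series (the noise variances q and r below are assumed tuning values, and the random-walk model is a simplification; neither is taken from the paper):

```python
# Toy scalar Kalman filter for a random-walk consumption level.
# q: process noise variance, r: measurement noise variance (assumed).

def kalman_filter(series, q=1.0, r=4.0):
    """One-step filtered estimates of the underlying consumption level."""
    x, p = series[0], 1.0        # initial state estimate and its variance
    estimates = []
    for z in series:
        p += q                   # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with the new consumption reading
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

est = kalman_filter([100.0, 102.0, 101.0, 105.0, 107.0])
```

The filtered trajectory lags the noisy readings, which is the smoothing behaviour exploited when such estimators are combined with ARIMA forecasts.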
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body from boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work, which can be expressed with the available boundary data.
Properties of Estimated Characteristic Roots
DEFF Research Database (Denmark)
Nielsen, Bent; Nielsen, Heino Bohn
Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis, problems appear when multiple roots are present, as this implies a non-differentiability, so the delta method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples this has a considerable influence on the distribution unless the roots are far apart...
State Estimation for Humanoid Robots
2015-07-01
...tactical grade IMU mounted on the pelvis of the robot. It provides pelvis orientation and angular velocity, which are used in the forward kinematics... depends on sensors available to the robot. We use sensed joint position with forward kinematics to compute CoM position. The root variables are estimated... when the robot is shifting its supporting foot. Fig. 6.4 is a plot of the CoM velocity from the Kalman filter and from forward kinematics. The CoM...
Switching Activity Estimation of CIC Filter Integrators
Abbas, Muhammad; Gustafsson, Oscar
2010-01-01
In this work, a method for estimating the switching activity in integrators is presented. To achieve low power, it is necessary to develop accurate and efficient methods to estimate the switching activity; the switching activities are then used to estimate the power consumption. In our work, the switching activity is first estimated for general-purpose integrators and then extended to the estimation of switching activity in the cascaded integrators of CIC filters.
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
Note on demographic estimates 1979.
1979-01-01
Based on UN projections, national projections, and South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region, India 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978 and 1979 Hong Kong experienced the highest rate of growth, 6.2%, and Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years, life expectancy in Japan is the highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.
CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE
Directory of Open Access Journals (Sweden)
Nino Serdarevic
2012-10-01
Full Text Available This paper presents research results on the financial reporting quality of firms in Bosnia and Herzegovina (BIH), using the empirical relation between accounting conservatism, embodied in critical accounting policy choices, and management's ability to make estimates, as predictors of the quality of domestic private-sector accounting. Primary research is conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and which influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and to latent losses for bad debt and asset revaluations. I draw special attention to the recent IFRS convergence with US GAAP, especially harmonization with FAS 130, Reporting comprehensive income (in revised IAS 1), and FAS 157, Fair value measurement. The CAPCBIH variable performed very poorly, indicating a considerable failure to recognize the specifics of the environment. Furthermore, I underline the importance of the revised ISAE and the reinforced role of auditors in assessing the relevance of management estimates.
Graph Sampling for Covariance Estimation
Chepuri, Sundeep Prabhakar
2017-04-25
In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
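The variance-mean fitting idea can be sketched as follows (a rough illustration only: a power-law variance function fitted by ordinary least squares on the log-log scale, not the paper's modified-likelihood procedure):

```python
import math

# Illustrative shortcut for weighting-function estimation: assume
# var = a * mean**theta and fit (a, theta) by OLS on log(var) vs log(mean)
# computed from many small replicate sets. The power-law form and the OLS
# fit are assumptions, not the paper's method.

def fit_variance_function(replicate_sets):
    xs, ys = [], []
    for reps in replicate_sets:
        m = sum(reps) / len(reps)
        v = sum((r - m) ** 2 for r in reps) / (len(reps) - 1)
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    theta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - theta * mx)
    return a, theta

# Synthetic replicate sets whose spread doubles as the mean doubles.
sets = [[10.0, 11.0], [20.0, 22.0], [40.0, 44.0], [80.0, 88.0]]
a, theta = fit_variance_function(sets)
```

The fitted variance function then supplies the weights 1/var(mean) when fitting a dose-response curve.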
Velocity estimation using synthetic aperture imaging
DEFF Research Database (Denmark)
Nikolov, Svetoslav; Jensen, Jørgen Arendt
2001-01-01
In a previous paper we demonstrated that the velocity can be estimated for a plug flow using recursive ultrasound imaging [1]. The approach involved estimating the velocity at every emission and using the estimates for motion compensation. An error in the estimates, however, would lead to an error in the compensation, further increasing the error in the estimates. In this paper the approach is further developed such that no motion compensation is necessary. In recursive ultrasound imaging a new high-resolution image is created after every emission. The velocity was estimated by cross-correlation..., which significantly improves the velocity estimates. The approach is verified using simulations with the program Field II and measurements on a blood-mimicking phantom. The estimates from the simulations have a bias of -3.5% and a mean standard deviation of less than 2.0% for a parabolic velocity profile. The estimates...
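The cross-correlation step mentioned above can be illustrated with a toy example (all signals and numbers are made up): the displacement between two successive high-resolution lines is the lag that maximises their correlation, and velocity follows as displacement divided by the pulse-repetition interval.

```python
# Toy lag estimation by cross-correlation; a real system would use
# sampled RF lines and sub-sample interpolation of the correlation peak.

def best_lag(a, b, max_lag):
    """Integer lag of b relative to a with the highest correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a) - abs(lag))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

line1 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
line2 = [0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]  # same echo, shifted one sample
lag = best_lag(line1, line2, max_lag=3)
# velocity ~ lag * sample_spacing / pulse_repetition_interval
```

Here the echo has moved one sample between emissions, and the correlation peak recovers exactly that shift.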
Estimation of Tobit Type Censored Demand Systems
DEFF Research Database (Denmark)
Barslund, Mikkel Christoffer
Recently a number of authors have suggested estimating censored demand systems as a system of multivariate Tobit equations, employing a Quasi Maximum Likelihood (QML) estimator based on bivariate Tobit models. In this paper I study the efficiency of this QML estimator relative to the asymptotically more efficient Simulated ML (SML) estimator in the context of a censored Almost Ideal demand system. Further, a simpler QML estimator based on the sum of univariate Tobit models is introduced. A Monte Carlo simulation comparing the three estimators is performed on three different sample sizes. The QML estimators perform well in the presence of the moderately sized error correlation coefficients often found in empirical studies. With larger absolute correlation coefficients, the SML estimator is found to be superior. The paper lends support to the general use of the QML estimators and points towards...
Supplemental report on cost estimates
International Nuclear Information System (INIS)
1992-01-01
The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis.
Neutron background estimates in GESA
Directory of Open Access Journals (Sweden)
Fernandes A.C.
2014-01-01
Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10−8 cm−2 s−1, it itself contributes via radio-impurities. Additional shielding behaves similarly, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements, which has led to a reduction in the extrinsic neutron background to ∼5 × 10−3 evts/kgd. The calculated event rate induced by the neutron background is ∼0.3 evts/kgd, with a dominant contribution from the detector container.
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
...and spatial scales. At full-scale wastewater treatment plants (WWTPs), mechanistic modelling using the ASM framework and concept (e.g. Henze et al., 2000) has become an important part of the engineering toolbox for process engineers. It supports plant design, operation, optimization and control applications. Models have also been used as an integral part of the comprehensive analysis and interpretation of data obtained from a range of experimental methods from the laboratory, as well as pilot-scale studies to characterise and study wastewater treatment plants. In this regard, models help to properly explain... the literature, mostly based on the Activated Sludge Model (ASM) framework and its appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii...
Power Spectrum Estimation. I. Basics
Hamilton, A. J. S.
This chapter and its companion form an extended version of notes provided to participants in the Valencia September 2004 summer school on Data Analysis in Cosmology. The lectures offer a pedagogical introduction to the problem of estimating the power spectrum from galaxy surveys. The intention is to focus on concepts rather than on technical detail, but enough mathematics is provided to point the student in the right direction. This first lecture presents background material. It collects some essential definitions, discusses traditional methods for measuring power, notably the Feldman-Kaiser-Peacock [2] method, and introduces Bayesian analysis, Fisher matrices, and maximum likelihood. For pedagogy and brevity, several derivations are set as exercises for the reader. At the summer school, multiple choice questions, included herein, were used to convey some didactic ideas, and provoked a little lively debate.
2007 Estimated International Energy Flows
Energy Technology Data Exchange (ETDEWEB)
Smith, C A; Belles, R D; Simon, A J
2011-03-10
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
Location Estimation using Delayed Measurements
DEFF Research Database (Denmark)
Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus
1998-01-01
When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported.
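One common way to handle a delayed measurement, sketched here with a toy scalar filter (the fixed gain and the history mechanism are assumptions for illustration, not the paper's exact technique), is to keep a short state history and refilter forward from the delayed observation's timestamp:

```python
# Toy delayed-measurement handling: roll the filter back to where the
# late observation belongs, apply it, then refilter the newer steps.
# A fixed gain stands in for the full Kalman gain computation.

def refilter(history, delayed_z, delay_steps, gain=0.5):
    """history: list of (estimate, measurement) pairs, newest last."""
    idx = len(history) - 1 - delay_steps      # where the late data belongs
    x = history[idx][0]
    x += gain * (delayed_z - x)               # apply the late update
    for _, z in history[idx + 1:]:            # refilter the newer steps
        x += gain * (z - x)
    return x

hist = [(1.0, 1.0), (1.0, 1.0), (1.0, 1.0)]
updated = refilter(hist, delayed_z=2.0, delay_steps=2)
```

The delayed observation still influences the current estimate, but its effect is attenuated by the intervening updates, which is the behaviour a delay-aware fusion scheme must reproduce.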
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
Velocity Estimation in Medical Ultrasound [Life Sciences
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Villagómez Hoyos, Carlos Armando; Holbek, Simon
2017-01-01
This article describes the application of signal processing in medical ultrasound velocity estimation. Special emphasis is on the relation among acquisition methods, signal processing, and the estimators employed. The description spans from current clinical systems for one- and two-dimensional (1-D and 2-D) velocity estimation to experimental systems for three-dimensional (3-D) estimation and advanced imaging sequences, which can yield thousands of images or volumes per second with fully quantitative flow estimates. Here, spherical and plane wave emissions are employed to insonify the whole region of interest, and full images are reconstructed after each pulse emission for use in velocity estimation.
Estimation of Poverty in Small Areas
Directory of Open Access Journals (Sweden)
Agne Bikauskaite
2014-12-01
Full Text Available Qualitative techniques of poverty estimation are needed to better implement, monitor and determine national areas where support is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates in strata deteriorates (i.e. the standard deviation increases) as the sample size becomes smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.
Runoff estimation in a residential area
Directory of Open Access Journals (Sweden)
Meire Regina de Almeida Siqueira
2013-12-01
Full Text Available This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
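The USDA-NRCS Curve-Number relation used in the study can be sketched directly (metric form; the CN values below are illustrative choices, not taken from the paper):

```python
# SCS Curve-Number method: direct runoff Q from storm depth P.
# CN encodes land use / soil infiltration; lower CN means more retention.

def runoff_mm(p_mm, cn):
    """Direct runoff Q [mm] for storm depth P [mm] and curve number CN."""
    s = 25400.0 / cn - 254.0        # potential maximum retention [mm]
    ia = 0.2 * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0                  # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Reforestation lowers CN, cutting runoff for the same storm (assumed CNs):
q_pasture = runoff_mm(100.0, 86)    # degraded pasture on the slope
q_forest = runoff_mm(100.0, 70)     # reforested slope
```

With these illustrative curve numbers, reforestation roughly halves the runoff from a 100 mm storm, consistent in spirit with the ~50% reduction reported in the abstract.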
Replacement of a vessel head, an operation which today gets easily into its stride
International Nuclear Information System (INIS)
Mardon, P.; Chaumont, J.C.; Lambiotte, P.
1995-01-01
In 1992, one year after the detection of a leak in a vessel head of the Electricite de France (EDF) Bugey 4 reactor, the head was replaced by the Framatome-Jeumont Industrie Group. Today, this group, which has developed new methods and new tools to optimize the cost, the time-delay and the dosimetry of this kind of intervention, has performed 11 additional replacements, two of which on 1300 MWe power units. This paper describes step by step the successive operations required for a complete vessel head replacement, including the testing of safety systems before starting up the reactor. (J.S.). 7 photos
Chinn, Nancy Resendes
2010-01-01
The purpose of this mixed method study was to compare current practices of athletic trainers in the management of concussion in football at California Community Colleges (CCC) with the concussion management guidelines set forth by the National Athletic Trainers Association (NATA). The study also set out to gain understanding of why some athletic…
Striding networks of inter-process communication based on TCP/IP protocol
International Nuclear Information System (INIS)
Lou Yi; Chen Haichun; Qian Jing; Chen Zhuomin
2004-01-01
A mode of process/thread communication between the QNX and Windows operating systems on heterogeneous computers is described. It has proved in practice to be an entirely feasible mode with high efficiency and reliability. A socket created with the Socket API is used to communicate between the two operating systems. (authors)
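A minimal sketch of this socket mode in Python (the echo payload and loopback setup are illustrative; the point is that the Berkeley socket API gives processes on different operating systems a common TCP/IP interface):

```python
import socket
import threading

# One process listens, the other connects; on QNX vs. Windows the same
# socket calls apply, only here both ends run locally for illustration.

def start_echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))   # echo the peer's message back
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

port = start_echo_server()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
```

Because TCP/IP hides the endianness and process-model differences of the two hosts, the same pattern works unchanged across the QNX/Windows boundary described in the abstract.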
Gugliellmelli, Eugenio; Micera, Silvestro; Migliavacca, Francesco; Pedotti, Antonio
2015-01-01
In Italy, biomechanics research and the analysis of human and animal movement have had a very long history, beginning with the exceptional pioneering work of Leonardo da Vinci. In 1489, da Vinci began investigating human anatomy, including an examination of human tendons, muscles, and the skeletal system. He continued this line of inquiry later in life, identifying what he called "the four powers--movement, weight, force, and percussion"--and how he thought they worked in the human body. His approach, by the way, was very modern--analyzing nature through anatomy, developing models for interpretation, and transferring this knowledge to bio-inspired machines.
OneSAF as an In-Stride Mission Command Asset
2014-06-01
...behaviors. These behaviors can be fully automated through additional software development or by chaining existing behaviors. This remains a gap area, as there has been relatively little effort to create... Stimulation," OneSAF Co-Developer Technical Exchange Meeting 2012, Sep 2012. [5] M. McCall, B. Murray: "IEEE 1278 Distributed Interactive Simulation
Sen, Suman
DNA, RNA and protein are three pivotal biomolecules in humans and other organisms, playing decisive roles in functionality, appearance, disease development and other physiological phenomena. Hence, sequencing these biomolecules is of prime interest to the scientific community. Single-molecule identification of their building blocks can be done by a technique called Recognition Tunneling (RT), based on the Scanning Tunneling Microscope (STM). A single layer of specially designed recognition molecules is attached to the STM electrodes, which trap the targeted molecules (DNA nucleoside monophosphates, RNA nucleoside monophosphates or amino acids) inside the STM nanogap. Depending on their different binding interactions with the recognition molecules, the analyte molecules generate stochastic signal trains accommodating their "electronic fingerprints". Signal features are used to detect the molecules with a machine learning algorithm, and different molecules can be identified with significantly high accuracy. This, in turn, paves the way for a rapid, economical nanopore sequencing platform, overcoming the drawbacks of Next Generation Sequencing (NGS) techniques. To read DNA nucleotides with high accuracy in an STM tunnel junction, a series of nitrogen-based heterocycles were designed and examined to check their capability to interact with naturally occurring DNA nucleotides by hydrogen bonding in the tunnel junction. These recognition molecules are benzimidazole, imidazole, triazole and pyrrole. Benzimidazole proved to be the best among them, showing DNA nucleotide classification accuracy close to 99%. The imidazole reader can also read an abasic monophosphate (AP), a product of depurination or depyrimidination that occurs 10,000 times per human cell per day. In another study, I investigated a new universal reader, 1-(2-mercaptoethyl)pyrene (Pyrene reader), based on stacking interactions, which should be more specific to the canonical DNA nucleosides.
In addition, the Pyrene reader showed higher DNA base-calling accuracy compared to the Imidazole reader, the workhorse of our previous projects. In my other projects, various amino acids and RNA nucleoside monophosphates were also classified with significantly high accuracy using RT. Twenty naturally occurring amino acids and various RNA nucleosides (four canonical and two modified) were successfully identified. Thus, we envision nanopore sequencing of biomolecules using Recognition Tunneling (RT) that should provide comprehensive improvements over current technologies in terms of time, chemical and instrumental cost, and capability for de novo sequencing.
2013-10-03
... for making sure that your comment does not include any sensitive personal information, like anyone's....10(a)(2). In particular, do not include competitively sensitive information such as costs, sales... compete in those markets in the future, and that competition is expected to reduce prices for consumers...
International Nuclear Information System (INIS)
Ahmed, Y.A.; Jaoji, A.A.; Olalekan, Y.S.
2010-01-01
Seed, stem and leaf samples of marijuana (Cannabis sativa), popularly called Indian hemp, available in northern Nigeria were analyzed for trace amounts of Mg, Al, Ca, Ti, Mn, Na, Br, La, Yb, Cr, Fe, Zn, and Ba using Instrumental Neutron Activation Analysis. Sample sizes of roughly 300 mg were irradiated for five minutes (short irradiation) and six hours (long irradiation), with decay times of 7 minutes, 10,000 minutes and 26,000 minutes for short-, medium- and long-lived nuclides respectively. Counting times of ten minutes (short-lived nuclides), 1,800 minutes (medium-lived nuclides) and 36,000 minutes (long-lived nuclides) yielded detection limits between 0.05 - 0.09 μg/g. For comparative study, refined tobacco produced by a tobacco company operating in northern Nigeria was characterized together with the marijuana, which is usually smoked raw with leaves, stem and seed packed together. The results obtained show that both the refined tobacco and the raw marijuana have high concentrations of Ca, Mg, Al and Mn and low values of Na, Br and La. However, marijuana was found to have heavy elements in abundance compared to the refined tobacco, with Zn = 20.5 μg/g and Cr = 14.3 μg/g recording the highest values among the heavy elements detected. This is a sharp difference between the two, since the values of heavy elements obtained for the refined tobacco are even below detection limits. Quality control and quality assurance were tested using a certified reference material obtained from NIST (Tomato Leaves).
Current strides in AAV-derived vectors and SIN channels further ...
African Journals Online (AJOL)
A.S. Odiba
Vectors used in Gene Therapy Clinical Trials. Data sourced from The Journal of Gene Medicine (www.wiley.co.uk/genmed/clinical) on 2nd June 2017.

Vector                              Number    %
Alphavirus (VEE) Replicon Vaccine   1         0
E. coli                             2         0.1
Bifidobacterium longum              1         0
CRISPR-Cas9                         7         0.3
Adenovirus + ...
Striding Out With Parkinson Disease: Evidence-Based Physical Therapy for Gait Disorders
Martin, Clarissa L.; Schenkman, Margaret L.
2010-01-01
Although Parkinson disease (PD) is common throughout the world, the evidence for physical therapy interventions that enable long-term improvement in walking is still emerging. This article critiques the major physical therapy approaches related to gait rehabilitation in people with PD: compensatory strategies, motor skill learning, management of secondary sequelae, and education to optimize physical activity and reduce falls. The emphasis of this review is on gait specifically, although balance and falls are of direct importance to gait and are addressed in that context. Although the researchers who have provided the evidence for these approaches grounded their studies on different theoretical paradigms, each approach is argued to have a valid place in the comprehensive management of PD generally and of gait in particular. The optimal mix of interventions for each individual varies according to the stage of disease progression and the patient's preferred form of exercise, capacity for learning, and age. PMID:20022998
Sun Grant Initiative : great strides toward a sustainable and more energy-independent future
2014-09-01
The Sun Grant Initiative publication, developed by the U.S. Department of Transportation, offers a glimpse of how the Sun Grant Initiative Centers are advancing alternative fuels research. Transportation plays a significant role in biofuels research,...
Marine microbiology: A glimpse of the strides in the Indian and the global arena
Digital Repository Service at National Institute of Oceanography (India)
LokaBharathi, P.A.; Nair, S.; Chandramohan, D.
to understand the form and function of bacteria that are responsible for mediating the various processes in the sea. The field has evolved from culture-based ecology to direct quantification of these organisms in different marine niches. Insights into some...
China academics feel a sting scientists fear crackdown jeopardized research strides
Sanger, David E
1989-01-01
An international conference on HTS in China was a failure after Western speakers boycotted the event and Chinese speakers were forced to study the speeches of the Chinese government leader instead of preparing papers (1 page).
Striding out with Parkinson disease: evidence-based physical therapy for gait disorders.
Morris, Meg E; Martin, Clarissa L; Schenkman, Margaret L
2010-02-01
Although Parkinson disease (PD) is common throughout the world, the evidence for physical therapy interventions that enable long-term improvement in walking is still emerging. This article critiques the major physical therapy approaches related to gait rehabilitation in people with PD: compensatory strategies, motor skill learning, management of secondary sequelae, and education to optimize physical activity and reduce falls. The emphasis of this review is on gait specifically, although balance and falls are of direct importance to gait and are addressed in that context. Although the researchers who have provided the evidence for these approaches grounded their studies on different theoretical paradigms, each approach is argued to have a valid place in the comprehensive management of PD generally and of gait in particular. The optimal mix of interventions for each individual varies according to the stage of disease progression and the patient's preferred form of exercise, capacity for learning, and age.
Estimation of capacities on Florida freeways.
2014-09-01
Current capacity estimates within Florida's travel time reliability tools rely on the Highway Capacity Manual (HCM 2010) to estimate capacity under various conditions. Field measurements show that the capacities of Florida freeways are noticeably...
Access Based Cost Estimation for Beddown Analysis
National Research Council Canada - National Science Library
Pennington, Jasper E
2006-01-01
The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...
Estimating maternal genetic effects in livestock
Bijma, P.
2006-01-01
This study investigates the estimation of direct and maternal genetic (co)variances, accounting for environmental covariances between direct and maternal effects. Estimated genetic correlations between direct and maternal effects presented in the literature have often been strongly negative, and
Flight Mechanics/Estimation Theory Symposium, 1989
Stengle, Thomas (Editor)
1989-01-01
Numerous topics in flight mechanics and estimation were discussed. Satellite attitude control, quaternion estimation, orbit and attitude determination, spacecraft maneuvers, spacecraft navigation, gyroscope calibration, spacecraft rendevous, and atmospheric drag model calculations for spacecraft lifetime prediction are among the topics covered.
Estimating Inter-Deployment Training Cycle Performances
National Research Council Canada - National Science Library
Eriskin, Levent
2003-01-01
... (COMET) metrics. The objective was primarily to decide whether the COMET database can be used to estimate the performances of ships, and to build regression models to estimate Final Evaluation Problem (FEP...
Posture estimation system for underground mine vehicles
CSIR Research Space (South Africa)
Hlophe, K
2010-09-01
Full Text Available The trilateration algorithm utilized is an Ordinary Least Squares (OLS) estimator. The pose estimator has two ultrasonic receivers at a fixed separation distance, each of which calculates its own position. The orientation of the object...
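The abstract gives only the outline of the method; a linearised trilateration fix solved by ordinary least squares, of the general kind it describes, can be sketched as follows (the 2-D setting, function name and beacon layout are illustrative assumptions, not the CSIR implementation):

```python
import numpy as np

def trilaterate_ols(beacons, ranges):
    """Estimate a 2-D position from beacon coordinates and measured ranges
    by linearising the range equations and solving ordinary least squares."""
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = beacons[0]
    r0 = ranges[0]
    # Subtracting the first range equation from the others removes the
    # quadratic terms, leaving a linear system A @ p = b.
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - x0**2 - y0**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Receiver truly at (2, 3); three beacons with exact (noise-free) ranges.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true = np.array([2.0, 3.0])
ranges = [np.linalg.norm(true - np.array(bc)) for bc in beacons]
print(trilaterate_ols(beacons, ranges))  # approximately [2. 3.]
```

With noisy ranges and more than three beacons, the same least-squares solve averages the measurement errors, which is the point of using an OLS estimator here.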
On Estimation and Testing for Pareto Tails
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.
2013-01-01
Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research
Bandwidth Selection for Weighted Kernel Density Estimation
Wang, Bin; Wang, Xiaofeng
2007-01-01
In this paper, the authors propose to estimate the density of a targeted population with a weighted kernel density estimator (wKDE) based on a weighted sample. Bandwidth selection for the wKDE is discussed. Three bandwidth estimators based on the mean integrated squared error are introduced and their performance is illustrated via Monte Carlo simulation. The least-squares cross-validation method and the adaptive-weight kernel density estimator are also studied. The authors also consider the boundary...
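As a hedged illustration of the estimator under discussion (the Gaussian kernel and Silverman-type pilot bandwidth are our assumptions, not necessarily the authors' choices), a weighted KDE is a kernel mixture with normalised weights:

```python
import numpy as np

def weighted_kde(x, sample, weights, h):
    """Weighted Gaussian kernel density estimate evaluated at points x.
    The weights are normalised to sum to one."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    u = (np.asarray(x)[:, None] - np.asarray(sample)[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (k * w[None, :]).sum(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)
weights = np.ones_like(sample)                       # equal weights -> ordinary KDE
h = 1.06 * sample.std() * len(sample) ** (-1 / 5)    # Silverman's rule of thumb
grid = np.linspace(-3.0, 3.0, 7)
print(weighted_kde(grid, sample, weights, h))
```

Bandwidth selection is exactly the choice of `h` above; the paper's MISE-based selectors replace the rule-of-thumb pilot value used in this sketch.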
Estimating the NIH Efficient Frontier
2012-01-01
Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
Carleman estimates for some elliptic systems
International Nuclear Information System (INIS)
Eller, M
2008-01-01
A Carleman estimate for a certain first-order elliptic system is proved. The proof is elementary and does not rely on pseudo-differential calculus. This estimate is used to prove Carleman estimates for the isotropic Lamé system as well as for the isotropic Maxwell system with C^1 coefficients
Cognitive Processes of Numerical Estimation in Children
Ashcraft, Mark H.; Moore, Alex M.
2012-01-01
We tested children in Grades 1 to 5, as well as college students, on a number line estimation task and examined latencies and errors to explore the cognitive processes involved in estimation. The developmental trends in estimation were more consistent with the hypothesized shift from logarithmic to linear representation than with an account based…
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
Estimating software development project size, using probabilistic ...
African Journals Online (AJOL)
A probabilistic approach was used to estimate the software project size, using data collected when we developed a tool to estimate the time to complete a new software project, and from the University of Calabar Computer Centre. The expected size of the tool was estimated to be 1.463 KSLOC, but when the tool was ...
Development of Numerical Estimation in Young Children
Siegler, Robert S.; Booth, Julie L.
2004-01-01
Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…
Estimation of gradients from scattered data
Energy Technology Data Exchange (ETDEWEB)
Stead, S.E.
1984-01-01
Many techniques for producing a surface from scattered data require gradients at the data points. Since only positional data are usually known, the gradients must be estimated before the surface can be computed. The quality of the surface depends on the estimated gradients; so it is important to compute accurate estimates.
Sample Size Estimation: The Easy Way
Weller, Susan C.
2015-01-01
This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
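The kind of quick estimate the article describes can be illustrated with the standard normal-approximation formula for comparing two means, n per group = 2(z_{1-α/2} + z_{1-β})² / d², where d is the standardised effect size. This sketch uses only the Python standard library (the function name is ours, not the article's):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Medium standardised effect (Cohen's d = 0.5):
print(n_per_group(0.5))  # 63 per group
```

The normal approximation slightly understates the exact t-based answer for small n, which is why published tables sometimes report 64 rather than 63 for this case.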
Online wave estimation using vessel motion measurements
DEFF Research Database (Denmark)
H. Brodtkorb, Astrid; Nielsen, Ulrik D.; J. Sørensen, Asgeir
2018-01-01
In this paper, a computationally efficient online sea state estimation algorithm is proposed for estimation of the on-site sea state. The algorithm finds the wave spectrum estimate from motion measurements in heave, roll and pitch by iteratively solving a set of linear equations. The main vessel p...
Illuminant estimation in multispectral imaging.
Khan, Haris Ahmad; Thomas, Jean-Baptiste; Hardeberg, Jon Yngve; Laligant, Olivier
2017-07-01
With the advancement in sensor technology, the use of multispectral imaging is gaining wide popularity for computer vision applications. Multispectral imaging is used to achieve better discrimination between the radiance spectra, as compared to the color images. However, it is still sensitive to illumination changes. This study evaluates the potential evolution of illuminant estimation models from color to multispectral imaging. We first present a state of the art on computational color constancy and then extend a set of algorithms to use them in multispectral imaging. We investigate the influence of camera spectral sensitivities and the number of channels. Experiments are performed on simulations over hyperspectral data. The outcomes indicate that extension of computational color constancy algorithms from color to spectral gives promising results and may have the potential to lead towards efficient and stable representation across illuminants. However, this is highly dependent on spectral sensitivities and noise. We believe that the development of illuminant invariant multispectral imaging systems will be a key enabler for further use of this technology.
Global Warming Estimation from MSU
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature for 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 +/- 0.06 K/decade during 1980-97.
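The trend itself is an ordinary least-squares slope over the annual series. As a hedged illustration (this is not the authors' error model, which additionally accounts for orbital drift and inter-satellite offsets), a trend in K/decade with its naive OLS standard error can be computed as:

```python
import numpy as np

def trend_per_decade(years, temps):
    """Least-squares linear trend of a temperature series, reported in
    kelvin per decade, with the usual OLS standard error of the slope."""
    t = np.asarray(years, float)
    y = np.asarray(temps, float)
    n = len(t)
    X = np.column_stack([np.ones(n), t - t.mean()])  # intercept + centred time
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                     # residual variance
    se_slope = np.sqrt(s2 / ((t - t.mean()) ** 2).sum())
    return beta[1] * 10, se_slope * 10               # per year -> per decade

years = np.arange(1980, 1998)
temps = 0.013 * (years - 1980)        # synthetic noise-free 0.13 K/decade series
trend, se = trend_per_decade(years, temps)
print(round(trend, 2), round(se, 3))  # 0.13 0.0
```

With real satellite data the residuals are non-zero and autocorrelated, so the naive standard error computed here would understate the 0.06 K/decade uncertainty the study derives.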
Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Trojaniello, Diana; Ravaschio, Andrea; Hausdorff, Jeffrey M; Cereatti, Andrea
2015-09-01
The estimation of gait temporal parameters with inertial measurement units (IMU) is a research topic of interest in clinical gait analysis. Several methods, based on the use of a single IMU mounted at waist level, have been proposed for the estimate of these parameters showing satisfactory performance when applied to the gait of healthy subjects. However, the above mentioned methods were developed and validated on healthy subjects and their applicability in pathological gait conditions was not systematically explored. We tested the three best performing methods found in a previous comparative study on data acquired from 10 older adults, 10 hemiparetic, 10 Parkinson's disease and 10 Huntington's disease subjects. An instrumented gait mat was used as gold standard. When pathological populations were analyzed, missed or extra events were found for all methods and a global decrease of their performance was observed to different extents depending on the specific group analyzed. The results revealed that none of the tested methods outperformed the others in terms of accuracy of the gait parameters determination for all the populations except the Parkinson's disease subjects group for which one of the methods performed better than others. The hemiparetic subjects group was the most critical group to analyze (stride duration errors between 4-5 % and step duration errors between 8-13 % of the actual values across methods). Only one method provides estimates of the stance and swing durations which however should be interpreted with caution in pathological populations (stance duration errors between 6-14 %, swing duration errors between 10-32 % of the actual values across populations). Copyright © 2015 Elsevier B.V. All rights reserved.
Bayesian estimation and tracking a practical guide
Haug, Anton J
2012-01-01
A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation
Software Development Cost Estimation Executive Summary
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Indirect estimators in US federal programs
1996-01-01
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises eight reports describing the use of indirect estimators, based on case studies from a variety of federal programs. As a result, many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice and addresses some of the pitfalls of these methods.
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
Full Text Available This paper presents the application of a continuous-time parameter estimation method for estimating the structural parameters of a real bridge structure. To illustrate the method, two case studies of a bridge pile located in a highly seismic risk area are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
Optimal estimations of random fields using kriging
International Nuclear Information System (INIS)
Barua, G.
2004-01-01
Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point, an area or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum to one; the problem is how to determine the best weights, those for which the estimation variance is least. The resulting system of equations is generally known as the kriging system, and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substituting the weights into the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
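A minimal sketch of the kriging system described above, assuming a 1-D setting and a linear semivariogram γ(h) = b·h with no nugget (function name and sample values are illustrative, not from the source):

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, slope=1.0):
    """Ordinary kriging of a 1-D variable with a linear semivariogram
    gamma(h) = slope * |h|.  Solves the kriging system for the weights
    and the Lagrange multiplier, and returns the estimate at x0."""
    xs = np.asarray(xs, float)
    n = len(xs)
    gamma = slope * np.abs(xs[:, None] - xs[None, :])
    # Kriging system: [[gamma, 1], [1', 0]] @ [w, mu] = [gamma0, 1],
    # where the last row enforces that the weights sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = slope * np.abs(xs - x0)
    sol = np.linalg.solve(A, rhs)
    w = sol[:n]
    return float(w @ np.asarray(zs, float))

# Midway between two samples the weights are equal, so the kriging
# estimate is the average of the two sample values.
print(ordinary_kriging([0.0, 1.0], [10.0, 20.0], 0.5))  # 15.0
```

At a sampled location the weights collapse onto that sample, so kriging interpolates the data exactly, which is the behaviour the abstract contrasts with the extension variance.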
Budget estimates. Fiscal year 1998
International Nuclear Information System (INIS)
1997-02-01
The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaraies and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000, shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury
Weighted conditional least-squares estimation
International Nuclear Information System (INIS)
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model in which the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separate small-size subproblems (if only we knew where they were!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, in which we first identify candidates for these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
Atmospheric Turbulence Estimates from a Pulsed Lidar
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
muhammad zahid rashid
2011-04-01
Full Text Available The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), the relative least squares method (RELS), the ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods. We determined the best method for estimation using different values of the parameters and different sample sizes
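For the MLE column of such a comparison, the two-parameter exponential has closed-form estimates: the location is the sample minimum and the scale is the mean excess over that minimum. A sketch under those textbook formulas (not the paper's code; the simulation values are illustrative):

```python
import numpy as np

def two_param_exp_mle(x):
    """Maximum-likelihood estimates of the location and scale of a
    two-parameter exponential distribution: location = sample minimum,
    scale = mean excess over the minimum."""
    x = np.asarray(x, float)
    location = x.min()
    scale = (x - location).mean()
    return location, scale

rng = np.random.default_rng(1)
sample = 5.0 + rng.exponential(scale=2.0, size=10_000)
loc, sc = two_param_exp_mle(sample)
print(round(loc, 2), round(sc, 2))  # close to the true (5.0, 2.0)
```

Note that the location MLE is biased upward (the minimum always exceeds the true location), which is exactly the kind of defect the modified estimators compared in the paper are designed to correct.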
Discrete Choice Models - Estimation of Passenger Traffic
DEFF Research Database (Denmark)
Sørensen, Majken Vildrik
2003-01-01
model, data and estimation are described, with a focus on the possibilities/limitations of different techniques. Two special issues of modelling are addressed in further detail, namely data segmentation and estimation of Mixed Logit models. Both issues are concerned with whether individuals can be assumed...... for estimation of choice models). For application of the method an algorithm is provided with a case. Also for the second issue, estimation of Mixed Logit models, a method was proposed. The most commonly used approach to estimating Mixed Logit models is to employ Maximum Simulated Likelihood estimation (MSL...... distribution of coefficients were found. All the shapes of distributions found complied with sound knowledge in terms of which should be uni-modal, sign-specific and/or skewed distributions.
Entropy estimates of small data sets
Energy Technology Data Exchange (ETDEWEB)
Bonachela, Juan A; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hinrichsen, Haye [Fakultaet fuer Physik und Astronomie, Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany)
2008-05-23
Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
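The bias problem the abstract refers to is easy to reproduce. The sketch below contrasts the naive plug-in Shannon estimator with the classical Miller-Madow correction (not the paper's balanced estimator, which is more involved); the alphabet size and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8                      # alphabet size (uniform source, true entropy = ln K)
true_H = np.log(K)

def naive_entropy(counts):
    # Plug-in Shannon entropy: biased low for small samples
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def miller_madow(counts):
    # Miller-Madow bias correction: add (observed symbols - 1) / (2 N)
    k_obs = np.count_nonzero(counts)
    return naive_entropy(counts) + (k_obs - 1) / (2 * counts.sum())

N, reps = 30, 3000
naive, corrected = [], []
for _ in range(reps):
    x = rng.integers(0, K, size=N)
    c = np.bincount(x, minlength=K)
    naive.append(naive_entropy(c))
    corrected.append(miller_madow(c))

print("true H = %.3f, naive mean = %.3f, Miller-Madow mean = %.3f"
      % (true_H, np.mean(naive), np.mean(corrected)))
```

With N = 30 draws from an 8-symbol uniform source, the plug-in estimate sits roughly (K-1)/(2N) ≈ 0.12 nats below the truth, which the correction largely removes.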
Relative Pose Estimation Algorithm with Gyroscope Sensor
Directory of Open Access Journals (Sweden)
Shanshan Wei
2016-01-01
Full Text Available This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Since no inertial sensor can estimate the translation parameter accurately enough on its own, we propose a translation estimation algorithm that fuses gyroscope data and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
Resilient Distributed Estimation Through Adversary Detection
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2018-05-01
This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator (FRDE) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The FRDE algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under FRDE, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of FRDE.
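A consensus+innovations estimator of the kind described can be sketched for a scalar parameter on a small ring network. The gains, network topology, and noise level below are illustrative assumptions, and the adversary-detection logic of FRDE is omitted; this only shows the consensus (neighbor averaging) plus innovations (local observation) update.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 5.0                    # unknown scalar parameter (assumed for illustration)
n = 6                          # agents on a ring network
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = np.zeros(n)                # local estimates
for t in range(1, 2001):
    alpha = 1.0 / t            # decaying innovation gain
    beta = 0.3                 # consensus gain
    y = theta + rng.normal(0, 1.0, size=n)   # fresh noisy local observations
    x_new = x.copy()
    for i in range(n):
        consensus = sum(x[j] - x[i] for j in neighbors[i])
        innovation = y[i] - x[i]
        x_new[i] = x[i] + beta * consensus + alpha * innovation
    x = x_new

print("final estimates:", np.round(x, 2))
```

All agents' estimates drift toward the true parameter while the consensus term keeps them close to one another, which is the mechanism the paper's detector exploits: a compromised agent's estimate stands out from its neighbors'.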
Deconvolution estimation of mixture distributions with boundaries
Lee, Mihee; Hall, Peter; Shen, Haipeng; Marron, J. S.; Tolle, Jon; Burch, Christina
2013-01-01
In this paper, motivated by an important problem in evolutionary biology, we develop two sieve-type estimators for distributions that are mixtures of a finite number of discrete atoms and continuous distributions under the framework of measurement error models. While there is a large literature on deconvolution problems, only two articles have previously addressed the problem taken up in our article, and they use relatively standard Fourier deconvolution. As a result, the estimators suggested in those two articles are degraded seriously by boundary effects and negativity. A major contribution of our article is correct handling of boundary effects; our method is asymptotically unbiased at the boundaries, and is also guaranteed to be nonnegative. We use roughness penalization to improve the smoothness of the resulting estimator and reduce the estimation variance. We illustrate the performance of the proposed estimators via the real application in evolutionary biology that motivated this work and two simulation studies. Furthermore, we establish asymptotic properties of the proposed estimators. PMID:24009793
Notes on a New Coherence Estimator
Energy Technology Data Exchange (ETDEWEB)
Bickel, Douglas L.
2016-01-01
This document discusses some interesting features of the new coherence estimator in [1]. The estimator is derived from a slightly different viewpoint. We discuss a few properties of the estimator, including presenting the probability density function of the denominator of the new estimator, which is a new feature of this estimator. Finally, we present an approximate equation for analysis of the sensitivity of the estimator to the knowledge of the noise value. ACKNOWLEDGEMENTS The preparation of this report is the result of an unfunded research and development activity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Cosmochemical Estimates of Mantle Composition
Palme, H.; O'Neill, H. St. C.
2003-12-01
, and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996). At the beginning of the twentieth century Harkins at the University of Chicago thought that meteorites would provide a better estimate for the bulk composition of the Earth than the terrestrial rocks collected at the surface, since we have access only to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers, and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling the corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986). If the Earth were compositionally similar to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron be concentrated in the interior of the Earth. The presence of a central metallic core in the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established through the study of seismic wave propagation by Oldham in 1906, with the outer boundary of the core accurately located at a depth of 2,900 km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted.
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush
Application of spreadsheet to estimate infiltration parameters
Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed
2016-01-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...
Dynamic Diffusion Estimation in Exponential Family Models
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
2013-01-01
Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords : diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf
State energy data report 1994: Consumption estimates
International Nuclear Information System (INIS)
1996-10-01
This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included
State energy data report 1994: Consumption estimates
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-10-01
This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.
The Sharpe ratio of estimated efficient portfolios
Kourtis, Apostolos
2016-01-01
Investors often adopt mean-variance efficient portfolios for achieving superior risk-adjusted returns. However, such portfolios are sensitive to estimation errors, which affect portfolio performance. To understand the impact of estimation errors, I develop simple and intuitive formulas of the squared Sharpe ratio that investors should expect from estimated efficient portfolios. The new formulas show that the expected squared Sharpe ratio is a function of the length of the available data, the ...
Estimation of Modal Parameters and their Uncertainties
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune
1999-01-01
In this paper it is shown how to estimate the modal parameters as well as their uncertainties using the prediction error method of a dynamic system on the basis of output measurements only. The estimation scheme is assessed by means of a simulation study. As a part of the introduction, an example...... is given showing how the uncertainty estimates can be used in applications such as damage detection....
UAV State Estimation Modeling Techniques in AHRS
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operations. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS), with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
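As a lightweight point of comparison with the EKF, AHRS implementations often use a complementary filter that fuses integrated gyroscope rates (good short-term) with accelerometer tilt angles (good long-term). The sketch below is a generic single-axis version with hypothetical sensor values, not the configuration from the paper.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    # Blends the gyro-integrated angle with the accelerometer-derived angle;
    # alpha close to 1 trusts the gyro on short time scales while the
    # accelerometer slowly corrects drift.
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out

# Static vehicle tilted at 10 degrees: the gyro reads only a constant bias,
# the accelerometer reads the true tilt (all values hypothetical)
gyro = [0.5] * 500          # deg/s gyro bias
accel = [10.0] * 500        # deg tilt from accelerometer
est = complementary_filter(gyro, accel)
print("final angle estimate: %.2f deg" % est[-1])
```

The accelerometer term bounds the drift the gyro bias would otherwise cause: the estimate settles near 10 degrees with a small steady-state offset of alpha·bias·dt/(1-alpha), instead of growing without bound.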
Science yield estimation for AFTA coronagraphs
Traub, Wesley A.; Belikov, Ruslan; Guyon, Olivier; Kasdin, N. Jeremy; Krist, John; Macintosh, Bruce; Mennesson, Bertrand; Savransky, Dmitry; Shao, Michael; Serabyn, Eugene; Trauger, John
2014-08-01
We describe the algorithms and results of an estimation of the science yield for five candidate coronagraph designs for the WFIRST-AFTA space mission. The targets considered are of three types, known radial-velocity planets, expected but as yet undiscovered exoplanets, and debris disks, all around nearby stars. The results of the original estimation are given, as well as those from subsequently updated designs that take advantage of experience from the initial estimates.
Decentralized estimation and control for power systems
Singh, Abhinav Kumar
2014-01-01
This thesis presents a decentralized alternative to the centralized state-estimation and control technologies used in current power systems. Power systems span over vast geographical areas, and therefore require a robust and reliable communication network for centralized estimation and control. The supervisory control and data acquisition (SCADA) systems provide such a communication architecture and are currently employed for centralized estimation and control of power systems in a static ma...
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Full Text Available Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the Earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and designing of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating parameters of nonlinear equations and it has, therefore, been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model has been found to be better than that of the Horton and Philip models in most cases under both approaches to parameter estimation.
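The Kostiakov model mentioned above is simple enough that its rate form f(t) = a·t^(-b) can be fitted by log-linear least squares, used here as a transparent stand-in for the GRG solver discussed in the paper. The time/rate values below are hypothetical, not the Umuahia field data.

```python
import numpy as np

# Hypothetical field data: elapsed time (h) and infiltration rate (mm/h);
# values are illustrative, not the measurements from the paper.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
f = np.array([60.0, 42.0, 30.0, 21.0, 15.0, 10.5])

# Kostiakov rate model f(t) = a * t**(-b); taking logs makes it linear:
# log f = log a - b log t, so ordinary least squares can replace the GRG solver.
A = np.vstack([np.ones_like(t), -np.log(t)]).T
coef, *_ = np.linalg.lstsq(A, np.log(f), rcond=None)
a, b = np.exp(coef[0]), coef[1]
print("Kostiakov parameters: a = %.2f, b = %.3f" % (a, b))
```

For models that cannot be linearised this way (e.g. Horton's), an iterative nonlinear solver such as GRG or a least-squares routine is needed, which is the situation the paper addresses.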
Age Estimation for Dental Patients Using Orthopantomographs
Karaarslan, Bekir; Karaarslan, Emine Sirin; Ozsevik, Abdul Semih; Ertas, Ertan
2010-01-01
Objectives: The aim of this study was to conduct age estimates for dental patients using orthopantomographs (OPGs). The OPGs were selected by an independent author with respect to criteria and evaluated by two independent dentists. The results were compared to chronological ages. The reliability of the estimates, concurrently made by the two independent dentists using OPGs, was also evaluated. Methods: In this retrospective study, the OPGs of 238 Turkish individuals of known chronological age, ranging from 1 to 60 years, were measured. Patients were then classified. Radiographs were evaluated by two independent dentists and age estimation was performed by decade. Results: The most accurate age estimates made by the dentists were in the 1–10 years age range (89.6%); the least accurate age estimates were in the 41–50 years age range (41.7%). Results indicate that the accuracy of age estimation diminishes with age. Conclusions: Despite the variations related to the practitioners, in this study there were no significant differences in age estimations between the two participant practitioners. Age estimation through evaluating OPGs was the most accurate in the first decade and the least accurate in the fourth decade. It can be concluded that OPGs are not adequate for accurate age estimation. PMID:20922158
Towards Greater Harmonisation of Decommissioning Cost Estimates
International Nuclear Information System (INIS)
O'Sullivan, Patrick; Laraia, Michele; LaGuardia, Thomas S.
2010-01-01
The NEA Decommissioning Cost Estimation Group (DCEG), in collaboration with the IAEA Waste Technology Section and the EC Directorate-General for Energy and Transport, has recently studied cost estimation practices in 12 countries - Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Slovakia, Spain, Sweden, the United Kingdom and the United States. Its findings are to be published in an OECD/NEA report entitled Cost Estimation for Decommissioning: An International Overview of Cost Elements, Estimation Practices and Reporting Requirements. This booklet highlights the findings contained in the full report. (authors)
Estimation of Conditional Quantile using Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1999-01-01
The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
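The loss function underlying quantile estimation of this kind is the pinball (check) loss, whose minimiser is the desired quantile. A minimal sketch, replacing the neural network with a grid search over a scalar, so the mechanism is visible; the data and quantile level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(2.0, size=20000)   # skewed data; true 0.9-quantile = 2 ln 10

def pinball(q, y, tau):
    # Check loss: residuals above q are weighted tau, those below (1 - tau)
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

# Minimise the pinball loss over a grid; its minimiser is the tau-quantile.
tau = 0.9
grid = np.linspace(0.0, 12.0, 2401)
losses = [pinball(q, y, tau) for q in grid]
q_hat = grid[int(np.argmin(losses))]

print("estimated 0.9-quantile: %.2f (theory: %.2f)" % (q_hat, 2 * np.log(10)))
```

In the conditional setting, a network outputs q as a function of covariates and the same loss is minimised over the network weights, which is essentially what quantile regression networks do.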
Improved diagnostic model for estimating wind energy
Energy Technology Data Exchange (ETDEWEB)
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
State and parameter estimation in bio processes
Energy Technology Data Exchange (ETDEWEB)
Maher, M.; Roux, G.; Dahhou, B. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France)]
1994-12-31
A major difficulty in monitoring and control of bio-processes is the lack of reliable and simple sensors for following the evolution of the main state variables and parameters such as biomass, substrate, product, growth rate, etc. In this article, an adaptive estimation algorithm is proposed to recover the state and parameters in bio-processes. This estimator utilizes the physical process model and the reference model approach. Experiments concerning estimation of biomass and product concentrations and specific growth rate, during batch, fed-batch and continuous fermentation processes, are presented. The results show the performance of this adaptive estimation approach. (authors) 12 refs.
Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting.
Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
2015-01-01
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting.
Non-Parametric Estimation of Correlation Functions
DEFF Research Database (Denmark)
Brincker, Rune; Rytter, Anders; Krenk, Steen
In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform, and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are pointed out, and methods to prevent bias are presented. The techniques are evaluated by comparing their speed and accuracy on the simple case of estimating auto-correlation functions for the response of a single degree-of-freedom system loaded with white noise.
Linear Covariance Analysis and Epoch State Estimators
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
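The batch/sequential equivalence invoked in such analyses can be illustrated in the simplest case: for a constant parameter with no process noise and a diffuse prior, a scalar Kalman filter reproduces the batch least-squares estimate (here, the sample mean). The parameter value, noise level, and prior variance below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
theta = 3.0
y = theta + rng.normal(0, 1.0, size=200)   # unit-variance measurements

# Scalar Kalman filter for a constant state (no process noise);
# with a near-diffuse prior it reproduces the batch least-squares mean.
x_hat, P = 0.0, 1e6
for z in y:
    K = P / (P + 1.0)           # gain with unit measurement variance
    x_hat += K * (z - x_hat)
    P *= (1 - K)

print("Kalman: %.4f  batch mean: %.4f" % (x_hat, y.mean()))
```

Adding process noise breaks this exact equivalence, which is why the epoch-state treatment of process noise discussed in the paper requires care.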
Frequentist Standard Errors of Bayes Estimators.
Lee, DongHyuk; Carroll, Raymond J; Sinha, Samiran
2017-09-01
Frequentist standard errors are a measure of uncertainty of an estimator, and the basis for statistical inferences. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, the computation of the standard error of Bayesian estimators requires bootstrapping, which in combination with Markov chain Monte Carlo (MCMC) can be highly time consuming. We discuss an alternative approach for computing frequentist standard errors of Bayesian estimators, including importance sampling. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.
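The baseline approach the abstract improves on, bootstrapping a Bayes estimator, looks like this for a conjugate normal-normal model. No MCMC is needed in this toy case, so the bootstrap is cheap; the prior mean, prior variance, and noise variance are assumed values for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, size=100)          # observed data (illustrative)

def posterior_mean(sample, prior_mean=0.0, prior_var=10.0, noise_var=4.0):
    # Conjugate normal-normal model: posterior mean of the location parameter
    # is a precision-weighted blend of the sample mean and the prior mean.
    n = len(sample)
    w = (n / noise_var) / (n / noise_var + 1.0 / prior_var)
    return w * sample.mean() + (1 - w) * prior_mean

# Frequentist standard error of the Bayes estimator via the nonparametric bootstrap
B = 2000
boot = np.array([posterior_mean(rng.choice(x, size=len(x), replace=True))
                 for _ in range(B)])
print("posterior mean = %.3f, bootstrap SE = %.3f"
      % (posterior_mean(x), boot.std(ddof=1)))
```

When the posterior mean must itself be computed by MCMC, each bootstrap replicate needs its own chain, which is the computational burden the paper's importance-sampling alternative avoids.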
Accuracy of prehospital transport time estimation.
Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W
2014-01-01
Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant. In subgroup analyses, errors were larger (e.g., 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of the observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in the hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
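The simplest of the three methods, the linear arc estimate, is just great-circle distance divided by an assumed average speed. A sketch using the haversine formula; the coordinates and speed are illustrative, not values from the study.

```python
import math

def linear_arc_minutes(lat1, lon1, lat2, lon2, speed_kmh=50.0):
    # Great-circle ("linear arc") distance via the haversine formula,
    # converted to minutes at an assumed average ambulance speed.
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    km = 2 * R * math.asin(math.sqrt(a))
    return 60.0 * km / speed_kmh

# Example: two points roughly 11 km apart in the Seattle area
print("%.1f minutes" % linear_arc_minutes(47.61, -122.33, 47.70, -122.40))
```

Because it ignores the road network entirely, the linear arc method underestimates distance systematically, which is consistent with its larger mean error relative to the routing-based methods in the study.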
Cost Estimating Handbook for Environmental Restoration
International Nuclear Information System (INIS)
1993-01-01
Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs in the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.
Cost Estimating Handbook for Environmental Restoration
Energy Technology Data Exchange (ETDEWEB)
NONE
1990-09-01
Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: an entirely new set of specialized expressions and terminology; a higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work; a higher than normal percentage of indirect costs in the total estimated cost, due primarily to record keeping, special training, liability, and indemnification; and more than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements, since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: the basis for accurate, consistent, and traceable baselines; sound methodologies, guidelines, and estimating formats; and sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.
Self-esteem: a special case of social esteem?
Santarelli, Matteo
2016-01-01
One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by asking the following question: is self-esteem a special case of social esteem? To do so, I focus on two crucial...
Robust estimation of errors-in-variables models using M-estimators
Guo, Cuiping; Peng, Junhuan
2017-07-01
The traditional errors-in-variables (EIV) models are widely adopted in the applied sciences. EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares (RWTLS) estimators, is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors and achieve a reliable solution.
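The M-estimation machinery the paper builds on can be illustrated with a standard Huber M-estimator for ordinary regression, computed by iteratively reweighted least squares. This is not the paper's RWTLS algorithm (which also handles errors in the design matrix), but it shows how M-estimators resist gross errors; the data are simulated and all constants are conventional defaults.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0, 0.5, n)
y[:10] += 30.0                         # gross errors in 5% of the observations

def huber_irls(X, y, c=1.345, iters=50):
    # Iteratively reweighted least squares with Huber weights w = min(1, c*s/|r|),
    # where s is a robust (MAD-based) scale estimate of the residuals.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 or 1.0
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print("OLS:  ", np.round(ols, 2))
print("Huber:", np.round(rob, 2))
```

The outliers pull the ordinary least-squares coefficients away from the truth, while the downweighted Huber fit stays close; RWTLS applies the same reweighting idea within the total least-squares framework.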
Spectral Estimation by the Random Dec Technique
DEFF Research Database (Denmark)
Brincker, Rune; Jensen, Jacob L.; Krenk, Steen
1990-01-01
This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
Spectral Estimation by the Random DEC Technique
DEFF Research Database (Denmark)
Brincker, Rune; Jensen, J. Laigaard; Krenk, S.
This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have...
Estimating functions for inhomogeneous Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.
Comparing population size estimators for plethodontid salamanders
Bailey, L.L.; Simons, T.R.; Pollock, K.H.
2004-01-01
Despite concern over amphibian declines, few studies estimate absolute abundances because of logistic and economic constraints and previously poor estimator performance. Two estimation approaches recommended for amphibian studies are mark-recapture and depletion (or removal) sampling. We compared abundance estimation via various mark-recapture and depletion methods, using data from a three-year study of terrestrial salamanders in Great Smoky Mountains National Park. Our results indicate that short-term closed-population, robust design, and depletion methods estimate surface population of salamanders (i.e., those near the surface and available for capture during a given sampling occasion). In longer duration studies, temporary emigration violates assumptions of both open- and closed-population mark-recapture estimation models. However, if the temporary emigration is completely random, these models should yield unbiased estimates of the total population (superpopulation) of salamanders in the sampled area. We recommend using Pollock's robust design in mark-recapture studies because of its flexibility to incorporate variation in capture probabilities and to estimate temporary emigration probabilities.
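For the simplest two-occasion closed-population case, abundance can be estimated with Chapman's bias-corrected Lincoln-Petersen formula. This is a textbook illustration with hypothetical numbers, not the robust-design analysis the study itself performs:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population size.

    n1: animals marked on the first occasion
    n2: animals captured on the second occasion
    m2: marked animals among the second capture
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# hypothetical: 100 salamanders marked, 80 captured later, 20 of them marked
N_hat = chapman_estimate(100, 80, 20)   # about 389 animals
```

Like the closed-population models discussed above, this assumes no births, deaths, or (non-random) temporary emigration between the two occasions.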
Nondestructive, stereological estimation of canopy surface area
DEFF Research Database (Denmark)
Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.
2010-01-01
a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both...
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
linearity of the estimator is established under weak conditions. Indeed, we show that the bandwidth conditions employed are necessary in some cases. A bias-corrected version of the estimator is proposed and shown to be asymptotically linear under yet weaker bandwidth conditions. Consistency of an analog...
Nonparametric estimation in models for unobservable heterogeneity
Hohmann, Daniel
2014-01-01
Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.
Estimation of biochemical variables using quantum-behaved particle ...
African Journals Online (AJOL)
Due to the difficulties in the measurement of biochemical variables in fermentation processes, a soft-sensing model based on a radial basis function neural network had been established for estimating the variables. To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved ...
Estimating Conditional Distributions by Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1998-01-01
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications...
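The target such networks are trained toward is the conditional quantile, which minimizes the pinball (check) loss. As a minimal, non-network illustration of that objective, the unconditional case can be solved by gradient descent on the empirical pinball loss (synthetic data and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=1.0, size=10_000)
tau = 0.9

# gradient descent on the empirical pinball loss; its minimizer is the
# tau-quantile (here unconditional -- a network would condition on inputs)
q, lr = data.mean(), 0.05
for _ in range(5_000):
    grad = np.mean(data < q) - tau   # derivative of the mean pinball loss
    q -= lr * grad
```

After convergence, `q` matches the empirical 90% quantile (about ln 10 for an Exp(1) sample), which is exactly the quantity a quantile-output network estimates conditionally.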
Regression Equations for Birth Weight Estimation using ...
African Journals Online (AJOL)
In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...
Parameter estimation applied to physiological systems
Rideout, V.C.; Beneken, J.E.W.
Parameter estimation techniques are of ever-increasing interest in the fields of medicine and biology, as greater efforts are currently being made to describe physiological systems in explicit quantitative form. Although some of the techniques of parameter estimation as developed for use in other
Failing to Estimate the Costs of Offshoring
DEFF Research Database (Denmark)
Møller Larsen, Marcus
2016-01-01
This article investigates cost estimation errors in the context of offshoring. It is argued that an imprecise estimation of the costs related to implementing a firm activity in a foreign location has a negative impact on the process performance of that activity. Performance is deterred...
Estimating Gender Wage Gaps: A Data Update
McDonald, Judith A.; Thornton, Robert J.
2016-01-01
In the authors' 2011 "JEE" article, "Estimating Gender Wage Gaps," they described an interesting class project that allowed students to estimate the current gender earnings gap for recent college graduates using data from the National Association of Colleges and Employers (NACE). Unfortunately, since 2012, NACE no longer…
Differences between carbon budget estimates unravelled
Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Vuuren, Van Detlef P.; Riahi, Keywan; Allen, Myles; Knutti, Reto
2016-01-01
Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust
Neural Network for Estimating Conditional Distribution
DEFF Research Database (Denmark)
Schiøler, Henrik; Kulczycki, P.
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given.
Estimation of physical parameters in induction motors
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1994-01-01
Parameter estimation in induction motors is a field of great interest, because accurate models are needed for robust dynamic control of induction motors.
A Bootstrap Procedure of Propensity Score Estimation
Bai, Haiyan
2013-01-01
Propensity score estimation plays a fundamental role in propensity score matching for reducing group selection bias in observational data. To increase the accuracy of propensity score estimation, the author developed a bootstrap propensity score. The commonly used propensity score matching methods: nearest neighbor matching, caliper matching, and…
Estimating Loan-to-value Distributions
DEFF Research Database (Denmark)
Korteweg, Arthur; Sørensen, Morten
2016-01-01
We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filterin...
Embedding capacity estimation of reversible watermarking schemes
Indian Academy of Sciences (India)
Estimation of the embedding capacity is an important problem specifically in reversible multi-pass watermarking and is required for analysis before any image can be watermarked. In this paper, we propose an efficient method for estimating the embedding capacity of a given cover image under multi-pass embedding, ...
Kalman filter to update forest cover estimates
Raymond L. Czaplewski
1990-01-01
The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
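In the scalar case, the Kalman update that blends an expensive prior inventory with cheaper monitoring data reduces to a precision-weighted average. A minimal sketch with hypothetical numbers:

```python
def kalman_update(x_prior, var_prior, z, var_z):
    """Combine a model prediction with a new measurement (scalar Kalman filter)."""
    K = var_prior / (var_prior + var_z)   # Kalman gain: weight on the new data
    x_post = x_prior + K * (z - x_prior)
    var_post = (1 - K) * var_prior
    return x_post, var_post

# expensive inventory says 10 (variance 4); cheap monitoring says 12 (variance 1)
x, v = kalman_update(10.0, 4.0, 12.0, 1.0)
```

The posterior variance is smaller than either input variance, which is why inventory estimates adjusted with less precise monitoring data can still beat both sources alone.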
Varieties of Quantity Estimation in Children
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-01-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…
Application and comparison of groundwater recharge estimation ...
African Journals Online (AJOL)
groundwater withdrawal. Estimation of recharge is also becoming more important for contaminant transport; as aquifer management expands from clean up of existing contamination to aquifer protection by delineation of areas of high recharge. Both physical and chemical methods have been employed to estimate recharge ...
An Approximate Bayesian Fundamental Frequency Estimator
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2012-01-01
Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm of these quantities using Bayesian inference. The inference about the fundamental frequency...
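A full Bayesian treatment is beyond a few lines, but the core of fundamental frequency estimation, scoring candidate fundamentals by the spectral evidence their harmonics collect, can be approximated by a simple non-Bayesian harmonic-summation grid search. All signal parameters below are illustrative assumptions:

```python
import numpy as np

fs, f0_true, n = 8000, 220.0, 4096
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * f0_true * h * t) for h in (1, 2, 3))  # 3 harmonics
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def harmonic_sum(f0, n_harm=3):
    # sum the spectral magnitude at the bins closest to each harmonic of f0
    return sum(spec[np.argmin(np.abs(freqs - f0 * h))] for h in range(1, n_harm + 1))

cands = np.arange(100.0, 400.0, 1.0)
f0_hat = cands[np.argmax([harmonic_sum(f) for f in cands])]
```

The Bayesian approach in the paper additionally infers the model order (number of harmonics), which this fixed-order sketch simply assumes.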
New Concepts for Shipboard Sea State Estimation
DEFF Research Database (Denmark)
Nielsen, Ulrik D.; Bjerregård, Mikkel; Galeazzi, Roberto
2015-01-01
The wave buoy analogy is a tested means for shipboard sea state estimation. Basically, the estimation principle resembles that of a traditional wave rider buoy which relies, fundamentally, on transfer functions used to relate measured wave-induced responses and the unknown wave excitation. This p...
Moving Horizon Estimation on a Chip
2014-06-26
Cited references include: C.V. Rao, J.B. Rawlings, and D.Q. Mayne, "Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations," IEEE Transactions on Automatic Control.
Decommissioning Cost Estimating -The ''Price'' Approach
International Nuclear Information System (INIS)
Manning, R.; Gilmour, J.
2002-01-01
Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs
Estimating Runoff Coefficients Using Weather Radars
DEFF Research Database (Denmark)
Ahm, Malte; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2012-01-01
This paper presents a method for estimating runoff coefficients of urban drainage catchments based on a combination of high resolution weather radar data and insewer flow measurements. By utilising the spatial variability of the precipitation it is possible to estimate the runoff coefficients...
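Per rain event, the quantity being estimated reduces to the ratio of measured in-sewer runoff volume to the radar-estimated rainfall volume over the catchment. A sketch with hypothetical numbers:

```python
def runoff_coefficient(runoff_volume_m3, rain_depth_mm, area_m2):
    """Event runoff coefficient: measured runoff volume over rainfall volume."""
    rain_volume_m3 = (rain_depth_mm / 1000.0) * area_m2
    return runoff_volume_m3 / rain_volume_m3

# hypothetical event: 10 mm of radar-estimated rain on a 50 ha catchment,
# with 2,000 m3 of runoff measured in the sewer
phi = runoff_coefficient(2_000.0, 10.0, 50 * 10_000)   # 0.4
```

The paper's contribution is exploiting the spatial variability of radar rainfall to separate the contributions of sub-catchments; the per-event ratio above is the underlying definition.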
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Multiangulation position estimation performance analysis using
African Journals Online (AJOL)
AOA estimation techniques [12] include multiple signal classification (MUSIC), estimation of signal parameters via rotational invariance technique (ESPRIT), Pisarenko harmonic decomposition and Root-MUSIC. The classical techniques are based on beamforming and null-steering and require a relatively large number of.
Maximum likelihood estimation of exponential distribution under ...
African Journals Online (AJOL)
Maximum likelihood estimation of exponential distribution under type-ii censoring from imprecise data. ... Journal of Fundamental and Applied Sciences ... This paper deals with the estimation of exponential mean parameter under Type-II censoring scheme when the lifetime observations are fuzzy and are assumed to be ...
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve...
Estimating light-vehicle sales in Turkey
Directory of Open Access Journals (Sweden)
Ufuk Demiroğlu
2016-09-01
This paper is motivated by the surprising rapid growth of new light-vehicle sales in Turkey in 2015. Domestic sales grew 25%, dramatically surpassing the industry estimates of around 8%. Our approach is to inform the sales trend estimate with the information obtained from the light-vehicle stock (the number of cars and light trucks officially registered in the country, and the scrappage data. More specifically, we improve the sales trend estimate by estimating the trend of its stock. Using household data, we show that an important reason for the rapid sales growth is that an increasing share of household budgets is spent on automobile purchases. The elasticity of light-vehicle sales to cyclical changes in aggregate demand is high and robust; its estimates are around 6 with a standard deviation of about 0.5. The price elasticity of light-vehicle sales is estimated to be about 0.8, but the estimates are imprecise and not robust. We estimate the trend level of light-vehicle sales to be roughly 7 percent of the existing stock. A remarkable out-of-sample forecast performance is obtained for horizons up to nearly a decade by a regression equation using only a cyclical gap measure, the time trend and obvious policy dummies. Various specifications suggest that the strong 2015 growth of light-vehicle sales was predictable in late 2014.
Association measures and estimation of copula parameters ...
African Journals Online (AJOL)
We apply the inversion method of estimation, with several combinations of two among the four most popular association measures, to estimate the parameters of copulas in the case of bivariate distributions. We carry out a simulation study with two examples, namely Farlie-Gumbel-Morgenstern and Marshall-Olkin ...
Adaptive blood velocity estimation in medical ultrasound
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2007-01-01
This paper investigates the use of data-adaptive spectral estimation techniques for blood velocity estimation in medical ultrasound. Current commercial systems are based on the averaged periodogram, which requires a large observation window to give sufficient spectral resolution. Herein, we propose...
Estimated water use in Puerto Rico, 2010
Molina-Rivera, Wanda L.
2014-01-01
Water-use data were aggregated for the 78 municipios of the Commonwealth of Puerto Rico for 2010. Five major offstream categories were considered: public-supply water withdrawals and deliveries, domestic and industrial self-supplied water use, crop-irrigation water use, and thermoelectric-power freshwater use. One instream water-use category also was compiled: power-generation instream water use (thermoelectric saline withdrawals and hydroelectric power). Freshwater withdrawals for offstream use from surface-water [606 million gallons per day (Mgal/d)] and groundwater (118 Mgal/d) sources in Puerto Rico were estimated at 724 million gallons per day. The largest amount of freshwater withdrawn was by public-supply water facilities, estimated at 677 Mgal/d. Public-supply domestic water use was estimated at 206 Mgal/d. Fresh groundwater withdrawals by domestic self-supplied users were estimated at 2.41 Mgal/d. Industrial self-supplied withdrawals were estimated at 4.30 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 38.2 Mgal/d, or approximately 5 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 556 Mgal/d, and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities were estimated at 2,262 Mgal/d.
Fermi and the Art of Estimation
Indian Academy of Sciences (India)
Rajaram Nityananda. Keywords: Fermi estimate, order of magnitude, dimensional analysis. Rajaram Nityananda worked at the Raman Research Institute in Bangalore and the National Centre for Radio Astrophysics in Pune, and has now started teaching at the Indian Institute for.
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
[Statistical estimation of parameters in allometric equations].
Zotin, A A
2000-01-01
An algorithm for estimating allometric coefficients widely used in biological studies is presented. The coefficients can be estimated only when the relationship between logarithms of the approximated data meets the linearity criterion. The proposed algorithm was applied for the brain-body weight relationship in mammals and oxygen consumption rate-body weight relationship in amphibians.
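The estimation step described, fitting the allometric equation y = a·x^b after checking log-log linearity, is ordinary least squares on logarithms. A sketch with illustrative data (the coefficients below are hypothetical, not the paper's):

```python
import numpy as np

# allometry y = a * x**b is linear in log space: log y = log a + b * log x
x = np.array([10.0, 50.0, 200.0, 1000.0, 5000.0])   # e.g. body weight
y = 0.06 * x ** 0.75                                # e.g. brain weight (synthetic)
b, log_a = np.polyfit(np.log(x), np.log(y), 1)      # slope b, intercept log a
a = np.exp(log_a)
```

Before trusting `a` and `b`, the linearity criterion mentioned above can be checked, e.g. via the correlation coefficient of the log-transformed data.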
Uncertainty Measures of Regional Flood Frequency Estimators
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik
1995-01-01
Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...
TP89 - SIRZ Decomposition Spectral Estimation
Energy Technology Data Exchange (ETDEWEB)
Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.
7 CFR 58.135 - Bacterial estimate.
2010-01-01
... Specifications for Dairy Plants Approved for USDA Inspection and Grading Service 1 Quality Specifications for Raw Milk § 58.135 Bacterial estimate. (a) Methods of Testing. Milk shall be tested for bacterial estimate... representative sample of each producer's milk at least once each month at irregular intervals. Samples shall be...
Estimating Avogadro's number from skylight and airlight
Pesic, Peter
2005-01-01
Naked-eye determinations of the visual range yield order-of-magnitude estimates of Avogadro's number, using an argument of Rayleigh. Alternatively, by looking through a cardboard tube, we can compare airlight and skylight and give another estimate of this number using the law of atmospheres.
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
a vehicle driving on rough terrain. An accurate onboard estimate of vehicle mass is valuable to active safety systems as well as chassis and... Cited references include: Proc. Inst. Mech. Eng. Part D: J. Automobile Eng., Vol. 214, No. 8, pp. 851-864; and Kubus, D., Kröger, T., and Wahl, F.M. (2008), "On-line estimation of inertial parameters using a
Estimating total population size for Songbirds
Jonathan Bart
2005-01-01
A conviction has developed during the past few years within the avian conservation community that estimates of total population size are needed for many species, especially ones that warrant conservation action. For example, the recently completed monitoring plans for North American shorebirds and landbirds establish estimating population size as a major objective....
Predictive efficiency of ridge regression estimator
Directory of Open Access Journals (Sweden)
Tiwari Manoj
2017-01-01
In this article we consider the problem of prediction, within and outside the sample, for actual and average values of the study variable in the case of ordinary least squares and ridge regression estimators. Finally, the performance properties of the estimators are analyzed.
Systematic Approach for Decommissioning Planning and Estimating
International Nuclear Information System (INIS)
Dam, A. S.
2002-01-01
Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises
Multisensor simultaneous vehicle tracking and shape estimation
Elfring, J.; Appeldoorn, R.; Kwakkernaat, M.
2016-01-01
This work focuses on vehicle automation applications that require both the estimation of kinematic and geometric information of surrounding vehicles, e.g., automated overtaking or merging. Rather than using one sensor that is able to estimate a vehicle's geometry from each sensor frame, e.g., a
An assessment of roadway capacity estimation methods
Minderhoud, M.M.; Botma, H.; Bovy, P.H.L.
1996-01-01
This report is an attempt to describe existing capacity estimation methods with their characteristic data demands and assumptions. After studying the methods, one should have a better idea about the capacity estimation problem which can be encountered in traffic engineering. Moreover, decisions to
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Channel Estimation in DCT-Based OFDM
Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing
2014-01-01
This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least squares (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics. PMID:24757439
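For a conventional DFT-based OFDM system (the paper's DCT-based case differs in the transform, not in the pilot-aided idea), the LS estimate at pilot subcarriers is simply the received pilot divided by the known transmitted pilot. A sketch with assumed parameters, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                                    # subcarriers (illustrative)
h = np.array([0.8, 0.0, 0.5, 0.3])        # assumed sparse multipath response
H = np.fft.fft(h, N)                      # true channel frequency response

X = rng.choice([-1.0, 1.0], size=N)       # known BPSK pilots on all subcarriers
noise = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H * X + noise                         # received pilot symbols

H_ls = Y / X                              # per-subcarrier LS channel estimate
```

The MMSE estimator improves on this by additionally weighting with the channel and noise statistics, and the CS approach exploits the sparsity of `h` to estimate the channel from fewer pilots.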
Quantum statistical inference for density estimation
Energy Technology Data Exchange (ETDEWEB)
Silver, R.N.; Martz, H.F.; Wallstrom, T.
1993-11-01
A new penalized likelihood method for non-parametric density estimation is proposed, which is based on a mathematical analogy to quantum statistical physics. The mathematical procedure for density estimation is related to maximum entropy methods for inverse problems; the penalty function is a convex information divergence enforcing global smoothing toward default models, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing may be enforced by constraints on the expectation values of differential operators. Although the hyperparameters, covariance, and linear response to perturbations can be estimated by a variety of statistical methods, we develop the Bayesian interpretation. The linear response of the MAP estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood. The method is demonstrated on standard data sets.
Automated effective dose estimation in CT.
García, M Sánchez; Cameán, M Pombar; Busto, R Lobato; Vega, V Luna; Sueiro, J Mosquera; Martínez, C Otero; del Río, J R Sendón
2010-01-01
European regulations require the dose delivered to patients in CT examinations to be monitored and checked against reference levels. Dose estimation has traditionally been performed manually. This is time consuming and therefore it is typically performed on just a few patients and the results extrapolated to the general case. In this work an automated method to estimate the dose in CT studies is presented. The presented software downloads CT studies from the corporate picture archiving and communication system (PACS) and uses the information in the DICOM headers to perform the dose calculation. Automation enables dose estimations to be performed on a larger fraction of studies, enabling more significant comparisons with diagnostic reference levels (DRLs). A preliminary analysis involving 5800 studies is presented with details of dose distributions for selected CT protocols in use at a university hospital. Average doses are compared with DRLs. Effective dose estimations are also compared with estimations based on the dose-length product.
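A common shortcut for effective dose from DICOM header data is multiplying the dose-length product (DLP) by a body-region conversion coefficient. Whether the cited software uses exactly this method is not stated, so treat the sketch as a generic illustration; the k values are commonly quoted European guideline figures, included here as assumptions:

```python
# Effective dose from the dose-length product (DLP) reported in the DICOM
# header, using region-specific conversion coefficients k [mSv/(mGy*cm)].
# The k values below are illustrative guideline figures, not the study's.
K_FACTORS = {"head": 0.0021, "chest": 0.014, "abdomen": 0.015}

def effective_dose_mSv(dlp_mGy_cm, region):
    return K_FACTORS[region] * dlp_mGy_cm

dose = effective_dose_mSv(500.0, "chest")   # a 500 mGy*cm chest scan
```

Automating this per study is what makes population-level comparison against DRLs feasible, as the abstract describes.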
Solar constant values for estimating solar radiation
International Nuclear Information System (INIS)
Li, Huashan; Lian, Yongwang; Wang, Xianlong; Ma, Weibin; Zhao, Liang
2011-01-01
There are many solar constant values given and adopted by researchers, leading to confusion in estimating solar radiation. In this study, some solar constant values collected from the literature for estimating solar radiation with the Angstroem-Prescott correlation are tested in China using data measured between 1971 and 2000. According to a ranking method based on the t-statistic, a strategy to select the best solar constant value for estimating the monthly average daily global solar radiation with the Angstroem-Prescott correlation is proposed. Research highlights: the effect of the solar constant on estimating solar radiation is investigated; the investigation covers a diverse range of climate and geography in China; a strategy to select the best solar constant for estimating radiation is proposed.
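The Angstroem-Prescott correlation itself is a two-parameter linear model, H/H0 = a + b(n/N), so fitting a and b from monthly records is a one-line least-squares problem; the adopted solar constant enters through the computation of the extraterrestrial radiation H0. A sketch with synthetic, noise-free data:

```python
import numpy as np

# Angstroem-Prescott: H/H0 = a + b * (n/N), with H the global radiation,
# H0 the extraterrestrial radiation, n sunshine hours and N day length.
s = np.array([0.30, 0.45, 0.55, 0.62, 0.70, 0.75])   # n/N (illustrative)
kt = 0.25 + 0.50 * s                                 # H/H0, synthetic records
b, a = np.polyfit(s, kt, 1)                          # slope b, intercept a
```

Because H0 scales with the solar constant, a different adopted constant rescales the fitted coefficients, which is the sensitivity the study investigates.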
COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS
Directory of Open Access Journals (Sweden)
Ricardo de Melo e Silva Accioly
2017-03-01
Cost estimating relationships (CERs) are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before fitting a model. In the early phases they are the primary tool for cost estimating. In later phases they are usually used as an estimation validation tool and sometimes for benchmarking purposes. As in any other modeling methodology, there are a number of important steps to build a model. In this paper the process of building a CER to estimate the drilling cost of onshore wells will be addressed.
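A minimal CER of the kind described is a linear regression on normalized historical well data. The cost/depth relationship below is entirely synthetic and only illustrates the mechanics:

```python
import numpy as np

# A hypothetical linear CER: onshore drilling cost vs. measured depth,
# fitted on (already normalized) historical data by ordinary least squares.
depth_m = np.array([800.0, 1200.0, 1500.0, 2100.0, 2600.0, 3000.0])
cost_musd = 0.5 + 1.8e-3 * depth_m            # synthetic, noise-free history
A = np.column_stack([np.ones_like(depth_m), depth_m])
(beta0, beta1), *_ = np.linalg.lstsq(A, cost_musd, rcond=None)

new_well_cost = beta0 + beta1 * 2400.0        # estimate for a planned well
```

In practice a CER would use several normalized drivers (depth, rig rate, well type) and its residuals would set the contingency range, but the fit-then-predict mechanics are the same.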
Spectral Velocity Estimation in the Transverse Direction
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2013-01-01
A method for estimating the velocity spectrum for a fully transverse flow at a beam-to-flow angle of 90° is described. The approach is based on the transverse oscillation (TO) method, where an oscillation across the ultrasound beam is made during receive processing. A fourth-order estimator based on the correlation of the received signal is derived. A Fourier transform of the correlation signal yields the velocity spectrum. Performing the estimation for short data segments gives the velocity spectrum as a function of time as for ordinary spectrograms, and it also works for a beam-to-flow angle of 90°. The estimation scheme can reliably find the spectrum at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with the SARUS experimental scanner and a BK 8820e convex array transducer (BK Medical, Herlev, Denmark). A CompuFlow 1000 (Shelley Automation, Inc, Toronto, Canada...
Introduction to quantum-state estimation
Teo, Yong Siah
2016-01-01
Quantum-state estimation is an important field in quantum information theory that deals with the characterization of states of affairs for quantum sources. This book begins with background formalism in estimation theory to establish the necessary prerequisites. This basic understanding allows us to explore popular likelihood- and entropy-related estimation schemes that are suitable for an introductory survey on the subject. Discussions on practical aspects of quantum-state estimation ensue, with emphasis on the evaluation of tomographic performances for estimation schemes, experimental realizations of quantum measurements and detection of single-mode multi-photon sources. Finally, the concepts of phase-space distribution functions, which compatibly describe these multi-photon sources, are introduced to bridge the gap between discrete and continuous quantum degrees of freedom. This book is intended to serve as an instructive and self-contained medium for advanced undergraduate and postgraduate students to gra...
Nonparametric Collective Spectral Density Estimation and Clustering
Maadooliat, Mehdi
2017-04-12
In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
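The Whittle pseudo-likelihood fit mentioned above can be illustrated in miniature: for a candidate SDF f(ω), the criterion sums log f(ω) + I(ω)/f(ω) over the periodogram ordinates I(ω). The white-noise series and flat-spectrum model below are simple stand-ins, not the paper's shared basis-expansion model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)  # white-noise series (illustrative)

# Periodogram at Fourier frequencies (excluding frequency 0)
n = len(x)
I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
I = I[1:n // 2]

def whittle_loglik(spec):
    """Whittle pseudo-log-likelihood: -sum(log f(w) + I(w)/f(w))."""
    return -np.sum(np.log(spec) + I / spec)

# For white noise the SDF is flat; scan candidate variance levels
sig2 = np.linspace(0.5, 2.0, 151)
ll = [whittle_loglik(np.full_like(I, s)) for s in sig2]
best = sig2[int(np.argmax(ll))]  # should land near the true variance of 1
```

The paper replaces the flat spectrum with a log-SDF expanded in a common basis and fits all series jointly.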
Improving Density Estimation by Incorporating Spatial Information
Directory of Open Access Journals (Sweden)
Andrea L. Bertozzi
2010-01-01
Full Text Available Given discrete event data, we wish to produce a probability density that can model the relative probability of events occurring in a spatial region. Common methods of density estimation, such as Kernel Density Estimation, do not incorporate geographical information. Using these methods could result in nonnegligible portions of the support of the density in unrealistic geographic locations. For example, crime density estimation models that do not take geographic information into account may predict events in unlikely places such as oceans, mountains, and so forth. We propose a set of Maximum Penalized Likelihood Estimation methods based on Total Variation and H1 Sobolev norm regularizers in conjunction with a priori high resolution spatial data to obtain more geographically accurate density estimates. We apply this method to a residential burglary data set of the San Fernando Valley using geographic features obtained from satellite images of the region and housing density information.
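A minimal sketch of the idea of constraining a density estimate with spatial information: mass falling in "invalid" geography is zeroed out and the density renormalized. The paper's actual method uses TV/H1-penalized maximum likelihood; this sketch uses a plain Gaussian kernel sum, and the mask and bandwidth are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
events = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(200, 2))  # synthetic event locations

# Evaluate a Gaussian kernel sum on a grid, then zero out invalid geography and renormalize
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
h = 0.5  # bandwidth (assumed, not tuned)
d2 = ((grid[:, None, :] - events[None, :, :]) ** 2).sum(-1)
kde = np.exp(-d2 / (2 * h * h)).sum(1)

valid = grid[:, 0] + grid[:, 1] < 12.0  # hypothetical land mask (e.g., exclude "ocean" cells)
kde[~valid] = 0.0
kde /= kde.sum()  # renormalize so the density remains a probability over valid cells
```

Masking after estimation is the crudest version of the idea; the penalized-likelihood approach instead keeps the estimate geographically consistent while it is being fitted.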
Another look at the Grubbs estimators
Lombard, F.
2012-01-01
We consider estimation of the precision of a measuring instrument without the benefit of replicate observations on heterogeneous sampling units. Grubbs (1948) proposed an estimator which involves the use of a second measuring instrument, resulting in a pair of observations on each sampling unit. Since the precisions of the two measuring instruments are generally different, these observations cannot be treated as replicates. Very large sample sizes are often required if the standard error of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit variance to instrument variance is available. Our results are presented in the context of the evaluation of on-line analyzers. A data set from an analyzer evaluation is used to illustrate the methodology. © 2011 Elsevier B.V.
Depth-estimation-enabled compound eyes
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
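Not the COMPU-EYE algorithm itself, but the underlying triangulation relation between receptive-field disparity and distance can be sketched as depth = baseline × focal length / disparity; all numbers below are hypothetical:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulation-style range estimate: depth = baseline * focal / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Hypothetical numbers: 2 mm inter-receptor baseline, 500 px effective focal length;
# a larger disparity between overlapping receptive fields implies a nearer object
d = depth_from_disparity(0.002, 500.0, 4.0)
```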
Contractor-style tunnel cost estimating
International Nuclear Information System (INIS)
Scapuzzi, D.
1990-06-01
Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate will give a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant, and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase of work. The durations are multiplied by the cost of labor and equipment per unit of time and, along with the costs for materials and supplies, combine to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs
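A contractor-style roll-up as described (durations from advance rates, multiplied by time-based crew and equipment costs, plus materials) can be sketched as follows; the phases, quantities, and rates are invented for illustration:

```python
# Minimal contractor-style roll-up (illustrative figures, not real rates):
# duration = quantity / advance_rate; cost = duration * (crew + equipment) + materials
phases = [
    # (name, quantity_m, advance_rate_m_per_shift, crew_$_per_shift, equip_$_per_shift, materials_$)
    ("excavation",     1200,  6.0, 8000, 5000, 150000),
    ("ground support", 1200, 12.0, 6000, 2000, 220000),
    ("lining",         1200, 10.0, 7000, 3000, 400000),
]

total_shifts = total_cost = 0.0
for name, qty, rate, crew, equip, mats in phases:
    shifts = qty / rate                    # duration of this phase of work
    cost = shifts * (crew + equip) + mats  # time-based costs plus materials
    total_shifts += shifts
    total_cost += cost
```

The sensitivity analysis mentioned above falls out naturally: change one advance rate or crew size and re-run the roll-up.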
DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA
Huang, G; Nix, AR; Armour, SMD
2010-01-01
Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
2015-2016 Palila abundance estimates
Camp, Richard J.; Brinck, Kevin W.; Banko, Paul C.
2016-01-01
The palila (Loxioides bailleui) population was surveyed annually during 1998−2016 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2015 population was estimated at 852−1,406 birds (point estimate: 1,116) and the 2016 population was estimated at 1,494−2,385 (point estimate: 1,934). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2016; the proportion of the total annual detections in each count ranged from 46% to 56%; and there was no difference in the detection probability due to count sequence. Furthermore, conducting repeat counts improved the abundance estimates by reducing the width of the confidence intervals between 9% and 32% annually. This suggests that multiple counts do not affect bird or observer behavior and can be continued in the future to improve the precision of abundance estimates. Five palila were detected on supplemental survey stations in the Ka‘ohe restoration area, outside the core survey area but still within Palila Critical Habitat (one in 2015 and four in 2016), suggesting that palila are present in habitat that is recovering from cattle grazing on the southwest slope. The average rate of decline during 1998−2016 was 150 birds per year. Over the 18-year monitoring period, the estimated rate of change equated to a 58% decline in the population.
Can extinction rates be estimated without fossils?
Paradis, Emmanuel
2004-07-07
There is considerable interest in the possibility of using molecular phylogenies to estimate extinction rates. The present study aims at assessing the statistical performance of the birth-death model fitting approach to estimating speciation and extinction rates, by comparison to the approach using fossil data. A simulation-based approach was used: the diversification of a large number of lineages was simulated under a wide range of speciation and extinction rate values. The estimators obtained with fossils performed better than those without fossils. In the absence of fossils (e.g. with a molecular phylogeny), the speciation rate was correctly estimated in a wide range of situations; the bias of the corresponding estimator was close to zero for the largest trees. However, this estimator was substantially biased when the simulated extinction rate was high. On the other hand, the estimator of extinction rate was biased in a wide range of situations. Surprisingly, this bias was smaller with medium-sized trees. Some recommendations for interpreting results from a diversification analysis are given. Copyright 2003 Elsevier Ltd.
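The pure-birth core of such a simulation study can be sketched as follows: with k lineages, the next speciation event waits an exponential time with rate kλ, and the maximum-likelihood estimate of λ is the event count divided by the lineage-weighted elapsed time. This omits extinction entirely and is not the paper's full birth-death procedure:

```python
import random

random.seed(42)
lam_true = 0.8  # speciation rate; extinction is omitted in this pure-birth sketch

# Simulate a Yule process: with k lineages, the next speciation waits Exp(k * lam)
k, events = 2, 0
weighted_time = 0.0  # sum of (lineages * waiting time), the MLE denominator
while k < 200:
    w = random.expovariate(k * lam_true)
    weighted_time += k * w
    k += 1
    events += 1

lam_hat = events / weighted_time  # maximum-likelihood speciation rate estimate
```

Adding an extinction rate turns this into the birth-death setting where, as the abstract notes, estimation becomes far less well-behaved.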
Quantifying Uncertainty in Soil Volume Estimates
International Nuclear Information System (INIS)
Roos, A.D.; Hays, D.C.; Johnson, R.L.; Durham, L.A.; Winters, M.
2009-01-01
Proper planning and design for remediating contaminated environmental media require an adequate understanding of the types of contaminants and the lateral and vertical extent of contamination. In the case of contaminated soils, this generally takes the form of volume estimates that are prepared as part of a Feasibility Study for Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites and/or as part of the remedial design. These estimates are typically single values representing what is believed to be the most likely volume of contaminated soil present at the site. These single-value estimates, however, do not convey the level of confidence associated with the estimates. Unfortunately, the experience has been that pre-remediation soil volume estimates often significantly underestimate the actual volume of contaminated soils that are encountered during the course of remediation. This underestimation has significant implications, both technically (e.g., inappropriate remedial designs) and programmatically (e.g., establishing technically defensible budget and schedule baselines). Argonne National Laboratory (Argonne) has developed a joint Bayesian/geostatistical methodology for estimating contaminated soil volumes based on sampling results, that also provides upper and lower probabilistic bounds on those volumes. This paper evaluates the performance of this method in a retrospective study that compares volume estimates derived using this technique with actual excavated soil volumes for select Formerly Utilized Sites Remedial Action Program (FUSRAP) Maywood properties that have completed remedial action by the U.S. Army Corps of Engineers (USACE) New York District. (authors)
On accuracy of upper quantiles estimation
Directory of Open Access Journals (Sweden)
I. Markiewicz
2010-11-01
Full Text Available Flood frequency analysis (FFA entails the estimation of the upper tail of a probability density function (PDF of annual peak flows obtained from either the annual maximum series or partial duration series. In hydrological practice, the properties of various methods of upper quantiles estimation are identified with the case of known population distribution function. In reality, the assumed hypothetical model differs from the true one, and one cannot assess the magnitude of error caused by model misspecification with respect to any estimated statistic. The opinion about the accuracy of the methods of upper quantiles estimation formed from the case of known population distribution function is upheld. The above-mentioned issue is the subject of the paper. The accuracy of large quantile assessments obtained from the four estimation methods is compared for the two-parameter log-normal and log-Gumbel distributions and their three-parameter counterparts, i.e., the three-parameter log-normal and GEV distributions. The cases of true and false hypothetical models are considered. The accuracy of flood quantile estimates depends on the sample size and the distribution type (both true and hypothetical, and strongly depends on the estimation method. In particular, the maximum likelihood method loses its advantageous properties in case of model misspecification.
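As a small illustration of parametric upper-quantile estimation under an assumed two-parameter log-normal model (one of the distributions compared above), maximum likelihood fits μ and σ on the log scale and the p-quantile is exp(μ + σ·z_p). The synthetic "peak flow" sample is invented:

```python
import math
import random

random.seed(7)
sample = [random.lognormvariate(3.0, 0.5) for _ in range(500)]  # synthetic "peak flows"

# Maximum-likelihood fit of a two-parameter log-normal on the log scale
logs = [math.log(v) for v in sample]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((u - mu) ** 2 for u in logs) / len(logs))

# Upper (99%) quantile of the fitted distribution: exp(mu + sigma * z_0.99)
z99 = 2.3263  # standard normal 0.99 quantile
q99 = math.exp(mu + sigma * z99)
```

If the true distribution is not log-normal, this quantile inherits the model-misspecification error the abstract warns about, however precise the ML fit looks.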
Energy Technology Data Exchange (ETDEWEB)
Melaina, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Penev, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-09-01
This report compares hydrogen station cost estimates conveyed by expert stakeholders through the Hydrogen Station Cost Calculation (HSCC) to a select number of other cost estimates. These other cost estimates include projections based upon cost models and costs associated with recently funded stations.
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
Evaluating Expert Estimators Based on Elicited Competences
Directory of Open Access Journals (Sweden)
Hrvoje Karna
2015-07-01
Full Text Available Utilization of the expert effort estimation approach shows promising results when applied to the software development process. It is based on judgment and decision making and, due to its comparative advantages, is extensively used, especially in situations where classic models cannot be applied. This becomes even more accentuated in today's highly dynamic project environment. Confronted with these facts, companies are placing ever greater focus on their employees, specifically on their competences. Competences are defined as the knowledge, skills and abilities required to perform job assignments. During the effort estimation process, different underlying expert competences influence the outcome, i.e. the judgments experts express. A special problem here is the elicitation, from an input collection, of those competences that are responsible for accurate estimates. Based on these findings, different measures can be taken to enhance the estimation process. The approach used in the study presented in this paper was targeted at eliciting the expert estimator competences responsible for producing accurate estimates. Based on individual competence scores resulting from the performed modeling, experts were ranked using a weighted scoring method and their performance evaluated. Results confirm that experts with higher scores in the competences identified by the applied models in general exhibit higher accuracy during the estimation process. For the purpose of modeling, data mining methods were used, specifically the multilayer perceptron neural network and the classification and regression decision tree algorithms. Among others, the applied methods are suitable for the purpose of elicitation as, in a sense, they mimic the way human brains operate. Data used in the study were collected from real projects in a company specializing in the development of IT solutions in the telecom domain. The proposed model, the applied methodology for eliciting expert competences and the obtained results give evidence that in
Density estimation by maximum quantum entropy
Energy Technology Data Exchange (ETDEWEB)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-11-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.
Estimating state-contingent production functions
DEFF Research Database (Denmark)
Rasmussen, Svend; Karantininis, Kostas
The paper reviews the empirical problem of estimating state-contingent production functions. The major problem is that states of nature may not be registered and/or that the number of observations per state is low. Monte Carlo simulation is used to generate an artificial, uncertain production ... environment based on Cobb-Douglas production functions with state-contingent parameters. The parameters are subsequently estimated based on different sizes of samples using Generalized Least Squares and Generalized Maximum Entropy, and the results are compared. It is concluded that Maximum Entropy may ... be useful, but that further analysis is needed to evaluate the efficiency of this estimation method compared to traditional methods...
Software Estimation Demystifying the Black Art
McConnell, Steve
2009-01-01
Often referred to as the "black art" because of its complexity and uncertainty, software estimation is not as difficult or puzzling as people think. In fact, generating accurate estimates is straightforward-once you understand the art of creating them. In his highly anticipated book, acclaimed author Steve McConnell unravels the mystery to successful software estimation-distilling academic information and real-world experience into a practical guide for working software professionals. Instead of arcane treatises and rigid modeling techniques, this guide highlights a proven set of procedures,
Amplitude Models for Discrimination and Yield Estimation
Energy Technology Data Exchange (ETDEWEB)
Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-01
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
Directional Transverse Oscillation Vector Flow Estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2017-01-01
A method for estimating vector velocities using transverse oscillation (TO) combined with directional beamforming is presented. In Directional Transverse Oscillation (DTO) a normal focused field is emitted and the received signals are beamformed in the lateral direction transverse to the ultrasound...... beam to increase the amount of data for vector velocity estimation. The approach is self-calibrating as the lateral oscillation period is estimated from the directional signal through a Fourier transform to yield quantitative velocity results over a large range of depths. The approach was extensively...
Estimation of quasi-critical reactivity
International Nuclear Information System (INIS)
Racz, A.
1992-02-01
The bank of Kalman filters method for reactivity and neutron density estimation originally suggested by D'Attellis and Cortina is critically reviewed. It is pointed out that the procedure cannot be applied reliably in the form the authors proposed, due to filter divergence. An improved method, which is free from divergence problems, is presented as well. A new estimation technique is proposed and tested using computer simulation results. The procedure is applied to the estimation of small reactivity changes. (R.P.) 9 refs.; 2 figs.; 2 tabs
Solar radiation estimation based on the insolation
International Nuclear Information System (INIS)
Assis, F.N. de; Steinmetz, S.; Martins, S.R.; Mendez, M.E.G.
1998-01-01
A series of daily global solar radiation data measured by an Eppley pyranometer was used to test PEREIRA and VILLA NOVA’s (1997) model to estimate the potential of radiation based on the instantaneous values measured at solar noon. The model also allows estimating the parameters of PRESCOTT’s equation (1940) assuming a = 0.29 cos φ. The results demonstrated the model’s validity for the studied conditions. Simultaneously, the hypothesis of generalizing the use of radiation estimation formulas based on insolation, using K = K0 (0.29 cos φ + 0.50 n/N), was analysed and confirmed
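A direct transcription of the generalized Prescott-type formula K = K0(0.29 cos φ + 0.50 n/N), with φ the latitude, n the measured sunshine hours, and N the day length; the numeric inputs below are hypothetical, not the study's data:

```python
import math

def angstrom_prescott(K0, n, N, phi_deg, b=0.50):
    """Estimate global radiation K = K0 * (a + b * n/N) with a = 0.29 * cos(latitude)."""
    a = 0.29 * math.cos(math.radians(phi_deg))
    return K0 * (a + b * n / N)

# Hypothetical inputs: K0 = 30 MJ/m2/day, 8 h of sunshine out of 12 possible, latitude 31 deg
k_est = angstrom_prescott(30.0, 8.0, 12.0, 31.0)
```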
State Energy Data Report, 1991: Consumption estimates
Energy Technology Data Exchange (ETDEWEB)
1993-05-01
The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining SEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. SEDS exists for two principal reasons: (1) to provide State energy consumption estimates to the Government, policy makers, and the public; and (2) to provide the historical series necessary for EIA's energy models.
State energy data report 1995 - consumption estimates
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-12-01
The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sectors. The estimates are developed in the State Energy Data System (SEDS), which is maintained and operated by the Energy Information Administration (EIA). SEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public, and (2) to provide the historical series necessary for EIA's energy models.
Methodology for generating waste volume estimates
International Nuclear Information System (INIS)
Miller, J.Q.; Hale, T.; Miller, D.
1991-09-01
This document describes the methodology that will be used to calculate waste volume estimates for site characterization and remedial design/remedial action activities at each of the DOE Field Office, Oak Ridge (DOE-OR) facilities. This standardized methodology is designed to ensure consistency in waste estimating across the various sites and organizations that are involved in environmental restoration activities. The criteria and assumptions that are provided for generating these waste estimates will be implemented across all DOE-OR facilities and are subject to change based on comments received and actual waste volumes measured during future sampling and remediation activities. 7 figs., 8 tabs
Preliminary cost estimating for the nuclear industry
International Nuclear Information System (INIS)
Klumpar, I.V.; Soltz, K.M.
1985-01-01
The nuclear industry has higher costs for personnel, equipment, construction, and engineering than conventional industry, which means that cost estimation procedures may need adjustment. The authors account for the special technical and labor requirements of the nuclear industry in making adjustments to equipment and installation cost estimations. Using illustrative examples, they show that conventional methods of preliminary cost estimation are flexible enough for application to emerging industries if their cost structure is similar to that of the process industries. If not, modifications can provide enough engineering and cost data for a statistical analysis. 9 references, 14 figures, 4 tables
Multidimensional rare event probability estimation algorithm
Directory of Open Access Journals (Sweden)
Leonidas Sakalauskas
2013-09-01
Full Text Available This work presents a Monte Carlo Markov chain algorithm for estimating the frequencies of multi-dimensional rare events. The logits of the rare-event likelihood are modeled with a Poisson distribution whose parameters are distributed according to a multivariate normal law with unknown parameters, the mean vector and covariance matrix. The estimates of the unknown parameters are calculated by the maximum likelihood method. Equations are derived that must be satisfied by the model’s maximum likelihood parameter estimates. Positive definiteness of the estimated covariance matrices is controlled by calculating the ratio between the matrix maximum and minimum eigenvalues.
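The final control step, checking positive definiteness of an estimated covariance matrix through its extreme eigenvalue ratio, can be sketched as follows (the tolerance on the ratio is an assumption of this sketch):

```python
import numpy as np

def covariance_check(cov, max_ratio=1e6):
    """Check positive definiteness via the max/min eigenvalue ratio.

    Returns (is_acceptable, ratio). A non-positive smallest eigenvalue
    means the matrix is not positive definite at all.
    """
    eig = np.linalg.eigvalsh(cov)  # eigenvalues of a symmetric matrix, ascending
    if eig.min() <= 0:
        return False, float("inf")
    ratio = eig.max() / eig.min()
    return ratio <= max_ratio, ratio

ok, ratio = covariance_check(np.array([[2.0, 0.5],
                                       [0.5, 1.0]]))
```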
Semi-parametric estimation for ARCH models
Directory of Open Access Journals (Sweden)
Raed Alzghool
2018-03-01
Full Text Available In this paper, we conduct semi-parametric estimation for autoregressive conditional heteroscedasticity (ARCH model with Quasi likelihood (QL and Asymptotic Quasi-likelihood (AQL estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi likelihood (QL, Asymptotic Quasi-likelihood (AQL, Martingale difference, Kernel estimator
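A minimal quasi-likelihood fit of an ARCH(1) model, in the spirit of the QL approach (a Gaussian criterion in which only the first two conditional moments are assumed); the grid-search optimizer and parameter values are illustrative choices, not the paper's procedure:

```python
import numpy as np

# Simulate an ARCH(1) series: eps_t = sqrt(h_t) * z_t, h_t = omega + alpha * eps_{t-1}^2
rng = np.random.default_rng(3)
n, omega_true, alpha_true = 4000, 0.2, 0.5
eps = np.zeros(n)
for t in range(1, n):
    h = omega_true + alpha_true * eps[t - 1] ** 2
    eps[t] = np.sqrt(h) * rng.standard_normal()

def neg_ql(omega, alpha):
    """Negative Gaussian quasi-log-likelihood (up to constants) of ARCH(1)."""
    h = omega + alpha * eps[:-1] ** 2
    return np.sum(np.log(h) + eps[1:] ** 2 / h)

# Crude grid-search minimizer; a real fit would use a numerical optimizer
grid = [(w, a) for w in np.linspace(0.05, 0.5, 46) for a in np.linspace(0.1, 0.9, 81)]
omega_hat, alpha_hat = min(grid, key=lambda p: neg_ql(*p))
```

Because the criterion only uses conditional means and variances, the same fit goes through when the innovations are non-Gaussian, which is the point of relaxing the distributional assumptions.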
Development of realtime cognitive state estimator
International Nuclear Information System (INIS)
Takahashi, Makoto; Kitamura, Masashi; Yoshikaea, Hidekazu
2004-01-01
The realtime cognitive state estimator based on the set of physiological measures has been developed in order to provide valuable information on the human behavior during the interaction through the Man-Machine Interface. The artificial neural network has been adopted to categorize the cognitive states by using the qualitative physiological data pattern as the inputs. The laboratory experiments, in which the subjects' cognitive states were intentionally controlled by the task presented, were performed to obtain training data sets for the neural network. The developed system has been shown to be capable of estimating cognitive state with higher accuracy and realtime estimation capability has also been confirmed through the data processing experiments. (author)
Estimating π using an electrical circuit
Ocaya, R. O.
2017-01-01
The constant pi, or π, is one of the oldest and most recognizable irrational constants. It was known from the earliest days of mathematics and the sciences. Over the ages, many methods of varying complexity have been used to estimate π. This article presents a novel experimental method to estimate π using direct, digital voltmeter measurements on a simple precision full-wave rectifier circuit. We discuss the method in the context of error encroachment, and suggest the possibility to estimate total harmonic distortion simply. The experiment is repeatable and of value to the undergraduate physics and electronics laboratory, while adding to a celebration of π.
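The circuit idea rests on the identity that the average of a full-wave rectified sine of peak V_p is 2V_p/π, so π ≈ 2V_p/V_avg. A numerical stand-in for the voltmeter readings (the peak voltage and sample count are arbitrary):

```python
import math

# Simulate DC-voltmeter readings on an ideal full-wave rectifier driven by a sine of peak Vp:
# the average (DC) reading over a full period is 2*Vp/pi, so pi ~= 2*Vp / V_avg.
Vp = 5.0
samples = 100000
v_avg = sum(abs(Vp * math.sin(2 * math.pi * k / samples)) for k in range(samples)) / samples
pi_est = 2 * Vp / v_avg
```

In the physical experiment, rectifier non-idealities (diode drops, distortion) perturb V_avg, which is what lets the same setup hint at total harmonic distortion.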
On Carleman estimates with two large parameters
Energy Technology Data Exchange (ETDEWEB)
Le Rousseau, Jerome, E-mail: jlr@univ-orleans.fr [Jerome Le Rousseau. Universite d' Orleans, Laboratoire Mathematiques et Applications, Physique Mathematique d' Orleans, CNRS UMR 6628, Federation Denis-Poisson, FR CNRS 2964, B.P. 6759, 45067 Orleans cedex 2 (France)
2011-04-01
We provide a general framework for the analysis and the derivation of Carleman estimates with two large parameters. For an appropriate form of weight functions strong pseudo-convexity conditions are shown to be necessary and sufficient.
Cardiovascular risk estimation in older persons
DEFF Research Database (Denmark)
Cooney, Marie Therese; Selmer, Randi; Lindman, Anja
2016-01-01
AIMS: Estimation of cardiovascular disease risk, using SCORE (Systematic COronary Risk Evaluation) is recommended by European guidelines on cardiovascular disease prevention. Risk estimation is inaccurate in older people. We hypothesized that this may be due to the assumption, inherent in current...... risk estimation systems, that risk factors function similarly in all age groups. We aimed to derive and validate a risk estimation function, SCORE O.P., solely from data from individuals aged 65 years and older. METHODS AND RESULTS: 20,704 men and 20,121 women, aged 65 and over and without pre.......73 to 0.75). Calibration was also reasonable, Hosmer-Lemeshow goodness of fit test: 17.16 (men), 22.70 (women). Compared with the original SCORE function extrapolated to the ≥65 years age group discrimination improved, p = 0.05 (men), p risk charts were constructed. On simulated...
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over ... a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions ... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating ...
State estimation for wave energy converters
Energy Technology Data Exchange (ETDEWEB)
Bacelli, Giorgio; Coe, Ryan Geoffrey
2017-04-01
This report gives a brief discussion and examples on the topic of state estimation for wave energy converters (WECs). These methods are intended for use to enable real-time closed loop control of WECs.
Traffic volume estimation using network interpolation techniques.
2013-12-01
Kriging is an interpolation method frequently used in geography that estimates unknown values at given locations by taking the distances among locations into account. When it is used in the transportation field, network distanc...
Time Delay Estimation Algorithms for Echo Cancellation
Directory of Open Access Journals (Sweden)
Kirill Sakhnov
2011-01-01
Full Text Available The following case study describes how to eliminate echo in a VoIP network using delay estimation algorithms. It is known that echo with long transmission delays becomes more noticeable to users. Thus, time delay estimation, as a part of echo cancellation, is an important topic in the transmission of voice signals over packet-switching telecommunication systems. An echo delay problem associated with IP-based transport networks is discussed in the following text. The paper introduces a comparative study of time delay estimation algorithms used for estimation of the true time delay between two speech signals. Experimental results of MATLAB simulations that describe the performance of several methods based on cross-correlation, normalized cross-correlation and generalized cross-correlation are also presented in the paper.
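The basic cross-correlation method compared in the paper can be sketched as follows. This is a minimal sketch with a synthetic signal: integer-sample lags only, with none of the normalization or generalized weighting the paper also evaluates.

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the integer-sample delay of y relative to x.

    Returns the lag that maximises the cross-correlation; a positive
    result means y is a delayed copy of x.
    """
    corr = np.correlate(y, x, mode="full")       # lags -(len(x)-1) .. len(y)-1
    lags = np.arange(-len(x) + 1, len(y))
    return int(lags[np.argmax(corr)])

rng = np.random.default_rng(0)
x = rng.standard_normal(256)                      # "far-end" reference signal
true_delay = 17
y = np.concatenate([np.zeros(true_delay), x])[: len(x)]  # delayed echo of x
print(estimate_delay(x, y))  # 17
```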
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated by analysis of simulated responses of a 4 DOF system, for which the exact modal parameters are known. This estimation approach entails modal identification of the natural eigenfrequencies, mode shapes and damping ratios by the frequency domain decomposition technique. Scaled mode shapes are determined by use of the mass change method. The problem of inverting the often singular or nearly singular transfer function matrix is solved by the singular value decomposition technique using a limited number of singular values. The dependence of the eigenfrequencies on the accuracy of the scaling factors is investigated...
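The inverse-filtering step — inverting a nearly singular transfer matrix while keeping only a limited number of singular values — can be illustrated with a truncated pseudo-inverse. This is a generic sketch, not the paper's implementation; the 3x3 matrix and load vector below are made up.

```python
import numpy as np

def truncated_pinv(H, k):
    """Pseudo-inverse of H keeping only the k largest singular values.

    Discarding the small singular values regularises the inversion of a
    nearly singular transfer matrix.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

# Nearly singular "transfer matrix": third row ~ sum of the first two.
H = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [1.0, 1.0, 0.3 + 1e-10]])
F_true = np.array([1.0, 2.0, 0.0])
X = H @ F_true                       # simulated "measured" response
F_est = truncated_pinv(H, 2) @ X     # loads estimated from 2 singular values
print(np.round(F_est, 3))
```

Keeping only two singular values sacrifices the component of the load along the nearly singular direction, but keeps the inversion numerically stable.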
Climate change trade measures : estimating industry effects
2009-06-01
Estimating the potential effects of domestic emissions pricing for industries in the United States is complex. If the United States were to regulate greenhouse gas emissions, production costs could rise for certain industries and could cause output, ...
Abundance estimation of spectrally similar minerals
CSIR Research Space (South Africa)
Debba, Pravesh
2009-07-01
Full Text Available This paper evaluates a spectral unmixing method for estimating the partial abundance of spectrally similar minerals in complex mixtures. The method requires formulation of a linear function of individual spectra of individual minerals. The first...
On state estimation in electric drives
International Nuclear Information System (INIS)
Leon, A.E.; Solsona, J.A.
2010-01-01
This paper deals with state estimation in electric drives. On one hand a nonlinear observer is designed, whereas on the other hand the speed state is estimated by using the dirty derivative of the measured position. The dirty derivative is an approximate version of the perfect derivative which introduces an estimation error that has seldom been analyzed in drive applications. For this reason, our proposal in this work is to illustrate several aspects of the performance of the dirty derivator in the presence of both model uncertainties and noisy measurements. To this end, a case study is introduced. The case study considers rotor speed estimation in a permanent magnet stepper motor, assuming that rotor position and electrical variables are measured. In addition, this paper presents comments on the connection between dirty derivators and observers; advantages and disadvantages of both techniques are also discussed.
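The dirty derivative mentioned above is the filtered differentiator s/(tau*s + 1). A minimal backward-difference discretisation looks like the sketch below; the sample time, filter constant, and constant-speed trajectory are illustrative, not taken from the paper.

```python
def dirty_derivative(positions, dt, tau):
    """Approximate d(position)/dt with a first-order filtered ("dirty")
    differentiator s / (tau*s + 1), discretised with a backward difference.

    Larger tau means more noise filtering but slower response.
    """
    a = tau / (tau + dt)
    b = 1.0 / (tau + dt)
    v = 0.0
    prev = positions[0]
    out = []
    for p in positions:
        v = a * v + b * (p - prev)   # filtered difference quotient
        prev = p
        out.append(v)
    return out

# Position ramp at constant speed 2.0: the estimate converges to 2.0.
dt = 0.01
pos = [2.0 * dt * k for k in range(200)]
speeds = dirty_derivative(pos, dt, tau=0.05)
print(round(speeds[-1], 3))  # 2.0
```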
State Alcohol-Impaired-Driving Estimates
... 2012 Data DOT HS 812 017 May 2014 State Alcohol-Impaired-Driving Estimates This fact sheet contains ... alcohol involvement in fatal crashes for the United States and individually for the 50 States, the District ...
Global Population Density Grid Time Series Estimates
National Aeronautics and Space Administration — Global Population Density Grid Time Series Estimates provide a back-cast time series of population density grids based on the year 2000 population grid from SEDAC's...
Estimation of railroad capacity using parametric methods.
2013-12-01
This paper reviews different methodologies used for railroad capacity estimation and presents a user-friendly method to measure capacity. The objective of this paper is to use multivariate regression analysis to develop a continuous relation of the d...
Maneuver Estimation Model for Geostationary Orbit Determination
National Research Council Canada - National Science Library
Hirsch, Brian J
2006-01-01
.... The Clohessy-Wiltshire equations were used to model the relative motion of a geostationary satellite about its intended location, and a nonlinear least squares algorithm was developed to estimate the satellite trajectories.
Global Population Count Grid Time Series Estimates
National Aeronautics and Space Administration — Global Population Count Grid Time Series Estimates provide a back-cast time series of population grids based on the year 2000 population grid from SEDAC's Global...
Estimating suicide occurrence statistics using Google Trends
Directory of Open Access Journals (Sweden)
Ladislav Kristoufek
2016-11-01
Full Text Available Abstract Data on the number of people who have committed suicide tends to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms ‘depression’ and ‘suicide’ relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term ‘depression’ is related to fewer suicides, whereas a greater number of searches for the term ‘suicide’ is related to more suicides. Data on suicide related search behaviour can be used to improve current estimates of the number of suicide occurrences.
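The core claim — that adding search-volume predictors to a purely autoregressive baseline reduces estimation error — can be illustrated with ordinary least squares on synthetic data. All numbers below are made up for illustration; this is not the England suicide series or the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
searches = rng.standard_normal(n)                 # standardised search volume
outcomes = 100 + 5.0 * searches + rng.standard_normal(n)

# Baseline predicts this period from the previous period only;
# the augmented model adds the search predictor.
y, y_lag, x = outcomes[1:], outcomes[:-1], searches[1:]
A_base = np.column_stack([np.ones(n - 1), y_lag])
A_full = np.column_stack([np.ones(n - 1), y_lag, x])

def rmse(A, y):
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((y - A @ beta) ** 2)))

print(rmse(A_base, y) > rmse(A_full, y))  # True: search data lowers the error
```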
Reconciling Medical Expenditure Estimates from the MEPS...
U.S. Department of Health & Human Services — Reconciling Medical Expenditure Estimates from the MEPS and NHEA, 2007, published in Volume 2, Issue 4 of the Medicare and Medicaid Research Review, provides a...
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation...... with algorithmic constraints: thresholding and Chebychev approximation classes are studied, respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space, and we prove...... that assuming a certain structure of the dictionary is sufficient and (almost) necessary to obtain stronger results. We give examples of classical dictionaries in L^p spaces and modulation spaces where our results recover some known Jackson type estimates, and discuss some new estimates they provide....
Nonlinear approximation with dictionaries, I: Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
We study various approximation classes associated with $m$-term approximation by elements from a (possibly redundant) dictionary in a Banach space. The standard approximation class associated with the best $m$-term approximation is compared to new classes defined by considering $m$-term approximation...... with algorithmic constraints: thresholding and Chebychev approximation classes are studied respectively. We consider embeddings of the Jackson type (direct estimates) of sparsity spaces into the mentioned approximation classes. General direct estimates are based on the geometry of the Banach space......, and we prove that assuming a certain structure of the dictionary is sufficient and (almost) necessary to obtain stronger results. We give examples of classical dictionaries in $L^p$ spaces and modulation spaces where our results recover some known Jackson type estimates, and discuss some new estimates...
Nonparametric estimation of location and scale parameters
Potgieter, C.J.
2012-12-01
Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
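A cruder cousin of this estimation problem can be sketched by matching robust quantiles of the two samples. This is an illustrative stand-in under minimal distributional assumptions, not the asymptotic-likelihood estimator of the paper.

```python
import numpy as np

def location_scale_fit(x, y):
    """Estimate (mu, sigma) such that Y ~ mu + sigma * X, by matching
    the median and interquartile range of the two samples.
    A moment-free sketch, not the paper's asymptotic-likelihood method.
    """
    qx = np.percentile(x, [25, 50, 75])
    qy = np.percentile(y, [25, 50, 75])
    sigma = (qy[2] - qy[0]) / (qx[2] - qx[0])   # ratio of IQRs
    mu = qy[1] - sigma * qx[1]                  # align the medians
    return mu, sigma

rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)
y = 3.0 + 2.0 * rng.standard_normal(10_000)     # same family: mu=3, sigma=2
mu, sigma = location_scale_fit(x, y)
print(round(mu, 1), round(sigma, 1))  # close to 3.0 and 2.0
```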
Budget estimates, fiscal years 1994--1995
International Nuclear Information System (INIS)
1993-04-01
This report contains the fiscal year budget justification to Congress. The budget provides estimates for salaries and expenses and for the Office of the Inspector General for fiscal years 1994 and 1995
Food irradiation : estimates of cost of processing
International Nuclear Information System (INIS)
Krishnamurthy, K.; Bongirwar, D.R.
1987-01-01
For estimating the cost of food irradiation, three factors have to be taken into consideration. These are: (1) the capital cost incurred on the irradiation device and its installation, (2) the recurring or running cost, which includes maintenance cost and operational expenditure, and (3) the product-specific cost, dependent on factors specific to the food item to be processed, its storage, handling and distribution. A simple method is proposed to provide estimates of capital costs and running costs, and it is applied to prepare a detailed estimate of costs for irradiation processing of onions and fish in India. The cost of processing onions worked out to be between Rs 40 and 120 per 1000 kg, and for fish Rs 354 per 1000 kg. These estimates do not take into account transportation costs and fluctuations in marketing procedures. (M.G.B.). 7 tables
Strichartz estimates on $\alpha$-modulation spaces
Directory of Open Access Journals (Sweden)
Weichao Guo
2013-05-01
Full Text Available In this article, we consider some dispersive equations, including Schrödinger equations, nonelliptic Schrödinger equations, and wave equations. We develop some Strichartz estimates in the framework of $\alpha$-modulation spaces.
On state estimation in electric drives
Energy Technology Data Exchange (ETDEWEB)
Leon, A.E., E-mail: aleon@ymail.co [Instituto de Investigaciones en Ingenieria Electrica (IIIE) ' Alfredo Desages' (UNS-CONICET), Departamento de Ingenieria Electrica y de Computadoras, Universidad Nacional del Sur - UNS, 1253 Alem Avenue, P.O. 8000, Bahia Blanca (Argentina); Solsona, J.A., E-mail: jsolsona@uns.edu.a [Instituto de Investigaciones en Ingenieria Electrica (IIIE) ' Alfredo Desages' (UNS-CONICET), Departamento de Ingenieria Electrica y de Computadoras, Universidad Nacional del Sur - UNS, 1253 Alem Avenue, P.O. 8000, Bahia Blanca (Argentina)
2010-03-15
This paper deals with state estimation in electric drives. On one hand a nonlinear observer is designed, whereas on the other hand the speed state is estimated by using the dirty derivative of the measured position. The dirty derivative is an approximate version of the perfect derivative which introduces an estimation error that has seldom been analyzed in drive applications. For this reason, our proposal in this work is to illustrate several aspects of the performance of the dirty derivator in the presence of both model uncertainties and noisy measurements. To this end, a case study is introduced. The case study considers rotor speed estimation in a permanent magnet stepper motor, assuming that rotor position and electrical variables are measured. In addition, this paper presents comments on the connection between dirty derivators and observers; advantages and disadvantages of both techniques are also discussed.
ROAD TRAFFIC ESTIMATION USING BLUETOOTH SENSORS
Directory of Open Access Journals (Sweden)
Monika N. BUGDOL
2017-09-01
Full Text Available The Bluetooth standard is a low-cost, very popular communication protocol offering a wide range of applications in many fields. In this paper, a novel system for road traffic estimation using Bluetooth sensors is presented. The system consists of three main modules: filtration, statistical analysis of historical data, and traffic estimation and prediction. The filtration module is responsible for the classification of road users and for detecting measurements that should be removed. Traffic estimation is performed on the basis of the data collected by Bluetooth measuring devices and information on external conditions (e.g., temperature), all of which have been gathered in the city of Bielsko-Biała (Poland). The obtained results are very promising: the smallest average relative error between the number of cars estimated by the model and the actual traffic was less than 10%.
A simple method to estimate interwell autocorrelation
Energy Technology Data Exchange (ETDEWEB)
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
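Of the three semivariogram models the paper considers, the spherical one can be written down directly. This is the standard textbook form; the parameter names below are ours, not the paper's notation.

```python
import numpy as np

def spherical_semivariogram(h, sill, rng_a):
    """Spherical semivariogram: rises from 0 to `sill` at range `rng_a`,
    then stays flat (no spatial correlation beyond the range)."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h < rng_a, g, sill)

# Lags at 0, half the range, the range, and beyond the range.
h = np.array([0.0, 5.0, 10.0, 20.0])
print(spherical_semivariogram(h, sill=1.0, rng_a=10.0))
```

Exponential and truncated fractal variants would replace the piecewise cubic with an exponential or power-law growth curve, respectively.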
Dynamic travel time estimation using regression trees.
2008-10-01
This report presents a methodology for travel time estimation by using regression trees. The dissemination of travel time information has become crucial for effective traffic management, especially under congested road conditions. In the absence of c...
Multimodal location estimation of videos and images
Friedland, Gerald
2015-01-01
This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. · Discusses localization of multimedia data; · Examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); · Covers data-driven as well as semantic location estimation.
Error estimation and adaptivity for incompressible hyperelasticity
Whiteley, J.P.
2014-04-30
SUMMARY: A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.
Intraocular pressure estimation using proper orthogonal decomposition
CSIR Research Space (South Africa)
Botha, N
2012-07-01
Full Text Available Glaucoma is the second leading cause of irreversible blindness. The primary indicator for glaucoma is an elevated intraocular pressure, which is estimated by means of contact or non-contact tonometry. However, these techniques do not accurately...
An estimate of global glacier volume
Directory of Open Access Journals (Sweden)
A. Grinsted
2013-01-01
Full Text Available I assess the feasibility of using multivariate scaling relationships to estimate glacier volume from glacier inventory data. Scaling laws are calibrated against volume observations optimized for the specific purpose of estimating total global glacier ice volume. I find that adjustments for continentality and elevation range improve skill of area–volume scaling. These scaling relationships are applied to each record in the Randolph Glacier Inventory, which is the first globally complete inventory of glaciers and ice caps. I estimate that the total volume of all glaciers in the world is 0.35 ± 0.07 m sea level equivalent, including ice sheet peripheral glaciers. This is substantially less than a recent state-of-the-art estimate. Area–volume scaling bias issues for large ice masses, and incomplete inventory data are offered as explanations for the difference.
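Single-variable area-volume scaling, which the paper extends with continentality and elevation-range adjustments, has the power-law form V = c * A**gamma. The sketch below uses commonly cited coefficient values purely for illustration; they are not the paper's calibrated multivariate coefficients, and the glacier areas are made up.

```python
# Area-volume scaling: volume (km^3) from area (km^2).
# c = 0.034 and gamma = 1.375 are often-quoted illustrative values.
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    return c * area_km2 ** gamma

# A toy "inventory" of four glaciers, summed to a total volume.
areas_km2 = [0.5, 2.0, 10.0, 150.0]
total = sum(glacier_volume_km3(a) for a in areas_km2)
print(round(total, 1))
```

Note how the largest glacier dominates the total: this is why the abstract highlights scaling bias for large ice masses as a source of disagreement between estimates.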
A brief introduction to particle number estimation
DEFF Research Database (Denmark)
Dorph-Petersen, Karl-Anton; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
1998-01-01
The principle of particle number estimation using the disector is described emphasising the practical similarities and differences in the application of the principle in biomedicine and non-biological sciences....
Size Estimation with Night Vision Goggles
National Research Council Canada - National Science Library
Zalevski, Anna
2001-01-01
.... The results for unaided viewing were in accord with the law of size constancy that predicts that accurate size estimation is a result of observers taking into account the distance at which objects are located...
Energy estimates of cosmic ray events
International Nuclear Information System (INIS)
Dar, A.; Otterlund, I.; Stenlund, E.
1978-12-01
We propose new methods for estimating the energy of the incident particles in high energy cosmic ray collisions. We demonstrate their validity in emulsion experiments at laboratory accelerators. (author)
Estimating uncertainty of data limited stock assessments
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro
2017-01-01
Many methods exist to assess the fishing status of data-limited stocks; however, little is known about the accuracy or the uncertainty of such assessments. Here we evaluate a new size-based data-limited stock assessment method by applying it to well-assessed, data-rich fish stocks treated as data-limited...... Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock assessments remarkably well...... and the official assessments fell well within the uncertainty bounds. The estimation of spawning stock biomass followed the same trends as the official assessment, but not the same levels. We conclude that the data-limited assessment method can be used for stock assessment...
Cancer Related-Knowledge - Small Area Estimates
These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey, and auxiliary variables obtained from relevant sources and borrow strength from other areas with similar characteristics.
Cost Estimate for Gun Liner Emplacement
2011-08-01
the gun tube. (The Rowan cost estimate includes the extra thermal soak for the explosive bonding process.) The cost for the extra stress relief... nuts for optimum performance. GUIDANCE SYSTEM: Cylinder guidance is provided by the rod bushing