Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H. H.; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
A review of instrumental variable estimators for Mendelian randomization.
Burgess, Stephen; Small, Dylan S; Thompson, Simon G
2017-10-01
Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
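The ratio method and the two-stage approach surveyed above are easy to sketch. The following is a minimal illustration, not from the paper: a simulated binary instrument, a linear model with a homogeneous causal effect of 0.5, and an unmeasured confounder. With a single instrument the two estimators coincide.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
U = rng.normal(size=n)                  # unmeasured confounder
Z = rng.binomial(1, 0.3, size=n)        # instrument (e.g. a genetic variant)
X = 0.8 * Z + U + rng.normal(size=n)    # exposure, affected by Z and U
Y = 0.5 * X + U + rng.normal(size=n)    # outcome; true causal effect = 0.5

# Ratio (Wald) estimator: cov(Z, Y) / cov(Z, X)
beta_ratio = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

# Two-stage least squares: regress X on Z, then regress Y on the fitted values
g1, g0 = np.polyfit(Z, X, 1)                  # first stage: X ~ Z
beta_2sls, _ = np.polyfit(g0 + g1 * Z, Y, 1)  # second stage: Y ~ X_hat

print(round(beta_ratio, 3), round(beta_2sls, 3))
```

Naive OLS of Y on X would be biased upward here, because U raises both X and Y; the instrument strips out that confounded variation.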
Instrumental variable estimation in a survival context
DEFF Research Database (Denmark)
Tchetgen Tchetgen, Eric J; Walter, Stefan; Vansteelandt, Stijn
2015-01-01
The instrumental variable (IV) approach is very well developed in the context of linear regression and also for certain generalized linear models with a nonlinear link function. However, IV methods are not as well developed for regression analysis with a censored survival outcome. In this article, we develop the IV approach for regression analysis in a survival context, primarily under an additive hazards model, for which we describe 2 simple methods for estimating causal effects. The first method is a straightforward 2-stage regression approach analogous to 2-stage least squares commonly used for IV analysis in linear regression. In this approach, the fitted value from a first-stage regression of the exposure on the IV is entered in place of the exposure in the second-stage hazard model to recover a valid estimate of the treatment effect of interest. The second method is a so-called control function approach, which entails adding...
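The control function idea that the excerpt breaks off on can be sketched outside the survival setting. A toy illustration, with a linear second stage standing in for the additive hazards model and all numbers invented: the first-stage residual is added to the outcome model, where it absorbs the confounded part of the exposure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
U = rng.normal(size=n)                 # unmeasured confounder
Z = rng.normal(size=n)                 # instrument
X = Z + U + rng.normal(size=n)         # exposure
Y = 0.3 * X + U + rng.normal(size=n)   # outcome; true causal effect = 0.3

# First stage: regress the exposure on the instrument, keep the residual
g1, g0 = np.polyfit(Z, X, 1)
resid = X - (g0 + g1 * Z)

# Control function: include the first-stage residual as an extra regressor;
# it absorbs the confounded part of X, so the X coefficient is causal
A = np.column_stack([np.ones(n), X, resid])
beta = np.linalg.lstsq(A, Y, rcond=None)[0]
print(round(beta[1], 3))
```

In the survival version described above, the same residual would be added to the second-stage hazard model instead of a linear model.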
Rassen, Jeremy A; Brookhart, M Alan; Glynn, Robert J; Mittleman, Murray A; Schneeweiss, Sebastian
2009-12-01
The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of "exchangeability" between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects.
Instrumental variable estimation of treatment effects for duration outcomes
G.E. Bijwaard (Govert)
2007-01-01
In this article we propose and implement an instrumental variable estimation procedure to obtain treatment effects on duration outcomes. The method can handle the typical complications that arise with duration data of time-varying treatment and censoring. The treatment effect we
Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.
Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric
2018-07-01
Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
Instrumental variables estimates of peer effects in social networks.
An, Weihua
2015-03-01
Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, etc. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use the IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violation of exogeneity of the IVs. I show that measurement error in smoking can lead to both under- and imprecise estimations of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.
Instrumental variables estimation under a structural Cox model
Martinussen, Torben; Nørbo Sørensen, Ditte; Vansteelandt, Stijn
2017-01-01
Instrumental variable (IV) analysis is an increasingly popular tool for inferring the effect of an exposure on an outcome, as witnessed by the growing number of IV applications in epidemiology, for instance. The majority of IV analyses of time-to-event endpoints are, however, dominated by heurist...
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
Burgess, Stephen; Thompson, Simon G; Thompson, Grahame
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...
LARF: Instrumental Variable Estimation of Causal Effects through Local Average Response Functions
Directory of Open Access Journals (Sweden)
Weihua An
2016-07-01
LARF is an R package that provides instrumental variable estimation of treatment effects when both the endogenous treatment and its instrument (i.e., the treatment inducement) are binary. The method (Abadie 2003) involves two steps. First, pseudo-weights are constructed from the probability of receiving the treatment inducement. By default, LARF estimates the probability by a probit regression. It also provides semiparametric power series estimation of the probability and allows users to employ other external methods to estimate the probability. Second, the pseudo-weights are used to estimate the local average response function conditional on treatment and covariates. LARF provides both least squares and maximum likelihood estimates of the conditional treatment effects.
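The two steps can be sketched with Abadie's (2003) kappa pseudo-weights. This is a numpy illustration of the idea, not the LARF package itself: the inducement probability is estimated by stratum means rather than a probit, and the simulated design assumes a single binary covariate and no always-takers.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
X = rng.binomial(1, 0.5, size=n)      # observed covariate
Z = rng.binomial(1, 0.4 + 0.2 * X)    # binary treatment inducement
C = rng.binomial(1, 0.6, size=n)      # latent compliers (60%)
D = Z * C                             # binary treatment (no always-takers)
Y = 1.0 + 2.0 * D + 0.5 * X + rng.normal(size=n)  # true effect = 2.0

# Step 1: estimate p(Z=1 | X) -- stratum means stand in for a probit here
p_hat = np.where(X == 1, Z[X == 1].mean(), Z[X == 0].mean())

# Step 2: Abadie's pseudo-weights; they are negative for (Z=1, D=0) units,
# so we solve the weighted normal equations directly
kappa = 1.0 - D * (1 - Z) / (1 - p_hat) - (1 - D) * Z / p_hat
A = np.column_stack([np.ones(n), D, X])
beta = np.linalg.solve(A.T @ (kappa[:, None] * A), A.T @ (kappa * Y))
print(round(beta[1], 2))
```

The weighted least squares fit recovers the treatment coefficient among compliers, which is the local average response the package estimates.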
Robust best linear estimation for regression analysis using surrogate and instrumental variables.
Wang, C Y
2012-04-01
We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J.
2017-01-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elem...
Dunn, Abe
2016-07-01
This paper takes a different approach to estimating demand for medical care that uses the negotiated prices between insurers and providers as an instrument. The instrument is viewed as a textbook "cost shifting" instrument that impacts plan offerings, but is unobserved by consumers. The paper finds a price elasticity of demand of around -0.20, matching the elasticity found in the RAND Health Insurance Experiment. The paper also studies within-market variation in demand for prescription drugs and other medical care services and obtains comparable price elasticity estimates. Published by Elsevier B.V.
Martinussen, Torben; Vansteelandt, Stijn; Tchetgen Tchetgen, Eric J; Zucker, David M
2017-12-01
The use of instrumental variables for estimating the effect of an exposure on an outcome is popular in econometrics, and increasingly so in epidemiology. This increasing popularity may be attributed to the natural occurrence of instrumental variables in observational studies that incorporate elements of randomization, either by design or by nature (e.g., random inheritance of genes). Instrumental variables estimation of exposure effects is well established for continuous outcomes and to some extent for binary outcomes. It is, however, largely lacking for time-to-event outcomes because of complications due to censoring and survivorship bias. In this article, we make a novel proposal under a class of structural cumulative survival models which parameterize time-varying effects of a point exposure directly on the scale of the survival function; these models are essentially equivalent with a semi-parametric variant of the instrumental variables additive hazards model. We propose a class of recursive instrumental variable estimators for these exposure effects, and derive their large sample properties along with inferential tools. We examine the performance of the proposed method in simulation studies and illustrate it in a Mendelian randomization study to evaluate the effect of diabetes on mortality using data from the Health and Retirement Study. We further use the proposed method to investigate potential benefit from breast cancer screening on subsequent breast cancer mortality based on the HIP-study. © 2017, The International Biometric Society.
Pega, Frank
2016-05-01
Social epidemiologists are interested in determining the causal relationship between income and health. Natural experiments in which individuals or groups receive income randomly or quasi-randomly from financial credits (e.g., tax credits or cash transfers) are increasingly being analyzed using instrumental variable analysis. For example, in this issue of the Journal, Hamad and Rehkopf (Am J Epidemiol. 2016;183(9):775-784) used an in-work tax credit called the Earned Income Tax Credit as an instrument to estimate the association between income and child development. However, under certain conditions, the use of financial credits as instruments could violate 2 key instrumental variable analytic assumptions. First, some financial credits may directly influence health, for example, through increasing a psychological sense of welfare security. Second, financial credits and health may have several unmeasured common causes, such as politics, other social policies, and the motivation to maximize the credit. If epidemiologists pursue such instrumental variable analyses, using the amount of an unconditional, universal credit that an individual or group has received as the instrument may produce the most conceptually convincing and generalizable evidence. However, other natural income experiments (e.g., lottery winnings) and other methods that allow better adjustment for confounding might be more promising approaches for estimating the causal relationship between income and health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
Puhani, Patrick A.; Weber, Andrea M.
2006-01-01
We estimate the effect of age of school entry on educational outcomes using two different data sets for Germany, sampling pupils at the end of primary school and in the middle of secondary school. Results are obtained based on instrumental variable estimation exploiting the exogenous variation in month of birth. We find robust and significant positive effects on educational outcomes for pupils who enter school at seven instead of six years of age: Test scores at the end of primary school incr...
Clifton, G. T.; Merrill, J. T.; Johnson, B. J.; Oltmans, S. J.
2009-12-01
Ozonesondes provide information on the ozone distribution up to the middle stratosphere. Ozone profiles often feature layers, with vertically discrete maxima and minima in the mixing ratio. Layers are especially common in the UT/LS regions and originate from wave breaking, shearing and other transport processes. ECC sondes, however, have a moderate response time to significant changes in ozone. A sonde can ascend over 350 meters before it responds fully to a step change in ozone. This results in an overestimate of the altitude assigned to layers and an underestimate of the underlying variability in the amount of ozone. An estimate of the response time is made for each instrument during the preparation for flight, but the profile data are typically not processed to account for the response. Here we present a method of categorizing the response time of ECC instruments and an analysis of a low-pass filter approximation to the effects on profile data. Exponential functions were fit to the step-up and step-down responses using laboratory data. The resulting response time estimates were consistent with results from standard procedures, with the up-step response time exceeding the down-step value somewhat. A single-pole Butterworth filter that approximates the instrumental effect was used with synthetic layered profiles to make first-order estimates of the impact of the finite response time. Using a layer analysis program previously applied to observed profiles we find that instrumental effects can attenuate ozone variability by 20-45% in individual layers, but that the vertical offset in layer altitudes is moderate, up to about 150 meters. We will present results obtained using this approach, coupled with data on the distribution of layer characteristics found using the layer analysis procedure on profiles from Narragansett, Rhode Island and other US sites to quantify the impact on overall variability estimates given ambient distributions of layer occurrence, thickness
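The effect described can be reproduced in a few lines. A sketch under assumed numbers (a ~5 m/s ascent and a ~25 s response time give an e-folding length near 125 m; the Gaussian layer is synthetic): a single-pole low-pass filter smooths the profile, attenuating the layer maximum and shifting it upward.

```python
import numpy as np

# Synthetic ozone profile: 50 ppbv background plus a Gaussian layer near 2 km
dz = 10.0                                   # vertical grid spacing (m)
z = np.arange(0.0, 5000.0, dz)
profile = 50 + 50 * np.exp(-((z - 2000) / 150) ** 2)

# Single-pole low-pass filter approximating the ECC response
efold = 125.0                               # assumed e-folding length (m)
alpha = dz / (efold + dz)                   # discrete smoothing coefficient
measured = np.empty_like(profile)
measured[0] = profile[0]
for i in range(1, len(profile)):
    measured[i] = alpha * profile[i] + (1 - alpha) * measured[i - 1]

peak_atten = 1 - (measured.max() - 50) / (profile.max() - 50)
peak_shift = z[measured.argmax()] - z[profile.argmax()]
print(f"peak attenuated by {peak_atten:.0%}, shifted up {peak_shift:.0f} m")
```

The attenuation and upward shift produced this way are of the same order as the 20-45% and up-to-150 m figures quoted above.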
Staley, James R.
2017-01-01
Mendelian randomization, the use of genetic variants as instrumental variables (IVs), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167
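The stratification step can be sketched. An illustrative simulation (parameters invented, with a quadratic exposure-outcome relationship): strata are formed on the "IV-free" exposure, i.e. the exposure minus its genetically predicted part, to avoid collider bias, and a ratio-based LACE is computed in each stratum.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300_000
U = rng.normal(size=n)                   # unmeasured confounder
Z = rng.binomial(2, 0.3, size=n)         # variant carrying 0/1/2 alleles
X = 0.5 * Z + U + rng.normal(size=n)     # exposure
Y = 0.25 * X**2 + U + rng.normal(size=n) # quadratic exposure-outcome shape

# Stratify on the "IV-free" exposure (exposure minus the part explained by Z)
# rather than on X itself, which would induce collider bias
g1, g0 = np.polyfit(Z, X, 1)
x0 = X - (g0 + g1 * Z)
idx = np.digitize(x0, np.quantile(x0, [0.25, 0.5, 0.75]))

# Ratio-based LACE within each stratum
lace = []
for s in range(4):
    m = idx == s
    lace.append(np.cov(Z[m], Y[m])[0, 1] / np.cov(Z[m], X[m])[0, 1])
print([round(b, 2) for b in lace])
```

Under a convex relationship the LACE estimates rise across strata; the fractional polynomial or piecewise linear stage would then summarize that trend.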
Instrumental Variables in the Long Run
Casey, Gregory; Klemp, Marc Patrick Brag
2017-01-01
In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression... We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have... quantitative implications for the field of long-run economic growth.
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
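The two ways of varying the window can be sketched for a Gaussian kernel: a fixed bandwidth versus a bandwidth that varies by sample observation, here set to the distance to the k-th nearest neighbour (the choices h = 0.4 and k = 30 are ad hoc).

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 500)

def kde(x, data, h):
    """Gaussian kernel density estimate; h may be a scalar (fixed window)
    or a per-observation array (variable window)."""
    u = (x[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h), axis=1)

# Variable window: bandwidth at each sample observation proportional to the
# distance to its k-th nearest neighbour
k = 30
dists = np.sort(np.abs(data[:, None] - data[None, :]), axis=1)
h_var = dists[:, k]

grid = np.linspace(-4, 4, 81)
f_fixed = kde(grid, data, 0.4)   # fixed window width
f_var = kde(grid, data, h_var)   # window varies by sample observation
print(round(f_fixed[40], 3), round(f_var[40], 3))
```

Both estimates integrate to (approximately) one; the nearest-neighbour version widens the window where data are sparse, which is the tail behavior the paper scrutinizes.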
Evropi Theodoratou
Vitamin D deficiency has been associated with several common diseases, including cancer, and is being investigated as a possible risk factor for these conditions. We reported the striking prevalence of vitamin D deficiency in Scotland. Previous epidemiological studies have reported an association between low dietary vitamin D and colorectal cancer (CRC). Using a case-control study design, we tested the association between plasma 25-hydroxy-vitamin D (25-OHD) and CRC (2,001 cases, 2,237 controls). To determine whether plasma 25-OHD levels are causally linked to CRC risk, we applied the control function instrumental variable (IV) method of the Mendelian randomization (MR) approach using four single nucleotide polymorphisms (rs2282679, rs12785878, rs10741657, rs6013897) previously shown to be associated with plasma 25-OHD. Low plasma 25-OHD levels were associated with CRC risk in the crude model (odds ratio (OR): 0.76, 95% confidence interval (CI): 0.71, 0.81, p: 1.4×10^-14) and after adjusting for age, sex and other confounding factors. Using an allele score that combined all four SNPs as the IV, the estimated causal effect was OR 1.16 (95% CI: 0.60, 2.23), whilst it was 0.94 (95% CI: 0.46, 1.91) and 0.93 (95% CI: 0.53, 1.63) when using an upstream (rs12785878, rs10741657) and a downstream (rs2282679, rs6013897) allele score, respectively. 25-OHD levels were inversely associated with CRC risk, in agreement with recent meta-analyses. The fact that this finding was not replicated when the MR approach was employed might be due to weak instruments, giving low power to demonstrate an effect (<0.35). The prevalence and degree of vitamin D deficiency amongst individuals living in northerly latitudes is of considerable importance because of its relationship to disease. To elucidate the effect of vitamin D on CRC risk, additional large studies of vitamin D and CRC risk are required and/or the application of alternative methods that are less sensitive to weak instrument
National Oceanic and Atmospheric Administration, Department of Commerce — A method for estimation of Doppler spectrum, its moments, and polarimetric variables on pulsed weather radars which uses over sampled echo components at a rate...
Robotic-surgical instrument wrist pose estimation.
Fabel, Stephan; Baek, Kyungim; Berkelman, Peter
2010-01-01
The Compact Lightweight Surgery Robot from the University of Hawaii includes two teleoperated instruments and one endoscope manipulator which act in accord to perform assisted interventional medicine. The relative positions and orientations of the robotic instruments and endoscope must be known to the teleoperation system so that the directions of the instrument motions can be controlled to correspond closely to the directions of the motions of the master manipulators, as seen by the endoscope and displayed to the surgeon. If the manipulator bases are mounted in known locations and all manipulator joint variables are known, then the necessary coordinate transformations between the master and slave manipulators can be easily computed. The versatility and ease of use of the system can be increased, however, by allowing the endoscope or instrument manipulator bases to be moved to arbitrary positions and orientations without reinitializing each manipulator or remeasuring their relative positions. The aim of this work is to find the pose of the instrument end effectors using the video image from the endoscope camera. The P3P pose estimation algorithm is used with a Levenberg-Marquardt optimization to ensure convergence. The correct transformations between the master and slave coordinate frames can then be calculated and updated when the bases of the endoscope or instrument manipulators are moved to new, unknown, positions at any time before or during surgical procedures.
Econometrics in outcomes research: the use of instrumental variables.
Newhouse, J P; McClellan, M
1998-01-01
We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations when a controlled trial has not or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.
Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis
2017-06-23
Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). In the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved.
On the Interpretation of Instrumental Variables in the Presence of Specification Errors
P.A.V.B. Swamy
2015-01-01
The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.
Trani, Jean-Francois; Bakhshi, Parul; Brown, Derek; Lopez, Dominique; Gall, Fiona
2018-05-25
The capability approach pioneered by Amartya Sen and Martha Nussbaum offers a new paradigm to examine disability, poverty and their complex associations. Disability is hence defined as a situation in which a person with an impairment faces various forms of restrictions in functionings and capabilities. Additionally, poverty is not the mere absence of income but a lack of ability to achieve essential functionings; disability is consequently the poverty of capabilities of persons with impairment. It is the lack of opportunities in a given context and agency that leads to persons with disabilities being poorer than other social groups. Consequently, poverty of people with disabilities comprises complex processes of social exclusion and disempowerment. Despite growing evidence that persons with disabilities face higher levels of poverty, the literature from low- and middle-income countries that analyzes the causal link between disability and poverty remains limited. Drawing on data from a large case-control field survey carried out between December 24th, 2013 and February 16th, 2014 in Tunisia and between November 4th, 2013 and June 12th, 2014 in Morocco, we examined the effect of impairment on various basic capabilities, health-related quality of life and multidimensional poverty - indicators of poor wellbeing - in Morocco and Tunisia. To demonstrate a causal link between impairment and deprivation of capabilities, we used instrumental variable regression analyses. In both countries, we found lower access to jobs for persons with impairment. Health-related quality of life was also lower for this group, who also faced a higher risk of multidimensional poverty. There was no significant direct effect of impairment on access to school and acquiring literacy in both countries, or on access to health care and expenses in Tunisia, while having an impairment reduced access to healthcare facilities and out-of-pocket expenditures in Morocco. These results suggest that
Reardon, Sean F.; Unlu, Faith; Zhu, Pei; Bloom, Howard
2013-01-01
We explore the use of instrumental variables (IV) analysis with a multi-site randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, as assumption known in the instrumental variables literature as the…
CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE
Nino Serdarevic
2012-10-01
This paper presents research results on the financial reporting quality of BIH firms, utilizing the empirical relation between accounting conservatism, generated in critical accounting policy choices, and management's abilities in estimation and the prediction power of domestic private-sector accounting. Primary research is conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and which influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting Comprehensive Income (in revised IAS 1) and FAS 157 Fair Value Measurement. The CAPCBIH variable, which resulted in very poor performance, indicates a considerable failure to recognize environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.
Combining within and between instrument information to estimate precision
International Nuclear Information System (INIS)
Jost, J.W.; Devary, J.L.; Ward, J.E.
1980-01-01
When two instruments, both having replicated measurements, are used to measure the same set of items, between-instrument information may be used to augment the within-instrument precision estimate. A method is presented which combines the within- and between-instrument information to obtain an unbiased and minimum-variance estimate of instrument precision. The method does not assume the instruments have equal precision.
Instrumental variable methods in comparative safety and effectiveness research.
Brookhart, M Alan; Rassen, Jeremy A; Schneeweiss, Sebastian
2010-06-01
Instrumental variable (IV) methods have been proposed as a potential approach to the common problem of uncontrolled confounding in comparative studies of medical interventions, but IV methods are unfamiliar to many researchers. The goal of this article is to provide a non-technical, practical introduction to IV methods for comparative safety and effectiveness research. We outline the principles and basic assumptions necessary for valid IV estimation, discuss how to interpret the results of an IV study, provide a review of instruments that have been used in comparative effectiveness research, and suggest some minimal reporting standards for an IV analysis. Finally, we offer our perspective of the role of IV estimation vis-à-vis more traditional approaches based on statistical modeling of the exposure or outcome. We anticipate that IV methods will often be underpowered for drug safety studies of very rare outcomes, but may be potentially useful in studies of intended effects where uncontrolled confounding may be substantial.
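The core IV logic outlined above can be sketched with two-stage least squares (2SLS) on simulated data. Everything here (variable names, effect sizes, the confounding structure) is illustrative and not taken from the article:

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """2SLS estimate of the causal effect of exposure x on outcome y,
    using z as the instrument. Inputs are 1-D arrays; intercepts are
    added internally. With one instrument this equals the ratio (Wald)
    estimator cov(z, y) / cov(z, x)."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: regress the exposure on the instrument.
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the fitted exposure.
    Xh = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(Xh, y, rcond=None)
    return beta[1]

# Simulated data with an unmeasured confounder u: OLS is biased, IV is not.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)          # exposure
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect = 2

beta_iv = two_stage_least_squares(z, x, y)
beta_ols = np.cov(x, y)[0, 1] / np.var(x)     # inflated by confounding
# beta_iv is close to 2.0; beta_ols is about 3.3 in expectation
```

The confounder u raises both x and y, so the naive regression overstates the effect; the instrument affects y only through x, which is exactly the exclusion restriction the abstract describes.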
Evaluating disease management programme effectiveness: an introduction to instrumental variables.
Linden, Ariel; Adams, John L
2006-04-01
This paper introduces the concept of instrumental variables (IVs) as a means of providing an unbiased estimate of treatment effects in evaluating disease management (DM) programme effectiveness. Model development is described using zip codes as the IV. Three diabetes DM outcomes were evaluated: annual diabetes costs, emergency department (ED) visits and hospital days. Both ordinary least squares (OLS) and IV estimates showed a significant treatment effect for diabetes costs (P = 0.011) but neither model produced a significant treatment effect for ED visits. However, the IV estimate showed a significant treatment effect for hospital days (P = 0.006) whereas the OLS model did not. These results illustrate the utility of IV estimation when the OLS model is sensitive to the confounding effect of hidden bias.
International Nuclear Information System (INIS)
Allafi, Walid; Uddin, Kotub; Zhang, Cheng; Mazuir Raja Ahsan Sha, Raja; Marco, James
2017-01-01
Highlights: •Off-line estimation approach for the continuous-time domain for a non-invertible function. •Model reformulated to multi-input-single-output; nonlinearity described by a sigmoid. •Method directly estimates parameters of a nonlinear ECM from the measured data. •Iterative on-line technique leads to smoother convergence. •The model is validated off-line and on-line using an NCA battery. -- Abstract: The accuracy of identifying the parameters of models describing lithium-ion batteries (LIBs) in typical battery management system (BMS) applications is critical to the estimation of key states such as the state of charge (SoC) and state of health (SoH). In applications such as electric vehicles (EVs), where LIBs are subjected to highly demanding cycles of operation and varying environmental conditions leading to non-trivial interactions of ageing stress factors, this identification is more challenging. This paper proposes an algorithm that directly estimates the parameters of a nonlinear battery model from measured input and output data in the continuous time domain. The simplified refined instrumental variable method is extended to estimate the parameters of a Wiener model where there is no requirement for the nonlinear function to be invertible. To account for nonlinear battery dynamics, the typical linear equivalent circuit model (ECM) is enhanced by a block-oriented Wiener configuration in which the nonlinear memoryless block following the typical ECM is defined to be a sigmoid static nonlinearity. The nonlinear Wiener model is reformulated in the form of a multi-input, single-output linear model. This linear form allows the parameters of the nonlinear model to be estimated using any linear estimator, such as the well-established least squares (LS) algorithm. In this paper, the recursive least squares (RLS) method is adopted for online parameter estimation. The approach was validated on experimental data measured from an 18650-type Graphite
Sensitivity analysis and power for instrumental variable studies.
Wang, Xuran; Jiang, Yang; Zhang, Nancy R; Small, Dylan S
2018-03-31
In observational studies to estimate treatment effects, unmeasured confounding is often a concern. The instrumental variable (IV) method can control for unmeasured confounding when there is a valid IV. To be a valid IV, a variable needs to be independent of unmeasured confounders and only affect the outcome through affecting the treatment. When applying the IV method, there is often concern that a putative IV is invalid to some degree. We present an approach to sensitivity analysis for the IV method which examines the sensitivity of inferences to violations of IV validity. Specifically, we consider sensitivity when the magnitude of association between the putative IV and the unmeasured confounders and the direct effect of the IV on the outcome are limited in magnitude by a sensitivity parameter. Our approach is based on extending the Anderson-Rubin test and is valid regardless of the strength of the instrument. A power formula for this sensitivity analysis is presented. We illustrate its usage via examples about Mendelian randomization studies and its implications via a comparison of using rare versus common genetic variants as instruments. © 2018, The International Biometric Society.
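The Anderson-Rubin test that the authors extend can be illustrated in its basic single-instrument form. The sketch below uses a large-sample normal approximation in place of the exact F reference distribution and is only an illustration of the underlying idea, not the authors' sensitivity-analysis procedure:

```python
import math
import numpy as np

def anderson_rubin_pvalue(z, x, y, beta0):
    """Large-sample Anderson-Rubin test of H0: causal effect = beta0,
    for a single instrument z. Under H0, the residual y - beta0*x is
    unrelated to z, so we test the slope of that residual on z. The
    test's validity does not rely on instrument strength."""
    r = y - beta0 * x
    n = len(r)
    zc = z - z.mean()
    rc = r - r.mean()
    slope = (zc @ rc) / (zc @ zc)
    resid = rc - slope * zc
    sigma2 = (resid @ resid) / (n - 2)
    t = slope / math.sqrt(sigma2 / (zc @ zc))
    return math.erfc(abs(t) / math.sqrt(2))   # two-sided, normal approx

# Simulated example with true causal effect 2:
rng = np.random.default_rng(1)
n = 50_000
z = rng.normal(size=n)
u = rng.normal(size=n)                        # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)

p_null = anderson_rubin_pvalue(z, x, y, 0.0)  # H0: no effect -> rejected
p_true = anderson_rubin_pvalue(z, x, y, 2.0)  # H0: effect = 2 -> retained
```

Inverting this test over a grid of beta0 values yields a confidence set that remains valid for weak instruments, which is the property the sensitivity analysis in the paper builds on.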
Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.
Pizer, Steven D
2016-04-01
To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample Stata code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
The productivity of mental health care: an instrumental variable approach.
Lu, Mingshan
1999-06-01
BACKGROUND: Like many other medical technologies and treatments, there is a lack of reliable evidence on treatment effectiveness of mental health care. Increasingly, data from non-experimental settings are being used to study the effect of treatment. However, as in a number of studies using non-experimental data, a simple regression of outcome on treatment shows a puzzling negative and significant impact of mental health care on the improvement of mental health status, even after including a large number of potential control variables. The central problem in interpreting evidence from real-world or non-experimental settings is, therefore, the potential "selection bias" problem in observational data set. In other words, the choice/quantity of mental health care may be correlated with other variables, particularly unobserved variables, that influence outcome and this may lead to a bias in the estimate of the effect of care in conventional models. AIMS OF THE STUDY: This paper addresses the issue of estimating treatment effects using an observational data set. The information in a mental health data set obtained from two waves of data in Puerto Rico is explored. The results using conventional models - in which the potential selection bias is not controlled - and that from instrumental variable (IV) models - which is what was proposed in this study to correct for the contaminated estimation from conventional models - are compared. METHODS: Treatment effectiveness is estimated in a production function framework. Effectiveness is measured as the improvement in mental health status. To control for the potential selection bias problem, IV approaches are employed. The essence of the IV method is to use one or more instruments, which are observable factors that influence treatment but do not directly affect patient outcomes, to isolate the effect of treatment variation that is independent of unobserved patient characteristics. The data used in this study are the first (1992
Power calculator for instrumental variable analysis in pharmacoepidemiology.
Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M
2017-10-01
Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived before being validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association
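The abstract does not reproduce the derived formula. A generic large-sample power approximation for an IV analysis, of the kind such calculators build on, can be sketched as follows; the parametrization here is illustrative and is not the paper's pharmacoepidemiology-specific formula:

```python
import math
from statistics import NormalDist

def iv_power(n, beta, rho2, alpha=0.05):
    """Approximate two-sided power of an IV analysis.

    n: sample size; rho2: proportion of exposure variance explained by
    the instrument (instrument strength); beta: causal effect in
    standardized units. Generic normal approximation: the asymptotic
    z-statistic has approximate non-centrality |beta| * sqrt(n * rho2)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    ncp = abs(beta) * math.sqrt(n * rho2)
    return nd.cdf(ncp - z_crit)

print(round(iv_power(10_000, 0.1, 0.02), 3))  # ~0.29 with these inputs
```

The approximation makes the field's trade-off visible: power grows with the product n * rho2, which is why pharmacoepidemiological studies, with their stronger instruments and larger effects, need a different parametrization than Mendelian randomization calculators assume.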
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been devoted to consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
Reardon, Sean F.; Unlu, Fatih; Zhu, Pei; Bloom, Howard S.
2014-01-01
We explore the use of instrumental variables (IV) analysis with a multisite randomized trial to estimate the effect of a mediating variable on an outcome in cases where it can be assumed that the observed mediator is the only mechanism linking treatment assignment to outcomes, an assumption known in the IV literature as the exclusion restriction.…
Estimation of biochemical variables using quantum-behaved particle ...
African Journals Online (AJOL)
To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experimental results of the L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...
Swanson, Sonja A; Labrecque, Jeremy; Hernán, Miguel A
2018-05-02
Sometimes instrumental variable methods are used to test whether a causal effect is null rather than to estimate the magnitude of a causal effect. However, when instrumental variable methods are applied to time-varying exposures, as in many Mendelian randomization studies, it is unclear what causal null hypothesis is tested. Here, we consider different versions of causal null hypotheses for time-varying exposures, show that the instrumental variable conditions alone are insufficient to test some of them, and describe additional assumptions that can be made to test a wider range of causal null hypotheses, including both sharp and average causal null hypotheses. Implications for interpretation and reporting of instrumental variable results are discussed.
Observer variability in estimating numbers: An experiment
Erwin, R.M.
1982-01-01
Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.
Instrumented Impact Testing: Influence of Machine Variables and Specimen Position
Energy Technology Data Exchange (ETDEWEB)
Lucon, E.; McCowan, C. N.; Santoyo, R. A.
2008-09-15
An investigation has been conducted on the influence of impact machine variables and specimen positioning on characteristic forces and absorbed energies from instrumented Charpy tests. Brittle and ductile fracture behavior has been investigated by testing NIST reference samples of low, high and super-high energy levels. Test machine variables included tightness of foundation, anvil and striker bolts, and the position of the center of percussion with respect to the center of strike. For specimen positioning, we tested samples which had been moved away or sideways with respect to the anvils. In order to assess the influence of the various factors, we compared mean values in the reference (unaltered) and altered conditions; for machine variables, t-test analyses were also performed in order to evaluate the statistical significance of the observed differences. Our results indicate that the only circumstance which resulted in variations larger than 5 percent for both brittle and ductile specimens is when the sample is not in contact with the anvils. These findings should be taken into account in future revisions of instrumented Charpy test standards.
The contextual effects of social capital on health: a cross-national instrumental variable analysis.
Kim, Daniel; Baum, Christopher F; Ganz, Michael L; Subramanian, S V; Kawachi, Ichiro
2011-12-01
Past research on the associations between area-level/contextual social capital and health has produced conflicting evidence. However, interpreting this rapidly growing literature is difficult because estimates using conventional regression are prone to major sources of bias including residual confounding and reverse causation. Instrumental variable (IV) analysis can reduce such bias. Using data on up to 167,344 adults in 64 nations in the European and World Values Surveys and applying IV and ordinary least squares (OLS) regression, we estimated the contextual effects of country-level social trust on individual self-rated health. We further explored whether these associations varied by gender and individual levels of trust. Using OLS regression, we found higher average country-level trust to be associated with better self-rated health in both women and men. Instrumental variable analysis yielded qualitatively similar results, although the estimates were more than double in size in both sexes when country population density and corruption were used as instruments. The estimated health effects of raising the percentage of a country's population that trusts others by 10 percentage points were at least as large as the estimated health effects of an individual developing trust in others. These findings were robust to alternative model specifications and instruments. Conventional regression and to a lesser extent IV analysis suggested that these associations are more salient in women and in women reporting social trust. In a large cross-national study, our findings, including those using instrumental variables, support the presence of beneficial effects of higher country-level trust on self-rated health. Previous findings for contextual social capital using traditional regression may have underestimated the true associations. Given the close linkages between self-rated health and all-cause mortality, the public health gains from raising social capital within and across
Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable
Albertus Girik Allo
2016-01-01
Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. Applying institutional quality as an instrumental variable for Foreign Direct Investment (FDI), we find that it significantly influences economic growth. This study uses two data periods, 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of fin...
Essential climatic variables estimation with satellite imagery
Kolotii, A.; Kussul, N.; Shelestov, A.; Lavreniuk, M. S.
2016-12-01
According to the Sendai Framework for Disaster Risk Reduction 2015-2030, Leaf Area Index (LAI) is considered one of the essential climatic variables. This variable represents the amount of leaf material in ecosystems and controls the links between the biosphere and atmosphere through various processes, enabling monitoring and quantitative assessment of vegetation state. LAI has added value for such important global resource monitoring tasks as drought mapping and crop yield forecasting with the use of data from different sources [1-2]. Remote sensing data from space can be used to estimate such biophysical parameters at regional and national scale. High-temporal-resolution satellite imagery is usually required to capture the main parameters of crop growth [3]. The Sentinel-2 mission, launched in 2015 by ESA, is a source of high spatial and temporal resolution satellite imagery for mapping biophysical parameters. Products created with the use of the automated Sen2-Agri system, deployed during the Sen2-Agri country-level demonstration project for Ukraine, will be compared with our independent results of biophysical parameter mapping. References: Shelestov, A., Kolotii, A., Camacho, F., Skakun, S., Kussul, O., Lavreniuk, M., & Kostetsky, O. (2015, July). Mapping of biophysical parameters based on high resolution EO imagery for JECAM test site in Ukraine. In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 1733-1736. Kolotii, A., Kussul, N., Shelestov, A., Skakun, S., Yailymov, B., Basarab, R., ... & Ostapenko, V. (2015). Comparison of biophysical and satellite predictors for wheat yield forecasting in Ukraine. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 40(7), 39-44. Kussul, N., Lemoine, G., Gallego, F. J., Skakun, S. V., Lavreniuk, M., & Shelestov, A. Y. Parcel-Based Crop Classification in Ukraine Using Landsat-8 Data and Sentinel-1A Data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(6), 2500-2508.
Reliability Estimation for Digital Instrument/Control System
International Nuclear Information System (INIS)
Yang, Yaguang; Sydnor, Russell
2011-01-01
Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique, and the reliability estimation method for a software system will differ from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the reliability of the entire DI and C system, using a recently developed software reliability estimation method and a traditional hardware reliability estimation method.
Estimates of genetic variability in a mutated population of Triticum aestivum
International Nuclear Information System (INIS)
Larik, A.S.; Siddiqui, K.A.; Soomoro, A.H.
1980-01-01
M2 populations of four cultivars of Mexican origin (Mexipak-65, Nayab, Pak-70 and 6134 x C-271) and two locally bred cultivars (H-68 and C-591) of bread wheat, Triticum aestivum (2n = 6x = AA BB DD), derived from six irradiation treatments (gamma rays (60Co): 10, 15 and 20 kR; fast neutrons: 300, 600 and 900 rads) were critically examined for spike length, spikelets per spike, grains per spike and grain yield. Genotypes varied significantly (p ≤ 0.01) for all the characters. Irradiation treatments were instrumental in creating significant variability for all the characters, indicating that varieties did not perform uniformly across the different gamma ray and fast neutron treatments. In the M2 generation there was a considerable increase in variance for all four metrical traits. Comparisons were made between controls and treated populations. Mutagenic treatments shifted the mean values mostly in the negative direction, but the shift was neither unidirectional nor equally effective for all the characters. The differences in mean values and the nature of variability observed in M2 indicated a possible preference for selection in the M3 generation. In general, estimates of genetic variability and heritability (b.s.) increased with increasing doses of gamma rays and fast neutrons. Genetic advance exhibited a similar trend. The observed variability can be utilized in the evolution of new varieties. (authors)
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
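A minimal 1-D sketch of variable (sample-point) kernel density estimation, with each point's bandwidth set by the distance to its k-th nearest neighbour so kernels widen where data are sparse. This is an illustrative baseline only, not the high-dimensional method developed in the work:

```python
import numpy as np

def variable_kde(train, query, k=20):
    """Sample-point variable kernel density estimate in 1-D with
    Gaussian kernels. Bandwidth h[i] is the distance from train[i] to
    its k-th nearest neighbour; bandwidth selection in high-dimensional
    feature spaces, the paper's focus, is the hard part and is not
    addressed here."""
    train = np.asarray(train, dtype=float)
    # Pairwise distances; after sorting, column 0 is the self-distance 0.
    dist = np.abs(train[:, None] - train[None, :])
    dist.sort(axis=1)
    h = dist[:, k]
    out = []
    for q in np.atleast_1d(query):
        kern = np.exp(-0.5 * ((q - train) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        out.append(kern.mean())
    return np.array(out)

rng = np.random.default_rng(2)
sample = rng.normal(size=2000)
f = variable_kde(sample, [0.0, 3.0])
# f[0] is near the true N(0,1) density at 0 (about 0.399); f[1] is far smaller
```

Compared with a fixed-bandwidth estimator, the adaptive bandwidths reduce the spurious bumps that isolated points in the tails would otherwise produce.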
Persson, Eva K; Dykes, Anna-Karin
2009-08-01
To evaluate dimensions of both parents' postnatal sense of security the first week after childbirth, and to determine associations between the PPSS instrument and different sociodemographic and situational background variables. Evaluative, cross-sectional design. 113 mothers and 99 fathers with children live born at term, from five hospitals in southern Sweden. Mothers and fathers had similar feelings concerning postnatal sense of security. Of the dimensions in the PPSS instrument, a sense of midwives'/nurses' empowering behaviour, a sense of one's own general well-being and a sense of the mother's well-being as experienced by the father were the most important dimensions for parents' experienced security. A sense of affinity within the family (for both parents) and a sense of manageable breast feeding (for mothers) were not significantly associated with their experienced security. A sense of participation during pregnancy and general anxiety were significantly associated background variables for postnatal sense of security for both parents. For the mothers, parity and a sense that the father was participating during pregnancy were also significantly associated. More focus on parents' participation during pregnancy, as well as midwives'/nurses' empowering behaviour during the postnatal period, will be beneficial for both parents' postnatal sense of security.
Instrument Variables for Reducing Noise in Parallel MRI Reconstruction
Directory of Open Access Journals (Sweden)
Yuchou Chang
2017-01-01
Generalized autocalibrating partially parallel acquisition (GRAPPA) has been a widely used parallel MRI technique. However, noise deteriorates the reconstructed image as the reduction factor increases, or even at low reduction factors for some noisy datasets. Noise originating in the scanner propagates noise-related errors through the fitting and interpolation procedures of GRAPPA, distorting the quality of the final reconstructed image. Our basic idea for improving GRAPPA is to remove noise from a system-identification perspective. In this paper, we first analyze the GRAPPA noise problem from a noisy input-output system perspective; then, a new framework based on the errors-in-variables (EIV) model is developed for analyzing the noise generation mechanism in GRAPPA and for designing a concrete method, instrumental variables (IV) GRAPPA, to remove noise. The proposed EIV framework opens the possibility that noiseless GRAPPA reconstruction could be achieved by existing methods that solve the EIV problem other than the IV method. Experimental results show that the proposed reconstruction algorithm removes noise better than conventional GRAPPA, as validated with both phantom and in vivo brain data.
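The errors-in-variables mechanism invoked above can be seen in miniature outside of MRI: when a regressor is observed with noise, least squares is attenuated toward zero, while an instrument that is correlated with the true signal but not with the measurement noise recovers the coefficient. The following sketch uses synthetic data and a repeated noisy measurement as the instrument; none of it is from the paper:

```python
import random

def ls_slope(a, b):
    """Least-squares slope of b on a: cov(a, b) / var(a)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

def iv_slope(z, x, y):
    """Instrumental-variable slope: cov(z, y) / cov(z, x)."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    czy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    czx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return czy / czx

random.seed(7)
n = 200_000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]        # latent noise-free signal
x_obs = [xt + random.gauss(0.0, 1.0) for xt in x_true]     # noisy regressor (errors in variables)
x_rep = [xt + random.gauss(0.0, 1.0) for xt in x_true]     # independent repeat, used as instrument
y = [2.0 * xt + random.gauss(0.0, 0.5) for xt in x_true]   # true coefficient is 2.0

beta_ols = ls_slope(x_obs, y)        # attenuated toward 2.0 * var(x_true) / var(x_obs) = 1.0
beta_iv = iv_slope(x_rep, x_obs, y)  # consistent for 2.0
```

Here least squares converges to half the true effect while the instrumented estimate converges to the truth; the paper's IV-GRAPPA applies the same principle to the noisy calibration fit.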
Estimates and sampling schemes for the instrumentation of accountability systems
International Nuclear Information System (INIS)
Jewell, W.S.; Kwiatkowski, J.W.
1976-10-01
The problem of estimation of a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-groups error variances may vary from group to group at each level of the structure; minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered
Cawley, John
2015-01-01
The method of instrumental variables (IV) is useful for estimating causal effects. Intuitively, it exploits exogenous variation in the treatment, sometimes called natural experiments or instruments. This study reviews the literature in health-services research and medical research that applies the method of instrumental variables, documents trends in its use, and offers examples of various types of instruments. A literature search of the PubMed and EconLit research databases for English-language journal articles published after 1990 yielded a total of 522 original research articles. Citation counts for each article were derived from the Web of Science. A selective review was conducted, with articles prioritized based on number of citations, validity and power of the instrument, and type of instrument. The average annual number of papers in health-services research and medical research that apply the method of instrumental variables rose from 1.2 in 1991-1995 to 41.8 in 2006-2010. Commonly used instruments (natural experiments) in health and medicine are relative distance to a medical care provider offering the treatment and the medical care provider's historic tendency to administer the treatment. Less common but still noteworthy instruments include randomization of treatment for reasons other than research, randomized encouragement to undertake the treatment, day of week of admission as an instrument for waiting time for surgery, and genes as an instrument for whether the respondent has a heritable condition. The use of the method of IV has increased dramatically in the past 20 years, and a wide range of instruments has been used. Applications of the method of IV have in several cases upended conventional wisdom that was based on correlations and led to important insights about health and healthcare. Future research should pursue new applications of existing instruments and search for new instruments that are powerful and valid.
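As a concrete illustration of the mechanics (not taken from the article), a two-stage least squares fit with one instrument can be simulated in a few lines: an unobserved confounder biases ordinary least squares, while projecting the treatment onto the instrument first removes the bias. All data and coefficients below are invented:

```python
import random

def ls_slope(a, b):
    """Least-squares slope of b on a: cov(a, b) / var(a)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return (sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
            / sum((ai - ma) ** 2 for ai in a))

random.seed(42)
n = 100_000
u = [random.gauss(0, 1) for _ in range(n)]    # unobserved confounder (e.g., illness severity)
z = [random.gauss(0, 1) for _ in range(n)]    # instrument (e.g., distance to a provider)
x = [0.8 * zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]        # treatment
y = [1.5 * xi + 2.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]  # outcome; true effect 1.5

beta_ols = ls_slope(x, y)         # confounded by u, overstates the effect

# two-stage least squares with a single instrument:
stage1 = ls_slope(z, x)           # stage 1: project treatment onto instrument
x_hat = [stage1 * zi for zi in z]
beta_2sls = ls_slope(x_hat, y)    # stage 2: regress outcome on the projection
```

With one instrument and one treatment, the two-stage estimate is algebraically the Wald ratio cov(z, y)/cov(z, x), which is why the projection step removes the confounder's contribution.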
Institution, Financial Sector, and Economic Growth: Use The Institutions As An Instrument Variable
Directory of Open Access Journals (Sweden)
Albertus Girik Allo
2016-06-01
Institutions have been found to play an indirect role in economic growth. This paper aims to evaluate whether the quality of institutions matters for economic growth. Applying institutions as an instrumental variable for Foreign Direct Investment (FDI), the quality of institutions significantly influences economic growth. This study uses two data periods, 1985-2013 and 2000-2013, available online from the World Bank (WB). The first data set, 1985-2013, is used to estimate the role of the financial sector in economic growth, focusing on 67 countries. The second data set, 2000-2013, determines the role of institutions in the financial sector and economic growth by applying the 2SLS estimation method. We define the institutional variables as a set of indicators: Control of Corruption, Political Stability and Absence of Violence, and Voice and Accountability; these indicators show a declining impact of FDI on economic growth.
Pollen parameters estimates of genetic variability among newly ...
African Journals Online (AJOL)
Pollen parameters estimates of genetic variability among newly selected Nigerian roselle (Hibiscus sabdariffa L.) genotypes. ... Estimates of some pollen parameters were used to assess the genetic diversity among ...
Estimating Search Engine Index Size Variability
DEFF Research Database (Denmark)
Van den Bosch, Antal; Bogers, Toine; De Kunder, Maurice
2016-01-01
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find...
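The extrapolation described above reduces, per word, to one proportion: if a word appears in a known fraction of a static corpus, the engine-reported document frequency for that word implies an index size, and the per-word estimates can be combined robustly with a median. A toy sketch with invented counts (not the paper's corpus or frequencies):

```python
def estimate_index_size(corpus_size, corpus_df, engine_df):
    """Extrapolate index size per word, then take the median across words."""
    sizes = sorted(engine_df[w] * corpus_size / corpus_df[w] for w in corpus_df)
    mid = len(sizes) // 2
    return sizes[mid] if len(sizes) % 2 else (sizes[mid - 1] + sizes[mid]) / 2

corpus_size = 1_000_000                    # pages in the static reference corpus
corpus_df = {"recipe": 12_000, "tensor": 800, "weather": 30_000}  # df in that corpus
engine_df = {"recipe": 1_200_000_000,      # hypothetical engine-reported df
             "tensor": 120_000_000,
             "weather": 3_600_000_000}

index_size = estimate_index_size(corpus_size, corpus_df, engine_df)
```

The median guards against single words whose frequency differs sharply between the corpus and the indexed Web (topic drift, spam), which would otherwise dominate a mean.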
Use of genetic variability estimates and interrelationships
African Journals Online (AJOL)
Prof. Adipala Ekwamu
of 11 agronomic and biochemical traits to water stress based on estimation of genetic ... of primary branches and 100 seed weight under W0, and number of primary ... selection of superior drought-tolerant genotype (LR1) with good yield ...
Centile estimation for a proportion response variable.
Hossain, Abu; Rigby, Robert; Stasinopoulos, Mikis; Enea, Marco
2016-03-15
This paper introduces two general models for computing centiles when the response variable Y can take values between 0 and 1, inclusive of 0 or 1. The models developed are more flexible alternatives to the beta inflated distribution. The first proposed model employs a flexible four parameter logit skew Student t (logitSST) distribution to model the response variable Y on the unit interval (0, 1), excluding 0 and 1. This model is then extended to the inflated logitSST distribution for Y on the unit interval, including 1. The second model developed in this paper is a generalised Tobit model for Y on the unit interval, including 1. Applying these two models to (1-Y) rather than Y enables modelling of Y on the unit interval including 0 rather than 1. An application of the new models to real data shows that they can provide superior fits. Copyright © 2015 John Wiley & Sons, Ltd.
26 CFR 1.1275-5 - Variable rate debt instruments.
2010-04-01
... nonpublicly traded property. A debt instrument (other than a tax-exempt obligation) that would otherwise... variations in the cost of newly borrowed funds in the currency in which the debt instrument is denominated... on the yield of actively traded personal property (within the meaning of section 1092(d)(1)). (ii...
The Effect of Birth Weight on Academic Performance: Instrumental Variable Analysis.
Lin, Shi Lin; Leung, Gabriel Matthew; Schooling, C Mary
2017-05-01
Observationally, lower birth weight is usually associated with poorer academic performance; whether this association is causal or the result of confounding is unknown. To investigate this question, we obtained an effect estimate, which can have a causal interpretation under specific assumptions, of birth weight on educational attainment using instrumental variable analysis based on single nucleotide polymorphisms determining birth weight combined with results from the Social Science Genetic Association Consortium study of 126,559 Caucasians. We similarly obtained an estimate of the effect of birth weight on academic performance in 4,067 adolescents from Hong Kong's (Chinese) Children of 1997 birth cohort (1997-2016), using twin status as an instrumental variable. Birth weight was not associated with years of schooling (per 100-g increase in birth weight, -0.006 years, 95% confidence interval (CI): -0.02, 0.01) or college completion (odds ratio = 1.00, 95% CI: 0.96, 1.03). Birth weight was also unrelated to academic performance in adolescents (per 100-g increase in birth weight, -0.004 grade, 95% CI: -0.04, 0.04) using instrumental variable analysis, although conventional regression gave a small positive association (0.02 higher grade, 95% CI: 0.01, 0.03). Observed associations of birth weight with academic performance may not be causal, suggesting that interventions should focus on the contextual factors generating this correlation. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
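For a single genetic instrument, the kind of estimate reported above is a Wald ratio of summary association coefficients (instrument-outcome over instrument-exposure), with a standard error commonly approximated by a first-order delta method. A small sketch with invented summary statistics, not the study's actual coefficients:

```python
import math

def mr_wald_ratio(bx, se_bx, by, se_by):
    """Wald ratio causal estimate and first-order delta-method standard error.

    bx, se_bx: instrument-exposure association and its standard error
    by, se_by: instrument-outcome association and its standard error
    """
    beta = by / bx
    se = math.sqrt(se_by ** 2 / bx ** 2 + by ** 2 * se_bx ** 2 / bx ** 4)
    return beta, se

# hypothetical summary statistics for one SNP
beta, se = mr_wald_ratio(bx=0.08, se_bx=0.01, by=0.004, se_by=0.012)
ci = (beta - 1.96 * se, beta + 1.96 * se)  # spans zero: no detectable effect
```

A confidence interval straddling zero, as in the birth weight results above, is the summary-statistic analogue of the null associations the authors report.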
Variable selection and estimation for longitudinal survey data
Wang, Li
2014-09-01
There is wide interest in studying longitudinal surveys where sample subjects are observed successively over time. Longitudinal surveys have been used in many areas today, for example, in the health and social sciences, to explore relationships or to identify significant variables in regression settings. This paper develops a general strategy for the model selection problem in longitudinal sample surveys. A survey-weighted penalized estimating equation approach is proposed to select significant variables and estimate the coefficients simultaneously. The proposed estimators are design consistent and perform as well as the oracle procedure when the correct submodel is known. The estimating function bootstrap is applied to obtain the standard errors of the estimated parameters with good accuracy. A fast and efficient variable selection algorithm is developed to identify significant variables for complex longitudinal survey data. Simulation examples illustrate the usefulness of the proposed methodology under various model settings and sampling designs. © 2014 Elsevier Inc.
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
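The square-root law discussed above (local bandwidth proportional to the inverse square root of the density) is easy to prototype: estimate a pilot density with a fixed bandwidth, then rescale each data point's kernel width by its pilot density. A minimal Abramson-style sketch with made-up data, not the authors' exact construction:

```python
import math

def gauss(u):
    """Standard normal kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def fixed_kde(data, h):
    """Fixed-bandwidth kernel density estimator."""
    return lambda x: sum(gauss((x - d) / h) for d in data) / (len(data) * h)

def variable_kde(data, h0):
    """Variable-bandwidth KDE: local width proportional to pilot density^(-1/2)."""
    pilot = fixed_kde(data, h0)
    p = [pilot(d) for d in data]
    g = math.exp(sum(math.log(v) for v in p) / len(p))  # geometric-mean normalizer
    widths = [h0 * math.sqrt(g / v) for v in p]         # wider kernels in sparse regions
    return lambda x: sum(gauss((x - d) / h) / h
                         for d, h in zip(data, widths)) / len(data)

data = [-1.2, -0.4, 0.0, 0.3, 1.1, 4.0]
f = variable_kde(data, h0=0.8)
```

The estimate still integrates to one; the isolated point at 4.0 simply receives a wider kernel than the clustered points near zero, which is the adaptivity the abstract's asymptotic claims concern.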
Estimating net present value variability for deterministic models
van Groenendaal, W.J.H.
1995-01-01
For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large,
Wesołowska, Karolina; Elovainio, Marko; Hintsa, Taina; Jokela, Markus; Pulkki-Råback, Laura; Pitkänen, Niina; Lipsanen, Jari; Tukiainen, Janne; Lyytikäinen, Leo-Pekka; Lehtimäki, Terho; Juonala, Markus; Raitakari, Olli; Keltikangas-Järvinen, Liisa
2017-12-01
Type 2 diabetes (T2D) has been associated with depressive symptoms, but the causal direction of this association and the underlying mechanisms, such as increased glucose levels, remain unclear. We used instrumental-variable regression with a genetic instrument (Mendelian randomization) to examine a causal role of increased glucose concentrations in the development of depressive symptoms. Data were from the population-based Cardiovascular Risk in Young Finns Study (n = 1217). Depressive symptoms were assessed in 2012 using a modified Beck Depression Inventory (BDI-I). Fasting glucose was measured concurrently with depressive symptoms. A genetic risk score for fasting glucose (based on 35 single-nucleotide polymorphisms) was used as an instrumental variable for glucose. Glucose was not associated with depressive symptoms in the standard linear regression (B = -0.04, 95% CI [-0.12, 0.04], p = .34), but the instrumental-variable regression showed an inverse association between glucose and depressive symptoms (B = -0.43, 95% CI [-0.79, -0.07], p = .020). The difference between the estimates of standard linear regression and instrumental-variable regression was significant (p = .026). Conclusion: our results suggest that the association between T2D and depressive symptoms is unlikely to be caused by increased glucose concentrations. It seems possible that T2D might be linked to depressive symptoms due to low glucose levels.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Elovainio, Marko; Heponiemi, Tarja; Kuusio, Hannamaria; Jokela, Markus; Aalto, Anna-Mari; Pekkarinen, Laura; Noro, Anja; Finne-Soveri, Harriet; Kivimäki, Mika; Sinervo, Timo
2015-02-01
The association between psychosocial work environment and employee wellbeing has repeatedly been shown. However, as environmental evaluations have typically been self-reported, the observed associations may be attributable to reporting bias. Applying instrumental-variable regression, we used staffing level (the ratio of staff to residents) as an unconfounded instrument for self-reported job demands and job strain to predict various indicators of wellbeing (perceived stress, psychological distress and sleeping problems) among 1525 registered nurses, practical nurses and nursing assistants working in elderly care wards. In ordinary regression, higher self-reported job demands and job strain were associated with increased risk of perceived stress, psychological distress and sleeping problems. The effect estimates for the associations of these psychosocial factors with perceived stress and psychological distress were greater, but less precisely estimated, in an instrumental-variables analysis which took into account only the variation in self-reported job demands and job strain that was explained by staffing level. No association between psychosocial factors and sleeping problems was observed with the instrumental-variable analysis. These results support a causal interpretation of high self-reported job demands and job strain being risk factors for employee wellbeing. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...
Optimal Inference for Instrumental Variables Regression with non-Gaussian Errors
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper is concerned with inference on the coefficient on the endogenous regressor in a linear instrumental variables model with a single endogenous regressor, nonrandom exogenous regressors and instruments, and i.i.d. errors whose distribution is unknown. It is shown that under mild smoothness...
Estimating water equivalent snow depth from related meteorological variables
International Nuclear Information System (INIS)
Steyaert, L.T.; LeDuc, S.K.; Strommen, N.D.; Nicodemus, M.L.; Guttman, N.B.
1980-05-01
Engineering design must take into consideration natural loads and stresses caused by meteorological elements, such as wind, snow, precipitation and temperature. The purpose of this study was to determine a relationship between water equivalent snow depth measurements and meteorological variables. Several predictor models were evaluated for use in estimating water equivalent values. These models include linear regression, principal component regression, and non-linear regression models. Linear, non-linear and Scandinavian models are used to generate annual water equivalent estimates for approximately 1100 cooperative data stations where predictor variables are available, but which have no water equivalent measurements. These estimates are used to develop probability estimates of snow load for each station. Map analyses for 3 probability levels are presented.
Directory of Open Access Journals (Sweden)
Johan Håkon Bjørngaard
While high body mass index is associated with an increased risk of depression and anxiety, cumulative evidence indicates that it is a protective factor for suicide. The associations from conventional observational studies of body mass index with mental health outcomes are likely to be influenced by reverse causality or confounding by ill-health. In the present study, we investigated the associations between offspring body mass index and parental anxiety, depression and suicide in order to avoid problems with reverse causality and confounding by ill-health. We used data from 32,457 mother-offspring and 27,753 father-offspring pairs from the Norwegian HUNT Study. Anxiety and depression were assessed using the Hospital Anxiety and Depression Scale, and suicide death from national registers. Associations between offspring and own body mass index and symptoms of anxiety and depression and suicide mortality were estimated using logistic and Cox regression. Causal effect estimates were obtained with a two-sample instrumental-variable approach using offspring body mass index as an instrument for parental body mass index. Both own and offspring body mass index were positively associated with depression, while the results did not indicate any substantial association between body mass index and anxiety. Although precision was low, suicide mortality was inversely associated with own body mass index, and the results from the analysis using offspring body mass index supported these results. Adjusted odds ratios per standard deviation of body mass index from the instrumental-variable analysis were 1.22 (95% CI: 1.05, 1.43) for depression and 1.10 (95% CI: 0.95, 1.27) for anxiety, and the instrumental-variable estimated hazard ratio for suicide was 0.69 (95% CI: 0.30, 1.63). The present study's results indicate that suicide mortality is inversely associated with body mass index. We also found support for a positive association between body mass index and depression, but not
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
kVp estimate intercomparison between Unfors XI, Radcal 4075 and a new CDTN multipurpose instrument
International Nuclear Information System (INIS)
Baptista Neto, A.T.; Oliveira, B.B.; Faria, L.O.
2015-01-01
In this work we compare the kVp estimates between the CDTN multipurpose instrument, Unfors XI and Radcal 4075 meters under different combinations of voltage and filtration. The non-invasive measurements made using x-ray diagnostic and interventional radiology devices show similar tendencies to increase the kVp estimate when aluminum filters are placed in the path of the x-ray beam. The results reveal that the kVp estimate made by the CDTN multipurpose instrument is always satisfactory for highly filtered beam intensities. - Highlights: • We compare the kVp estimates between the CDTN instrument and 2 different kVp meters. • The new CDTN multipurpose instrument performance was found to be satisfactory. • All instruments increase the kVp estimate with increasing additional filtration. • They are suitable for quality control routines in x-ray diagnostic radiology
Auditory/visual distance estimation: accuracy and variability
Directory of Open Access Journals (Sweden)
Paul Wallace Anderson
2014-10-01
Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Antimicrobial breakpoint estimation accounting for variability in pharmacokinetics
Directory of Open Access Journals (Sweden)
Nekka Fahima
2009-06-01
Background: Pharmacokinetic and pharmacodynamic (PK/PD) indices are increasingly being used in the microbiological field to assess the efficacy of a dosing regimen. In contrast to methods using MIC, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, they entail the use of one PK-derived value such as AUC or Cmax and may thus lead to biased efficacy information when the variability is large. The aim of the present work was to evaluate the efficacy of a treatment by adjusting classical breakpoint estimation methods to the situation of variable PK profiles. Methods and results: We propose a logical generalisation of the usual AUC methods by introducing the concept of "efficiency" for a PK profile, which involves the efficacy function as a weight. We formulated these methods for both classes of concentration- and time-dependent antibiotics. Using drug models and in silico approaches, we provide a theoretical basis for characterizing the efficiency of a PK profile under in vivo conditions. We also used the particular case of variable drug intake to assess the effect of the variable PK profiles generated and to analyse the implications for breakpoint estimation. Conclusion: Compared to traditional methods, our weighted AUC approach gives a more powerful PK/PD link and reveals, through examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems.
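The "efficiency" construct, an AUC weighted by an efficacy function, can be prototyped directly. Below, a one-compartment oral-absorption profile is weighted by an Emax term and integrated with the trapezoidal rule; all parameter values are hypothetical and chosen only to make the sketch concrete, not taken from the paper:

```python
import math

def conc(t, dose=500.0, ka=1.0, ke=0.2, v=30.0):
    """One-compartment concentration profile with first-order absorption."""
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def emax_weight(c, ec50=5.0):
    """Efficacy weight in [0, 1): fraction of maximal effect at concentration c."""
    return c / (c + ec50)

def efficiency(t_end=24.0, dt=0.01):
    """Weighted AUC: trapezoidal integral of the efficacy weight over the profile."""
    ts = [i * dt for i in range(int(t_end / dt) + 1)]
    w = [emax_weight(conc(t)) for t in ts]
    return sum((w[i] + w[i + 1]) * 0.5 * dt for i in range(len(w) - 1))

eff = efficiency()
```

Because the weight saturates, the efficiency responds to how long the profile stays above the effective range rather than to peak concentration alone, which is the contrast with plain AUC or Cmax that the abstract draws.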
Estimating variability in functional images using a synthetic resampling approach
International Nuclear Information System (INIS)
Maitra, R.; O'Sullivan, F.
1996-01-01
Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods
Directory of Open Access Journals (Sweden)
Gu NY
2008-12-01
There are limited studies on quantifying the impact of patient satisfaction with pharmacist consultation on patient medication adherence. Objectives: The objective of this study is to evaluate the effect of patient satisfaction with pharmacist consultation services on medication adherence in a large managed care organization. Methods: We analyzed data from a patient satisfaction survey of 6,916 patients who had used pharmacist consultation services in Kaiser Permanente Southern California from 1993 to 1996. We compared treating patient satisfaction as exogenous, in a single-equation probit model, with a bivariate probit model where patient satisfaction was treated as endogenous. Different sets of instrumental variables were employed, including measures of patients' emotional well-being and patients' propensity to fill their prescriptions at a non-Kaiser Permanente (KP) pharmacy. The Smith-Blundell test was used to test whether patient satisfaction was endogenous. Over-identification tests were used to test the validity of the instrumental variables. The Staiger-Stock weak instrument test was used to evaluate the explanatory power of the instrumental variables. Results: All tests indicated that the instrumental variables method was valid and the instrumental variables used have significant explanatory power. The single-equation probit model indicated that the effect of patient satisfaction with pharmacist consultation was significant (p < 0.010). However, the bivariate probit models revealed that the marginal effect of pharmacist consultation on medication adherence was significantly greater than in the single-equation probit. The effect increased from 7% to 30% (p < 0.010) after controlling for endogeneity bias. Conclusion: After appropriate adjustment for endogeneity bias, patients satisfied with their pharmacy services are substantially more likely to adhere to their medication. The results have important policy implications given the increasing focus
Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin
2018-05-01
The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainty of the instrument degradation corrections, which at some wavelengths are fairly large relative to the amount of solar cycle variability. The primary objective of this investigation has been to separate out solar cycle variability and any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to provide long-term trends in an SSI record; the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. This technique, which was validated with the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated by comparing the new solar cycle variability results from different solar cycles.
International Nuclear Information System (INIS)
Kustas, W.P.; Prueger, J.H.; Hipps, L.E.; Hatfield, J.L.; Meek, D.
1998-01-01
Studies of surface energy and water balance generally require an accurate estimate of net radiation and its spatial distribution. A project quantifying both short term and seasonal water use of shrub and grass vegetation in the Jornada Experimental Range in New Mexico prompted a study to compare net radiation observations using two types of net radiometers currently being used in research. A set of 12 REBS net radiometers were compared with each other and one Swissteco, over wet and dry surfaces in an arid landscape under clear skies. The set of REBS exhibited significant differences in output over both surfaces. However, they could be cross calibrated to yield values within 10 W m⁻², on average. There was also a significant bias between the REBS and Swissteco over a dry surface, but not over a wet one. The two makes of instrument could be made to agree under the dry conditions by using regression or autoregression techniques. However, the resulting equations would induce bias for the wet surface condition. Thus, it is not possible to cross calibrate these two makes of radiometer over the range of environmental conditions observed. This result indicates that determination of the spatial distribution of net radiation over a variable surface should be made with identical instruments which have been cross calibrated. The need still exists for development of a radiometer and calibration procedures which will produce accurate and consistent measurements over a range of surface conditions. (author)
Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako
2012-01-01
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Finite-sample instrumental variables inference using an asymptotically pivotal statistic
Bekker, P; Kleibergen, F
2003-01-01
We consider the K-statistic, Kleibergen's (2002, Econometrica 70, 1781-1803) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Whereas Kleibergen (2002) especially analyzes the asymptotic behavior of the statistic, we focus on finite-sample properties in a…
Finite-sample instrumental variables Inference using an Asymptotically Pivotal Statistic
Bekker, P.; Kleibergen, F.R.
2001-01-01
The paper considers the K-statistic, Kleibergen's (2000) adaptation of the Anderson-Rubin (AR) statistic in instrumental variables regression. Compared to the AR-statistic this K-statistic shows improved asymptotic efficiency in terms of degrees of freedom in overidentified models and yet it shares…
Klein, T.J.
2013-01-01
Recent studies debate how the unobserved dependence between the monetary return to college education and selection into college can be characterised. This paper examines this question using British data. We develop a semiparametric local instrumental variables estimator for identified features of a
MEASURING INSTRUMENT CONSTRUCTION AND VALIDATION IN ESTIMATING UNICYCLING SKILL LEVEL
Directory of Open Access Journals (Sweden)
Ivan Granić
2012-09-01
Riding the unicycle presupposes knowledge of a set of elements that describe the motor skill, or at least of a subset with which the level of that knowledge can be measured. Testing and evaluating the individual elements is time consuming. In order to design a unique composite measuring instrument that facilitates evaluation of the initial level of unicycling skill, we tested 17 recreational subjects who learned to ride the unicycle over 15 hours of training; their lack of any previous knowledge or experience was verified before the training began. At the beginning and at the end of the training they were tested on a set of 12 riding elements, recording only successful attempts, followed by a unique SLALOM test which includes the previously tested elements. The SLALOM test was found to have good metric features, and a high regression coefficient showed that it could be used instead of the 12 elements of unicycle riding skill as a uniform test to evaluate learned or existing knowledge. Because of its simplicity of administration and the possibility of testing several subjects simultaneously, the newly constructed test can be used to evaluate recreational unicycling level, and also for monitoring and programming transformation processes to develop the motor skill of unicycle riding. These advantages make it desirable to include unicycling in educational processes for learning new motor skills, as supported by the results of this research. The obtained results also indicate that the unicycle should be seriously considered as training equipment to refresh or expand recreational programs, without any fear that it is only for special people: previously learned motor skills (skiing, roller-skating, and cycling) were shown to have no effect on the results of the final testing.
Estimation of road profile variability from measured vehicle responses
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
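The inverse-problem idea in the abstract above can be illustrated with a deliberately minimal scalar Kalman filter. This is not the paper's algorithm: it assumes the road height follows a random walk and that the "vehicle response" is simply that height plus sensor noise, with noise variances chosen arbitrarily for the demonstration.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): a scalar Kalman filter that
# tracks a slowly varying road-profile height from noisy response
# measurements, assuming response = profile height + sensor noise.
rng = np.random.default_rng(0)
true_profile = np.cumsum(rng.normal(0, 0.01, 500))   # random-walk road height (m)
measured = true_profile + rng.normal(0, 0.05, 500)   # noisy "vehicle response"

q, r = 0.01**2, 0.05**2       # assumed process / measurement noise variances
x, p = 0.0, 1.0               # state estimate and its variance
estimate = np.empty_like(measured)
for i, z in enumerate(measured):
    p += q                    # predict: random-walk state model
    k = p / (p + r)           # Kalman gain
    x += k * (z - x)          # update with measurement z
    p *= (1 - k)
    estimate[i] = x

rmse_raw = np.sqrt(np.mean((measured - true_profile) ** 2))
rmse_kf = np.sqrt(np.mean((estimate - true_profile) ** 2))
```

Because the filter pools the model prediction with each measurement according to their variances, the filtered estimate tracks the profile with markedly lower error than the raw measurements.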
Multiengine Speech Processing Using SNR Estimator in Variable Noisy Environments
Directory of Open Access Journals (Sweden)
Ahmad R. Abu-El-Quran
2012-01-01
We introduce a multiengine speech processing system that can detect the location and the type of an audio signal in variable noisy environments. The system detects the location of the audio source using a microphone array; it first examines the audio to determine whether it is speech or nonspeech, then estimates the signal-to-noise ratio (SNR) using a Discrete-Valued SNR Estimator. Using this SNR value, instead of trying to adapt the speech signal to the speech processing system, we adapt the speech processing system to the surrounding environment of the captured speech signal. In this paper we introduce the Discrete-Valued SNR Estimator and a multiengine classifier, using either Multiengine Selection or Multiengine Weighted Fusion, and use SI as an example of the speech processing task. The Discrete-Valued SNR Estimator achieves an accuracy of 98.4% in characterizing the environment's SNR. Compared to a conventional single-engine SI system, the improvement in accuracy was as high as 9.0% and 10.0% for Multiengine Selection and Multiengine Weighted Fusion, respectively.
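As a rough illustration of the discrete-valued idea (the paper's actual estimator is not reproduced here), one can estimate SNR by power subtraction against a noise-only reference and snap the result to a fixed grid of SNR classes; the signal construction and grid values below are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's estimator): estimate SNR in dB by
# power subtraction against a noise-only reference segment, then snap the
# estimate to a discrete grid, mimicking a discrete-valued SNR output.
rng = np.random.default_rng(1)
noise = rng.normal(0, 1.0, 16000)                        # noise-only reference
speech = 4.0 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noisy_speech = speech + rng.normal(0, 1.0, 16000)        # captured signal

noise_power = np.mean(noise ** 2)
signal_power = max(np.mean(noisy_speech ** 2) - noise_power, 1e-12)
snr_db = 10 * np.log10(signal_power / noise_power)       # true SNR here ~9 dB

levels = np.array([-5, 0, 5, 10, 15, 20])                # assumed discrete grid (dB)
discrete_snr = levels[np.argmin(np.abs(levels - snr_db))]
```

The discrete class (rather than the raw dB value) is what a downstream engine selector would consume.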
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
kVp estimate intercomparison between Unfors XI, Radcal 4075 and a new CDTN multipurpose instrument
International Nuclear Information System (INIS)
Baptista N, A. T.; Oliveira, B. B.; Faria, L. O.
2014-08-01
This work compares results obtained using three different instruments capable of non-invasively estimating the voltage applied to the electrodes of x-ray equipment, namely the Unfors model Xi R/F, the Radcal Corporation model 4075 R/F and a new CDTN multipurpose instrument. Tests were carried out using the Pantak Seifert Model 320 Hs x-ray machine with identical setups for all instruments undergoing comparison. Irradiations were performed for different conditions of voltage and filtration. Although all instruments show a similar tendency to increase the kVp estimate when aluminum filters are placed in the path of the x-ray beam, they may all be satisfactorily adopted in quality control routines of x-ray equipment by means of estimation of the applied voltage. The importance of using properly calibrated measurement instruments operated according to the manufacturers' instructions became clear; where these requirements cannot be met, measurement-correcting methods must be applied. Using the new multipurpose instrument, the kVp estimate remains satisfactory even if the x-ray beam is filtered by approximately one tenth-value layer. (author)
Borgen, Nicolai T
2014-11-01
This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find increasing returns to college quality over the course of students' work careers, with positive returns only later in students' work careers. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.
The XRF spectrometer and the selection of analysis conditions (instrumental variables)
International Nuclear Information System (INIS)
Willis, J.P.
2002-01-01
This presentation will begin with a brief discussion of EDXRF and flat- and curved-crystal WDXRF spectrometers, contrasting the major differences between the three types. The remainder of the presentation will contain a detailed overview of the choice and settings of the many instrumental variables in a modern WDXRF spectrometer, and will discuss critically the choices facing the analyst in setting up a WDXRF spectrometer for different elements and applications. In particular it will discuss the choice of tube target (when a choice is possible), the kV and mA settings, tube filters, collimator masks, collimators, analyzing crystals, secondary collimators, detectors, pulse height selection, X-ray path medium (air, nitrogen, vacuum or helium), and counting times for peak and background positions and their effect on counting statistics and the lower limit of detection (LLD). The use of Figure of Merit (FOM) calculations to objectively choose the best combination of instrumental variables will also be discussed. This presentation will be followed by a shorter session on a subsequent day, entitled 'A Selection of XRF Conditions - Practical Session', where participants will be given the opportunity to discuss in groups the selection of the best instrumental variables for three very diverse applications. Copyright (2002) Australian X-ray Analytical Association Inc
Directory of Open Access Journals (Sweden)
Jambulingam Subramani
2013-10-01
The present paper deals with a modified ratio estimator for estimating the population mean of the study variable when the population median of the auxiliary variable is known. The bias and mean squared error of the proposed estimator are derived and compared with those of existing modified ratio estimators for certain known populations. We also derive the conditions under which the proposed estimator performs better than the existing modified ratio estimators. A numerical study confirms that the proposed modified ratio estimator performs better than the existing modified ratio estimators for certain known populations.
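The general construction can be sketched as follows: the classical ratio estimator scales the sample mean of the study variable by the known-to-sample ratio of auxiliary means, and a median-based modification folds the known population median M of the auxiliary variable into that ratio. The specific modified form below is only one common variant and may differ from the estimator actually proposed in the paper.

```python
import numpy as np

# Illustrative sketch: classical ratio estimation of a population mean,
# plus one common median-adjusted variant. The exact estimator proposed
# in the paper may differ; this shows only the general construction.
rng = np.random.default_rng(2)
N = 2000
x_pop = rng.gamma(4.0, 2.0, N)                  # auxiliary variable (population)
y_pop = 3.0 * x_pop + rng.normal(0, 2.0, N)     # study variable, correlated with x

X_bar = x_pop.mean()                            # known population mean of x
M = np.median(x_pop)                            # known population median of x
Y_bar = y_pop.mean()                            # target (unknown in practice)

idx = rng.choice(N, size=100, replace=False)    # simple random sample
y_bar, x_bar = y_pop[idx].mean(), x_pop[idx].mean()

ratio_est = y_bar * X_bar / x_bar               # classical ratio estimator
median_est = y_bar * (X_bar + M) / (x_bar + M)  # one median-adjusted variant
```

Both estimators exploit the strong x-y correlation, so their error is driven mainly by the residual noise rather than by the full variance of y.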
International Nuclear Information System (INIS)
Prieur, G.; Nadi, M.; Hedjiedj, A.; Weber, S.
1995-01-01
This second chapter on instrumentation offers some general considerations on the history and classification of instrumentation, followed by two specific state-of-the-art reviews. The first concerns NMR (block diagram of the instrumentation chain, with details on the magnets, gradients, probes and reception unit). The second concerns precision instrumentation (optical-fibre gyrometer and scanning electron microscope) and its data processing tools (programmability, the VXI standard and its history). The chapter ends with future trends in smart sensors and Field Emission Displays. (D.L.). Refs., figs
Sharma, Nivita D
2017-09-01
Several explanations for the inconsistent results on the effects of breastfeeding on childhood asthma have been suggested. The purpose of this study was to investigate one unexplored explanation, which is the presence of a potential endogenous relationship between breastfeeding and childhood asthma. Endogeneity exists when an explanatory variable is correlated with the error term for reasons such as selection bias, reverse causality, and unmeasured confounders. Unadjusted endogeneity will bias the effect of breastfeeding on childhood asthma. To investigate potential endogeneity, a cross-sectional study of breastfeeding practices and incidence of childhood asthma in 87 pediatric patients in Georgia, the USA, was conducted using generalized linear modeling and a two-stage instrumental variable analysis. First, the relationship between breastfeeding and childhood asthma was analyzed without considering endogeneity. Second, tests for presence of endogeneity were performed and having detected endogeneity between breastfeeding and childhood asthma, a two-stage instrumental variable analysis was performed. The first stage of this analysis estimated the duration of breastfeeding and the second-stage estimated the risk of childhood asthma. When endogeneity was not taken into account, duration of breastfeeding was found to significantly increase the risk of childhood asthma (relative risk ratio [RR]=2.020, 95% confidence interval [CI]: [1.143-3.570]). After adjusting for endogeneity, duration of breastfeeding significantly reduced the risk of childhood asthma (RR=0.003, 95% CI: [0.000-0.240]). The findings suggest that researchers should consider evaluating how the presence of endogeneity could affect the relationship between duration of breastfeeding and the risk of childhood asthma. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.
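A generic two-stage least squares (2SLS) analysis of the kind the paper applies can be sketched on synthetic data; the variable names and effect sizes here are hypothetical, and the point is only to show how the second stage removes the confounding bias that a naive regression absorbs.

```python
import numpy as np

# Hedged sketch of generic two-stage least squares (2SLS) on synthetic
# data with an unmeasured confounder. All names and magnitudes are
# hypothetical, not taken from the breastfeeding/asthma study.
rng = np.random.default_rng(3)
n = 50_000
u = rng.normal(size=n)                  # unmeasured confounder
z = rng.normal(size=n)                  # instrument: affects exposure only
exposure = 0.8 * z + u + rng.normal(size=n)
outcome = 2.0 * exposure + 3.0 * u + rng.normal(size=n)  # true effect = 2.0

# Naive OLS slope is biased upward because u drives both variables.
ols = np.cov(exposure, outcome)[0, 1] / np.var(exposure, ddof=1)

# Stage 1: predict exposure from the instrument.
# Stage 2: regress the outcome on the stage-1 prediction.
stage1 = np.cov(z, exposure)[0, 1] / np.var(z, ddof=1)
fitted = stage1 * z
iv = np.cov(fitted, outcome)[0, 1] / np.var(fitted, ddof=1)
```

The IV slope reduces algebraically to cov(z, outcome)/cov(z, exposure), the Wald ratio, and recovers the true effect while the naive slope does not.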
International Nuclear Information System (INIS)
Decreton, M.
2000-01-01
SCK-CEN's research and development programme on instrumentation aims at evaluating the potentials of new instrumentation technologies under the severe constraints of a nuclear application. It focuses on the tolerance of sensors to high radiation doses, including optical fibre sensors, and on the related intelligent data processing needed to cope with the nuclear constraints. Main achievements in these domains in 1999 are summarised
Energy Technology Data Exchange (ETDEWEB)
Decreton, M
2001-04-01
SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan; Genton, Marc G.
2010-01-01
which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n
Estimations of natural variability between satellite measurements of trace species concentrations
Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.
2017-12-01
In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.
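The coincident-sampling step described above can be illustrated with a toy matcher: pair each measurement from one instrument with the nearest-in-time measurement of the other, keep only pairs within a coincidence criterion, and report the spread of their differences. All values and criteria below are illustrative, not those of the ACE-FTS/OSIRIS study.

```python
import numpy as np

# Toy sketch of coincidence matching between two instruments' measurement
# lists: pair measurements within a time window and report the spread of
# their differences. Values and criteria are illustrative only.
rng = np.random.default_rng(4)
t_a = np.sort(rng.uniform(0, 1000, 300))     # instrument A times (min)
t_b = np.sort(rng.uniform(0, 1000, 300))     # instrument B times (min)
value_a = 5.0 + rng.normal(0, 0.2, 300)      # measured value + variability
value_b = 5.0 + rng.normal(0, 0.2, 300)

max_dt = 2.0                                 # coincidence criterion (min)
diffs = []
for ta, va in zip(t_a, value_a):
    j = np.argmin(np.abs(t_b - ta))          # nearest B measurement in time
    if abs(t_b[j] - ta) <= max_dt:
        diffs.append(va - value_b[j])
diffs = np.array(diffs)
spread = diffs.std()                         # spread of coincident differences
```

Tightening `max_dt` shrinks the pair count; in a real study the variability contribution would be read from model fields sampled at the coincidences rather than from synthetic noise as here.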
International Nuclear Information System (INIS)
Decreton, M.
2002-01-01
SCK-CEN's R and D programme on instrumentation involves the development of advanced instrumentation systems for nuclear applications as well as the assessment of the performance of these instruments in a radiation environment. Particular emphasis is on the use of optical fibres as umbilical links of a remote handling unit for use during maintenance of a fusion reactor; studies on the radiation hardening of plasma diagnostic systems; investigations on new instrumentation for the future MYRRHA accelerator driven system; space applications related to radiation-hardened lenses; the development of new approaches for dose, temperature and strain measurements; the assessment of radiation-hardened sensors and motors for remote handling tasks; and studies of dose measurement systems, including the use of optical fibres. Progress and achievements in these areas for 2001 are described.
Estimation of power system variability due to wind power
Papaefthymiou, G.; Verboomen, J.; Van der Sluis, L.
2007-01-01
The incorporation of wind power generation into the power system leads to an increase in the variability of the system power flows. The assessment of this variability is necessary for planning the required system reinforcements. For the assessment of this variability, the uncertainty in the…
Brown, C.; Carriquiry, M.; Souza Filho, F. A.
2006-12-01
Hydroclimatological variability presents acute challenges to urban water supply providers. The impact is often most severe in developing nations where hydrologic and climate variability can be very high, water demand is unmet and increasing, and the financial resources to mitigate the social effects of that variability are limited. Furthermore, existing urban water systems face a reduced solution space, constrained by competing and conflicting interests, such as irrigation demand, recreation and hydropower production, and new (relative to system design) demands to satisfy environmental flow requirements. These constraints magnify the impacts of hydroclimatic variability and increase the vulnerability of urban areas to climate change. The high economic and social costs of structural responses to hydrologic variability, such as groundwater utilization and the construction or expansion of dams, create a need for innovative alternatives. Advances in hydrologic and climate forecasting, and the increasing sophistication and acceptance of incentive-based mechanisms for achieving economically efficient water allocation offer potential for improving the resilience of existing water systems to the challenge of variable supply. This presentation will explore the performance of a system of climate informed economic instruments designed to facilitate the reduction of hydroclimatologic variability-induced impacts on water-sensitive stakeholders. The system is comprised of bulk water option contracts between urban water suppliers and agricultural users and insurance indexed on reservoir inflows designed to cover the financial needs of the water supplier in situations where the option is likely to be exercised. Contract and insurance parameters are linked to forecasts and the evolution of seasonal precipitation and streamflow and designed for financial and political viability. A simulation of system performance is presented based on ongoing work in Metro Manila, Philippines. The
International Nuclear Information System (INIS)
Umminger, K.
2008-01-01
A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of e.g. pressures, temperatures, pressure differences and single phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However, some special instrumentation such as online concentration measurement for boric acid in the water phase or for non-condensables in steam atmosphere, as well as flow visualization techniques, were further developed and successfully applied during the recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in the local instrumentation technology for two-phase flow by the application of new sensor techniques, optical or beam methods and electronic technology. This paper will give insight into the current state of instrumentation technology for safety-related thermohydraulic experiments. Advantages and limitations of some measurement processes and systems will be indicated as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors will also be mentioned.
International Nuclear Information System (INIS)
Buehrer, W.
1996-01-01
The present paper provides a basic knowledge of the most commonly used experimental techniques. We discuss the principles and concepts necessary to understand what one is doing when performing an experiment on a certain instrument. (author) 29 figs., 1 tab., refs
Estimating variability in placido-based topographic systems.
Kounis, George A; Tsilimbaris, Miltiadis K; Kymionis, George D; Ginis, Harilaos S; Pallikaris, Ioannis G
2007-10-01
To describe a new software tool for the detailed presentation of corneal topography measurements variability by means of color-coded maps. Software was developed in Visual Basic to analyze and process a series of 10 consecutive measurements obtained by a topographic system on calibration spheres, and individuals with emmetropic, low, high, and irregular astigmatic corneas. Corneal surface was segmented into 1200 segments and the coefficient of variance of each segment's keratometric dioptric power was used as the measure of variability. The results were presented graphically in color-coded maps (Variability Maps). Two topographic systems, the TechnoMed C-Scan and the TOMEY Topographic Modeling System (TMS-2N), were examined to demonstrate our method. Graphic representation of coefficient of variance offered a detailed representation of examination variability both in calibration surfaces and human corneas. It was easy to recognize an increase in variability, as the irregularity of examination surfaces increased. In individuals with high and irregular astigmatism, a variability pattern correlated with the pattern of corneal topography: steeper corneal areas possessed higher variability values compared with flatter areas of the same cornea. Numerical data permitted direct comparisons and statistical analysis. We propose a method that permits a detailed evaluation of the variability of corneal topography measurements. The representation of the results both graphically and quantitatively improves interpretability and facilitates a spatial correlation of variability maps with original topography maps. Given the popularity of topography based custom refractive ablations of the cornea, it is possible that variability maps may assist clinicians in the evaluation of corneal topography maps of patients with very irregular corneas, before custom ablation procedures.
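A minimal sketch of the variability-map computation, under assumed numbers (1200 segments, 10 repeated measurements, synthetic dioptric powers): compute the per-segment coefficient of variation across the repeats. The steeper-means-more-variable pattern emerges here only because the synthetic noise is constructed that way.

```python
import numpy as np

# Minimal sketch of the variability-map idea: for each corneal segment,
# take 10 repeated keratometric power readings and compute the per-segment
# coefficient of variation (CV). Segment count and values are illustrative.
rng = np.random.default_rng(5)
n_segments, n_repeats = 1200, 10
base_power = rng.uniform(42.0, 46.0, n_segments)       # dioptres per segment
noise_sd = np.where(base_power > 45.0, 0.30, 0.05)     # steeper -> noisier (assumed)
readings = base_power + rng.normal(0, 1, (n_repeats, n_segments)) * noise_sd

cv = readings.std(axis=0, ddof=1) / readings.mean(axis=0)  # per-segment CV
steep_cv = cv[base_power > 45.0].mean()
flat_cv = cv[base_power <= 45.0].mean()
```

In the software described above, `cv` would then be rendered as a color-coded map alongside the original topography map for spatial comparison.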
A Tool for Estimating Variability in Wood Preservative Treatment Retention
Patricia K. Lebow; Adam M. Taylor; Timothy M. Young
2015-01-01
Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...
International Nuclear Information System (INIS)
Muehllehner, G.; Colsher, J.G.
1982-01-01
This chapter reviews the parameters which are important to positron-imaging instruments. It summarizes the options which various groups have explored in designing tomographs and the methods which have been developed to overcome some of the limitations inherent in the technique as well as in present instruments. The chapter is not presented as a defense of positron imaging versus single-photon or other imaging modality, neither does it contain a description of various existing instruments, but rather stresses their common properties and problems. Design parameters which are considered are resolution, sampling requirements, sensitivity, methods of eliminating scattered radiation, random coincidences and attenuation. The implementation of these parameters is considered, with special reference to sampling, choice of detector material, detector ring diameter and shielding and variations in point spread function. Quantitation problems discussed are normalization, and attenuation and random corrections. Present developments mentioned are noise reduction through time-of-flight-assisted tomography and signal to noise improvements through high intrinsic resolution. Extensive bibliography. (U.K.)
Energy Technology Data Exchange (ETDEWEB)
Viskari, T.
2012-07-01
Atmospheric aerosol particles have several important effects on the environment and human society. The exact impact of aerosol particles is largely determined by their particle size distributions. However, no single instrument is able to measure the whole range of the particle size distribution. Estimating a particle size distribution from multiple simultaneous measurements remains a challenge in aerosol physical research. Current methods to combine different measurements require assumptions concerning the overlapping measurement ranges and have difficulties in accounting for measurement uncertainties. In this thesis, the Extended Kalman Filter (EKF) is presented as a promising method to estimate particle number size distributions from multiple simultaneous measurements. The particle number size distribution estimated by EKF includes information from prior particle number size distributions as propagated by a dynamical model and is based on the reliabilities of the applied information sources. Known physical processes and dynamically evolving error covariances constrain the estimate both over time and particle size. The method was tested with measurements from a Differential Mobility Particle Sizer (DMPS), an Aerodynamic Particle Sizer (APS) and a nephelometer. The particle number concentration was chosen as the state of interest. The initial EKF implementation presented here includes simplifications, yet the results are positive and the estimate successfully incorporated information from the chosen instruments. For particle sizes smaller than 4 micrometers, the estimate fits the available measurements and smooths the particle number size distribution over both time and particle diameter. The estimate has difficulties with particles larger than 4 micrometers due to issues with both measurements and the dynamical model in that particle size range. The EKF implementation appears to reduce the impact of measurement noise on the estimate, but has a delayed reaction to sudden changes in the particle number size distribution.
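The core of the EKF approach described above — propagating a prior estimate with a dynamical model and weighting each new measurement by its reliability — can be illustrated with a minimal one-dimensional (linear) Kalman filter. The random-walk model and all numbers here are illustrative, not the thesis's aerosol model:

```python
def kalman_step(x, P, z, R, Q=0.01):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance
    z, R: measurement and its variance (the instrument's 'reliability')
    Q:    process noise added by the (here trivial random-walk) dynamical model
    """
    # Predict: a random-walk model keeps the state and inflates its uncertainty.
    x_pred, P_pred = x, P + Q
    # Update: the Kalman gain weights the measurement by relative reliability.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Fuse a precise instrument (R = 0.1) and a noisy one (R = 2.0) measuring ~10.
x, P = 0.0, 100.0                      # vague prior
x, P = kalman_step(x, P, 10.2, 0.1)    # precise measurement dominates
x, P = kalman_step(x, P, 9.0, 2.0)     # noisy measurement nudges only slightly
```

The same weighting logic, generalized to a state vector over many size bins and a nonlinear dynamical model, is what lets the EKF merge overlapping instruments without ad hoc assumptions about their ranges.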
Fletcher, Jason M
2015-07-01
This paper provides some of the first evidence of peer effects in college enrollment decisions. There are several empirical challenges in assessing the influences of peers in this context, including the endogeneity of high school, shared group-level unobservables, and identifying policy-relevant parameters of social interactions models. This paper addresses these issues by using an instrumental variables/fixed effects approach that compares students in the same school but different grade-levels who are thus exposed to different sets of classmates. In particular, plausibly exogenous variation in peers' parents' college expectations are used as an instrument for peers' college choices. Preferred specifications indicate that increasing a student's exposure to college-going peers by ten percentage points is predicted to raise the student's probability of enrolling in college by 4 percentage points. This effect is roughly half the magnitude of growing up in a household with married parents (vs. an unmarried household). Copyright © 2015 Elsevier Inc. All rights reserved.
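For a single instrument, the instrumental-variables logic used in this study reduces to the ratio (Wald) estimator: the instrument–outcome covariance divided by the instrument–exposure covariance. A minimal sketch with simulated data (all numbers and the data-generating process are illustrative):

```python
import random

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def iv_ratio(z, x, y):
    """Wald/ratio IV estimate of the causal effect of x on y."""
    return cov(z, y) / cov(z, x)

random.seed(0)
n, beta = 20000, 0.4                               # true causal effect 0.4
u = [random.gauss(0, 1) for _ in range(n)]         # unobserved confounder
z = [random.gauss(0, 1) for _ in range(n)]         # instrument: affects x only
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [beta * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

b_iv = iv_ratio(z, x, y)
b_ols = cov(x, y) / cov(x, x)   # naive regression, biased by the confounder
```

The IV estimate recovers roughly the true effect of 0.4, while the naive slope is biased upward by the shared confounder — the same problem that shared group-level unobservables pose for peer-effect regressions.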
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
Agirdas, Cagdas; Krebs, Robert J; Yano, Masato
2018-01-08
One goal of the Affordable Care Act is to increase insurance coverage by improving competition and lowering premiums. To facilitate this goal, the federal government enacted online marketplaces in the 395 rating areas spanning 34 states that chose not to establish their own state-run marketplaces. The few multivariate regression studies analyzing the effects of competition on premiums suffer from endogeneity due to simultaneity and omitted-variable biases. However, United Healthcare's decision to enter these marketplaces in 2015 provides the researcher with an opportunity to address this endogeneity problem. Exploiting the variation caused by United Healthcare's entry decision as an instrument for competition, we study the impact of competition on premiums during the first 2 years of these marketplaces. Combining panel data from five different sources and controlling for 12 variables, we find that one more insurer in a rating area leads to a 6.97% reduction in the second-lowest-priced silver plan premium, which is larger than the estimated effects in the existing literature. Furthermore, we run a threshold analysis and find that competition's effects on premiums become statistically insignificant if there are four or more insurers in a rating area. These findings are robust to alternative measures of premiums, inclusion of a non-linear term in the regression models and a county-level analysis.
Directory of Open Access Journals (Sweden)
Isa Mona
2016-01-01
This paper is a preliminary study on rationalising green office building investments in Malaysia. It aims to introduce the application of Rasch measurement model analysis to determine the validity and reliability of each construct in the questionnaire. To achieve this objective, a questionnaire survey consisting of six sections was developed, and a total of 106 responses were received from various investors who own and lease office buildings in Kuala Lumpur. The Rasch measurement analysis is used for quality control of the item constructs in the instrument by measuring specific objectivity within the same dimension, reducing ambiguous measures, and providing a realistic estimation of precision and implicit quality. The Rasch analysis comprises summary statistics, item unidimensionality and item measures. Results show that item and person (respondent) reliability are 0.91 and 0.95, respectively.
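The Rasch model underlying this analysis expresses the probability that a person of ability θ endorses an item of difficulty b as a logistic function of θ − b; reliability indices are then derived from the spread of the resulting estimates. A minimal sketch of the response model itself (the parameter values are illustrative):

```python
from math import exp

def rasch_probability(theta, b):
    """Rasch model: P(endorse) = exp(theta - b) / (1 + exp(theta - b))."""
    return exp(theta - b) / (1.0 + exp(theta - b))

# A person whose ability equals the item difficulty has a 50% endorsement chance;
# easier items (lower b) are endorsed with higher probability.
p_match = rasch_probability(0.5, 0.5)
p_easy = rasch_probability(0.5, -1.0)
```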
Luque, Pablo; Mántaras, Daniel A.; Fidalgo, Eloy; Álvarez, Javier; Riva, Paolo; Girón, Pablo; Compadre, Diego; Ferran, Jordi
2013-12-01
The main objective of this work is to determine the limit of safe driving conditions by identifying the maximal friction coefficient in a real vehicle. The study focuses on finding a method to determine this limit before the skid is reached, which is valuable information in the context of traffic safety. Since it is not possible to measure the friction coefficient directly, it is estimated using the appropriate tools in order to obtain the most accurate information. A real vehicle is instrumented to collect information on general kinematics and steering tie-rod forces. A real-time algorithm is developed to estimate forces and aligning torque in the tyres using an extended Kalman filter and neural network techniques. The methodology is based on determining the aligning torque; this variable allows evaluation of the behaviour of the tyre. It transmits useful information from the tyre-road contact and can be used to predict the maximal tyre grip and safety margin. The maximal grip coefficient is estimated according to a knowledge base extracted from computer simulation of a highly detailed three-dimensional model, using Adams® software. The proposed methodology is validated and applied to real driving conditions, in which maximal grip and safety margin are properly estimated.
agronomic performance and estimate of genetic variability of upland ...
African Journals Online (AJOL)
Only a fragment of this abstract is available: the study assessed agronomic performance and estimated genetic variability of upland rice genotypes, applying analysis of variance (ANOVA) following Gomez and Gomez (1984) to guide selection of genotypes for increased grain yield.
Hemispherical photography to estimate biophysical variables of cotton
Directory of Open Access Journals (Sweden)
Ziany N. Brandão
The Leaf Area Index (LAI) is a key parameter to evaluate the vegetation spectral response, estimating plant nutrition and water requirements. However, in large fields it is difficult to obtain accurate data for LAI determination. Therefore, the objective of this study was the estimation of LAI, biomass and yield of irrigated cotton through digital hemispherical photography. The treatments consisted of four nitrogen doses (0, 90, 180 and 270 kg ha-1) and four phosphorus doses (0, 120, 240 and 360 kg ha-1). Digital hemispherical photographs were collected under similar sky brightness conditions at 60 and 75 days after emergence (DAE), performed by the Digital Plant Canopy Imager - CI-110® of CID Inc. Biomass and LAI measurements were made on the same dates. LAI was also determined by destructive and non-destructive methods through a leaf area integrator (LI-COR® LI-3100C model) and by measurements based on the midrib length of all leaves, respectively. The results indicate that the hemispherical images were appropriate to estimate the LAI and biomass production of irrigated cotton, while for the estimation of yield, more research is needed to improve the method.
Klein, T.J.
2009-01-01
Recent studies debate how the unobserved dependence between the monetary return to college education and selection into college can be characterized. This paper examines this question using British data. We develop a semiparametric local instrumental variables estimator for identified features of a
International Nuclear Information System (INIS)
Filippini, Massimo; Hunt, Lester C.; Zorić, Jelena
2014-01-01
The promotion of energy efficiency is seen as one of the top priorities of EU energy policy (EC, 2010). In order to design and implement effective energy policy instruments, it is necessary to have information on energy demand price and income elasticities in addition to sound indicators of energy efficiency. This research combines the approaches taken in energy demand modelling and frontier analysis in order to econometrically estimate the level of energy efficiency for the residential sector in the EU-27 member states for the period 1996 to 2009. The estimates for the energy efficiency confirm that the EU residential sector indeed holds a relatively high potential for energy savings from reduced inefficiency. Therefore, despite the common objective to decrease ‘wasteful’ energy consumption, considerable variation in energy efficiency between the EU member states is established. Furthermore, an attempt is made to evaluate the impact of energy-efficiency measures undertaken in the EU residential sector by introducing an additional set of variables into the model and the results suggest that financial incentives and energy performance standards play an important role in promoting energy efficiency improvements, whereas informative measures do not have a significant impact. - Highlights: • The level of energy efficiency of the EU residential sector is estimated. • Considerable potential for energy savings from reduced inefficiency is established. • The impact of introduced energy-efficiency policy measures is also evaluated. • Financial incentives are found to promote energy efficiency improvements. • Energy performance standards also play an important role
Comparing proxy and model estimates of hydroclimate variability and change over the Common Era
Hydro2k Consortium, Pages
2017-12-01
Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons, and how they can better inform our understanding of past and future hydroclimate change.
Time and space variability of spectral estimates of atmospheric pressure
Canavero, Flavio G.; Einaudi, Franco
1987-01-01
The temporal and spatial behaviors of atmospheric pressure spectra over northern Italy and the Alpine massif were analyzed using surface pressure measurements carried out at two microbarograph stations in the Po Valley, one 50 km south of the Alps, the other in the foothills of the Dolomites. The first 15 days of the study overlapped with the Alpex Intensive Observation Period. The pressure records were found to be intrinsically nonstationary and to display substantial time variability, implying that the statistical moments depend on time. The shape and the energy content of the spectra depended on the time segment considered. In addition, important differences existed between the spectra obtained at the two stations, indicating a substantial effect of topography, particularly for periods less than 40 min.
Assessing Mucoadhesion in Polymer Gels: The Effect of Method Type and Instrument Variables
Directory of Open Access Journals (Sweden)
Jéssica Bassi da Silva
2018-03-01
The process of mucoadhesion has been widely studied using a wide variety of methods, which are influenced by instrumental variables and experiment design, making the comparison between the results of different studies difficult. The aim of this work was to standardize the conditions of the detachment test and the rheological methods of mucoadhesion assessment for semisolids, and to introduce a texture profile analysis (TPA) method. A factorial design was developed to suggest standard conditions for performing the detachment force method. To evaluate the method, binary polymeric systems were prepared containing poloxamer 407 and Carbopol 971P®, Carbopol 974P®, or Noveon® Polycarbophil. The mucoadhesion of the systems was evaluated, and the reproducibility of these measurements investigated. The detachment force method was demonstrated to be reproducible, and yielded different adhesion values when a mucin disk or ex vivo oral mucosa was used. The factorial design demonstrated that all evaluated parameters had an effect on measurements of mucoadhesive force, but the same was not observed for the work of adhesion. It was suggested that the work of adhesion is a more appropriate metric for evaluating mucoadhesion. Oscillatory rheology was more capable of investigating adhesive interactions than flow rheology. The TPA method was demonstrated to be reproducible and can evaluate the adhesiveness interaction parameter. This investigation demonstrates the need for standardized methods to evaluate mucoadhesion and makes suggestions for a standard study design.
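The distinction drawn above between peak detachment force and work of adhesion can be made concrete: the work of adhesion is the area under the force–displacement curve during probe withdrawal, which a trapezoidal rule recovers from sampled data. A sketch (the curve values are illustrative):

```python
def work_of_adhesion(displacement, force):
    """Area under the force-displacement detachment curve (trapezoidal rule)."""
    area = 0.0
    for i in range(1, len(displacement)):
        dx = displacement[i] - displacement[i - 1]
        area += 0.5 * (force[i] + force[i - 1]) * dx
    return area

# Illustrative withdrawal curve: force (N) sampled at displacements (mm).
disp = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
force = [0.0, 0.8, 1.2, 0.9, 0.3, 0.0]
peak = max(force)                       # detachment (peak) force metric
work = work_of_adhesion(disp, force)    # work of adhesion, N*mm
```

Two formulations can share the same peak force yet differ markedly in work of adhesion, which is one reason the integrated metric can be the more informative of the two.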
Energy Technology Data Exchange (ETDEWEB)
Miller, N. J.; Marriage, T. A.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Harrington, K.; Rostem, K.; Watts, D. J. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Chuss, D. T. [Department of Physics, Villanova University, 800 E Lancaster, Villanova, PA 19085 (United States); Wollack, E. J.; Fixsen, D. J.; Moseley, S. H.; Switzer, E. R., E-mail: Nathan.J.Miller@nasa.gov [Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
2016-02-20
Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.
Center of gravity estimation using a reaction board instrumented with fiber Bragg gratings
Oliveira, Rui; Roriz, Paulo; Marques, Manuel B.; Frazão, Orlando
2018-03-01
The purpose of the present work is to construct a reaction board based on fiber Bragg gratings (FBGs) that could be used for estimation of the 2D coordinates of the projection of the center of gravity (CG) of an object. The apparatus consists of a rigid equilateral triangular board mounted on three supports at the vertices, two of which have cantilevers instrumented with FBGs. When an object of known weight is placed on the board, the bending strain of the cantilevers is measured by a proportional wavelength shift of the FBGs. Applying the equilibrium conditions of a rigid body and proper calibration procedures, the wavelength shift is used to estimate the vertical reaction forces and moments of force at the supports and the coordinates of the object's CG projection on the board. This method can be used on a regular basis to estimate the CG of the human body or objects with complex geometry and density distribution. An example is provided for the estimation of the CG projection coordinates of two orthopaedic femur bone models, one intact, and the other with a hip stem implant encased. The clinical implications of changing the normal CG location by means of a prosthesis are discussed.
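The rigid-body equilibrium step described above has a compact form: with the board's own weight calibrated out, the moment balance about each axis gives the CG projection as the reaction-force-weighted average of the support coordinates. A sketch for an equilateral triangular board (geometry and forces illustrative):

```python
def cg_projection(supports, reactions):
    """2D CG projection from vertical reaction forces at the supports.
    Moment balance about each axis: x_cg = sum(R_i * x_i) / W, same for y."""
    W = sum(reactions)  # total weight equals the sum of vertical reactions
    x = sum(r * p[0] for r, p in zip(reactions, supports)) / W
    y = sum(r * p[1] for r, p in zip(reactions, supports)) / W
    return x, y

# Equilateral triangle supports (side 1 m) and measured reactions (N).
supports = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.75 ** 0.5)]
reactions = [30.0, 30.0, 30.0]   # symmetric load
x, y = cg_projection(supports, reactions)
# Equal reactions place the CG at the centroid of the triangle.
```

In the instrument itself, the reactions would come from the calibrated FBG wavelength shifts rather than being given directly.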
Variable disparity-motion estimation based fast three-view video coding
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 s/frame, respectively.
A new virtual instrument for estimating punch velocity in combat sports.
Urbinati, K S; Scheeren, E; Nohama, P
2013-01-01
To improve performance in combat sports, especially percussive ones, it is necessary to achieve high velocity in punches and kicks. The aim of this study was to evaluate the applicability of 3D accelerometry in a Virtual Instrumentation System (VIS) designed for estimating punch velocity in combat sports. It was conducted in two phases: (1) integration of the 3D accelerometer with the communication interface and software for processing and visualization, and (2) applicability of the system. Fifteen karate athletes performed five gyaku zuki type punches (with reverse leg) using the accelerometer on the 3rd metacarpal on the back of the hand. A nonparametric Mann-Whitney U-test was performed to determine differences in the mean linear velocity among three punches performed sequentially.
A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem
Delaigle, Aurore
2009-03-01
Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We provide not only a solution to a long-standing open problem, but also methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions.
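For context, the error-free order-p local polynomial estimator solves a kernel-weighted least-squares fit around each evaluation point; the errors-in-variables extension discussed above replaces the kernel with a deconvolution kernel. A minimal error-free local linear (p = 1) sketch in pure Python (bandwidth and data illustrative):

```python
from math import exp

def local_linear(xs, ys, x0, h):
    """Local polynomial estimator of order 1: a weighted least-squares line
    fitted around x0 with a Gaussian kernel of bandwidth h; returns m(x0)."""
    w = [exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    # Weighted normal equations for y ~ a + b*(x - x0); the intercept a is m(x0).
    s0 = sum(w)
    s1 = sum(wi * (x - x0) for wi, x in zip(w, xs))
    s2 = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    return (s2 * t0 - s1 * t1) / (s0 * s2 - s1 * s1)

# Noise-free linear data: a local linear fit reproduces a line exactly,
# which is part of the design-adaptivity property mentioned in the abstract.
xs = [i / 10 for i in range(21)]
ys = [2.0 + 3.0 * x for x in xs]
m_hat = local_linear(xs, ys, 1.0, 0.3)
```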
Validity of Two New Brief Instruments to Estimate Vegetable Intake in Adults
Directory of Open Access Journals (Sweden)
Janine Wright
2015-08-01
Cost-effective population-based monitoring tools are needed for nutritional surveillance and interventions. The aim was to evaluate the relative validity of two new brief instruments (three-item VEG3 and five-item VEG5) for estimating usual total vegetable intake in comparison to a 7-day dietary record (7DDR). Sixty-four Australian adult volunteers aged 30 to 69 years participated (30 males, mean age ± SD 56.3 ± 9.2 years, and 34 females, mean age ± SD 55.3 ± 10.0 years). Pearson correlations between the 7DDR and VEG3 and VEG5 were modest, at 0.50 and 0.56, respectively. VEG3 significantly (p < 0.001) underestimated mean vegetable intake compared to 7DDR measures (2.9 ± 1.3 vs. 3.6 ± 1.6 serves/day, respectively), whereas mean vegetable intake assessed by VEG5 did not differ from 7DDR measures (3.3 ± 1.5 vs. 3.6 ± 1.6 serves/day). VEG5 was also able to correctly identify 95%, 88% and 75% of those subjects not consuming five, four and three serves/day of vegetables, respectively, according to their 7DDR classification. VEG5, but not VEG3, can estimate the usual total vegetable intake of population groups and had superior performance to VEG3 in identifying those not meeting different levels of vegetable intake. VEG5, a brief instrument, shows measurement characteristics useful for population-based monitoring and intervention targeting.
Stable Graphical Model Estimation with Random Forests for Discrete, Continuous, and Mixed Variables
Fellinghauer, Bernd; Bühlmann, Peter; Ryffel, Martin; von Rhein, Michael; Reinhardt, Jan D.
2011-01-01
A conditional independence graph is a concise representation of pairwise conditional independence among many variables. Graphical Random Forests (GRaFo) are a novel method for estimating pairwise conditional independence relationships among mixed-type, i.e. continuous and discrete, variables. The number of edges is a tuning parameter in any graphical model estimator and there is no obvious number that constitutes a good choice. Stability Selection helps to choose this parameter with respect to...
Directory of Open Access Journals (Sweden)
Buckley Norman
2010-10-01
Background The Internet is used increasingly by providers as a tool for disseminating pain-related health information and by patients as a resource about health conditions and treatment options. However, health information on the Internet remains unregulated and varies in quality, accuracy and readability. The objective of this study was to determine the quality of pain websites, and to explain the variability in quality and readability between pain websites. Methods Five key terms (pain, chronic pain, back pain, arthritis, and fibromyalgia) were entered into the Google, Yahoo and MSN search engines. Websites were assessed using the DISCERN instrument as a quality index. Grade level readability ratings were assessed using the Flesch-Kincaid Readability Algorithm. Univariate (using alpha = 0.20) and multivariable regression (using alpha = 0.05) analyses were used to explain the variability in DISCERN scores and grade level readability using potential for commercial gain, health-related seals of approval, language(s) and multimedia features as independent variables. Results A total of 300 websites were assessed; 21 were excluded in accordance with the exclusion criteria and 110 were duplicates, leaving 161 unique sites. About 6.8% (11/161) of the websites offered commercial products for pain conditions, 36.0% (58/161) had a health-related seal of approval, 75.8% (122/161) presented information in English only and 40.4% (65/161) offered an interactive multimedia experience. Across the unique websites, out of a maximum score of 80, the overall average DISCERN score was 55.9 (SD 13.6) and the mean grade level readability was 10.9 (SD 3.9). The multivariable regressions demonstrated that website seals of approval (P = 0.015) and potential for commercial gain (P = 0.189) were contributing factors to higher DISCERN scores, while seals of approval (P = 0.168) and interactive multimedia (P = 0.244) contributed to grade level readability.
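The Flesch-Kincaid grade level used in this study is a fixed formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A sketch with a naive vowel-group syllable counter (the counter is a rough assumption for illustration, not the validated algorithm the study would have used):

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels (real tools use dictionaries)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level of a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Dense, polysyllabic prose scores well above the ~8th-grade level usually
# recommended for patient-facing material.
grade = flesch_kincaid_grade("Pain is unpleasant. Management requires careful assessment.")
```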
Ferrari, G.; Kozarski, M.; Gu, Y. J.; De Lazzari, C.; Di Molfetta, A.; Palko, K. J.; Zielinski, K.; Gorczynska, K.; Darowski, M.; Rakhorst, G.
2008-01-01
Purpose: Application of a comprehensive, user-friendly, digital computer circulatory model to estimate hemodynamic and ventricular variables. Methods: The closed-loop lumped parameter circulatory model represents the circulation at the level of large vessels. A variable elastance model reproduces ventricular function.
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Estimating dew formation in rice, using seasonally averaged diel patterns of weather variables
Luo, W.; Goudriaan, J.
2004-01-01
If dew formation cannot be measured it has to be estimated. Available simulation models for estimating dew formation require hourly weather data as input. However, such data are not available for places without an automatic weather station. In such cases the seasonally averaged diel patterns of weather variables might be used instead.
Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia
2016-01-01
Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small watersheds, and the results identify where additional monitoring may be beneficial.
Estimating structural equation models with non-normal variables by using transformations
Montfort, van K.; Mooijaart, A.; Meijerink, F.
2009-01-01
We discuss structural equation models for non-normal variables. In this situation the maximum likelihood and the generalized least-squares estimates of the model parameters can give incorrect estimates of the standard errors and the associated goodness-of-fit chi-squared statistics. If the sample is drawn from a non-normal distribution, transformations of the variables can be used to mitigate these problems.
Linear solvation energy relationships: "rule of thumb" for estimation of variable values
Hickey, James P.; Passino-Reader, Dora R.
1991-01-01
For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give guidelines to estimate LSER variable values quickly for a vast array of possible organic compounds such as those found in the environment. The difficulty in generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.
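Once the variable values are assembled from fragment contributions, applying an LSER is a linear combination. A sketch of the generic form — every coefficient and increment below is a hypothetical placeholder for illustration, not a published value:

```python
def lser_estimate(coeffs, v100, pi_star, beta_m, alpha_m):
    """Generic LSER form: log(property) = c + m*(V/100) + s*pi* + b*beta + a*alpha.
    The coefficients passed in below are hypothetical, not published values."""
    c, m, s, b, a = coeffs
    return c + m * v100 + s * pi_star + b * beta_m + a * alpha_m

# Per the 'rule of thumb', a compound's variable values are assembled by summing
# the increments of its base structure and functional groups (values hypothetical).
v100 = 0.49 + 0.10                 # base-structure + substituent increments
pi_star, beta_m, alpha_m = 0.6, 0.45, 0.0

log_prop = lser_estimate((0.2, 5.0, -1.0, -3.5, -0.1),
                         v100, pi_star, beta_m, alpha_m)
```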
An automated performance budget estimator: a process for use in instrumentation
Laporte, Philippe; Schnetler, Hermine; Rees, Phil
2016-08-01
Present-day astronomy projects continue to increase in size and complexity, regardless of the wavelength domain, while risks in terms of safety, cost and operability have to be reduced to ensure an affordable total cost of ownership. All of these drivers have to be considered carefully during the development of an astronomy project, at the same time as there is a strong drive to shorten the development life-cycle. From the systems engineering point of view, this evolution is a significant challenge. Big instruments imply management of interfaces within large consortia and tight design-phase schedules, which necessitate efficient and rapid interactions between all the stakeholders to ensure, firstly, that the system is defined correctly and, secondly, that the designs will meet all the requirements. It is essential that team members respond quickly so that the time available to the design team is maximised. In this context, performance prediction tools can be very helpful during the concept phase of a project to help select the best design solution. In the first section of this paper we present the development of such a prediction tool, which can be used by the systems engineer to determine the overall performance of the system and to evaluate the impact on the science of the proposed design. The tool can also be used in "what-if" design analyses to assess the impact of design choices on the overall performance of the system. Having such a tool available from the beginning of a project allows for a faster turn-around, firstly between the design engineers and the systems engineer and secondly between the systems engineer and the instrument scientist. Following the first section, we describe the process for constructing a performance estimator tool and then describe three projects in which such a tool has been utilised to illustrate
Damé, Luc; Bolsée, David; Meftah, Mustapha; Irbah, Abdenour; Hauchecorne, Alain; Bekki, Slimane; Pereira, Nuno; Cessateur, Gaël; Marchand, Marion; et al.
2016-10-01
Accurate measurements of Solar Spectral Irradiance (SSI) are of primary importance for a better understanding of solar physics and of the impact of solar variability on climate (via Earth's atmospheric photochemistry). The acquisition of a top-of-atmosphere reference solar spectrum and of its temporal and spectral variability during the unusual solar cycle 24 is of prime interest for these studies. These measurements have been performed since April 2008 with the SOLSPEC spectro-radiometer from the far ultraviolet to the infrared (166 nm to 3088 nm). This instrument, developed under a fruitful LATMOS/BIRA-IASB collaboration, is part of the Solar Monitoring Observatory (SOLAR) payload, externally mounted on the Columbus module of the International Space Station (ISS). The SOLAR mission, with its current 8-year duration, will cover almost the entire solar cycle 24. We present here the in-flight operations and performance of the SOLSPEC instrument, including the engineering corrections, calibrations and improved procedures for aging corrections. Accordingly, an SSI reference spectrum from the UV to the NIR will be presented, together with its variability in the UV, as measured by SOLAR/SOLSPEC over 8 years. Uncertainties on these measurements and comparisons with other instruments will be briefly discussed.
Creel, Scott; Creel, Michael
2009-11-01
1. Sampling error in annual estimates of population size creates two widely recognized problems for the analysis of population growth. First, if sampling error is mistakenly treated as process error, one obtains inflated estimates of the variation in true population trajectories (Staples, Taper & Dennis 2004). Second, treating sampling error as process error is thought to overestimate the importance of density dependence in population growth (Viljugrein et al. 2005; Dennis et al. 2006). 2. In ecology, state-space models are used to account for sampling error when estimating the effects of density and other variables on population growth (Staples et al. 2004; Dennis et al. 2006). In econometrics, regression with instrumental variables is a well-established method that addresses the problem of correlation between regressors and the error term, but requires fewer assumptions than state-space models (Davidson & MacKinnon 1993; Cameron & Trivedi 2005). 3. We used instrumental variables to account for sampling error and fit a generalized linear model to 472 annual observations of population size for 35 Elk Management Units in Montana, from 1928 to 2004. We compared this model with state-space models fit with the likelihood function of Dennis et al. (2006). We discuss the general advantages and disadvantages of each method. Briefly, regression with instrumental variables is valid with fewer distributional assumptions, but state-space models are more efficient when their distributional assumptions are met. 4. Both methods found that population growth was negatively related to population density and winter snow accumulation. Summer rainfall and wolf (Canis lupus) presence had much weaker effects on elk (Cervus elaphus) dynamics [though limitation by wolves is strong in some elk populations with well-established wolf populations (Creel et al. 2007; Creel & Christianson 2008)]. 5. Coupled with predictions for Montana from global and regional climate models, our results
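The instrumental-variables idea in the abstract above can be illustrated with a toy two-stage least-squares (2SLS) computation. Everything below is a hypothetical simulation, not the elk data or model from the study: `z` plays the role of the instrument, and the measurement noise added to `x_obs` stands in for sampling error in the annual population counts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Instrument: correlated with the true regressor, independent of all noise terms
z = rng.normal(size=n)
x_true = 0.8 * z + rng.normal(size=n)
x_obs = x_true + rng.normal(scale=0.5, size=n)          # observed with sampling error
y = 1.0 + 2.0 * x_true + rng.normal(scale=0.3, size=n)  # true slope is 2.0

def ols(X, t):
    """Least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, t, rcond=None)[0]

# Naive OLS on the error-laden regressor: slope attenuated toward zero
beta_ols = ols(np.column_stack([np.ones(n), x_obs]), y)

# Two-stage least squares: project x_obs on the instrument, then regress y on the projection
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x_obs)
beta_iv = ols(np.column_stack([np.ones(n), x_hat]), y)

print(f"OLS slope {beta_ols[1]:.2f}, 2SLS slope {beta_iv[1]:.2f}")
```

Because the instrument is correlated with the true regressor but not with the measurement error, the second-stage slope is consistent, while the naive OLS slope is biased downward.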
SECOND ORDER LEAST SQUARE ESTIMATION ON ARCH(1) MODEL WITH BOX-COX TRANSFORMED DEPENDENT VARIABLE
Directory of Open Access Journals (Sweden)
Herni Utami
2014-03-01
Box-Cox transformation is often used to reduce heterogeneity and to achieve a symmetric distribution of the response variable. In this paper, we estimate the parameters of the Box-Cox transformed ARCH(1) model using the second-order least squares method, and we study the consistency and asymptotic normality of the second-order least squares (SLS) estimators. SLS estimation was introduced by Wang (2003, 2004) to estimate the parameters of nonlinear regression models with independent and identically distributed errors.
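For reference, the Box-Cox transform applied to the dependent variable has a simple closed form; the sketch below (plain NumPy, only the transform, not the SLS estimator from the paper) shows the power case and its log limit.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform of a positive response; the log transform is the lam -> 0 limit."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y**lam - 1.0) / lam

y = np.array([1.0, 2.0, 4.0, 8.0])
z_log = box_cox(y, 0.0)   # equals np.log(y)
z_lin = box_cox(y, 1.0)   # equals y - 1, i.e. just a shift
```

Choosing the exponent `lam` to symmetrise the distribution of the transformed response is exactly the role the transform plays in the ARCH(1) model above.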
A method of estimating GPS instrumental biases with a convolution algorithm
Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua
2018-03-01
This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual-frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four nearby IPPs were selected to build an equation with a convolution algorithm. In addition, the unknown DCB parameters were arranged into a set of equations with one day of GPS observations, under the assumption that DCBs do not vary within a day. The DCBs of satellites and receivers were then determined by solving the equation set with the least-squares fitting technique. The performance of the method was examined by applying it to 361 days in 2014, using observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The results were cross-compared with the DCBs estimated by the mesh method and with the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were good and showed improvements over the mesh method.
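The separability of satellite and receiver biases under the daily-constancy assumption can be sketched as a small least-squares problem. This toy version uses synthetic biases and omits the TEC term and the convolution step entirely; it only illustrates how stacking the observation equations plus a zero-mean constraint on the satellite DCBs yields a unique solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sat, n_rcv = 5, 4

# Synthetic "true" biases (ns); satellite DCBs made zero-mean for identifiability
dcb_sat = rng.normal(scale=2.0, size=n_sat)
dcb_sat -= dcb_sat.mean()
dcb_rcv = rng.normal(scale=3.0, size=n_rcv)

# Each satellite-receiver pair contributes one noisy equation: sat bias + rcv bias
rows, rhs = [], []
for s in range(n_sat):
    for r in range(n_rcv):
        row = np.zeros(n_sat + n_rcv)
        row[s] = 1.0
        row[n_sat + r] = 1.0
        rows.append(row)
        rhs.append(dcb_sat[s] + dcb_rcv[r] + rng.normal(scale=0.05))

# Zero-mean constraint on satellite DCBs removes the rank deficiency
rows.append(np.r_[np.ones(n_sat), np.zeros(n_rcv)])
rhs.append(0.0)

est = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
```

Without the constraint row, any constant could be shifted between the satellite and receiver biases; with it, both sets are recovered up to the observation noise.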
Estimation of Finite Population Ratio When Other Auxiliary Variables are Available in the Study
Directory of Open Access Journals (Sweden)
Jehad Al-Jararha
2014-12-01
The estimation of the population total $t_y$ by using one or more auxiliary variables, and of the population ratio $\theta_{xy}=t_y/t_x$, where $t_x$ is the population total of the auxiliary variable $X$, for a finite population is discussed extensively in the literature. In this paper, the estimation of the finite population ratio $\theta_{xy}$ is extended to exploit a further auxiliary variable $Z$ that is not used in the definition of the population ratio. This idea is motivated by the case in which $Z$ is more highly correlated with the variable of interest $Y$ than $X$ is; such an auxiliary variable can then be used to improve the precision of the estimate of the population ratio. To our knowledge, this idea has not been discussed in the literature. The bias, variance and mean squared error of our approach are given. In simulations from a real data set, the empirical relative bias and the empirical relative mean squared error are computed for our approach and for several estimators of the population ratio $\theta_{xy}$ proposed in the literature. Both the analytical and the simulation results show that, with suitable choices, our approach has negligible bias and a smaller mean squared error.
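The classical ratio estimator that this paper generalises can be sketched in a few lines. The numbers below are simulated, not the paper's real data set; the point is that exploiting a known auxiliary total $t_x$ sharpens the estimate of $t_y$ when $X$ and $Y$ are strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 10_000, 200

# Population: auxiliary variable x with known total, study variable y correlated with it
x = rng.gamma(shape=2.0, scale=5.0, size=N)
y = 3.0 * x + rng.normal(scale=4.0, size=N)

t_x = x.sum()                                   # population total of X, assumed known
idx = rng.choice(N, size=n, replace=False)      # simple random sample without replacement

# Plain expansion estimator vs. ratio estimator that borrows strength from t_x
t_y_expansion = N * y[idx].mean()
t_y_ratio = (y[idx].mean() / x[idx].mean()) * t_x

theta_hat = t_y_ratio / t_x                     # estimated population ratio t_y / t_x
```

Because the sample ratio varies far less than the sample mean of $y$ alone, the ratio estimator typically has much smaller variance here.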
Estimating decadal variability in sea level from tide gauge records: An application to the North Sea
Frederikse, Thomas; Riva, R.E.M.; Slobbe, Cornelis; Broerse, D.B.T.; Verlaan, Martin
2016-01-01
One of the primary observational data sets of sea level is represented by the tide gauge record. We propose a new method to estimate variability on decadal time scales from tide gauge data by using a state space formulation, which couples the direct observations to a predefined state space model by using a Kalman filter. The model consists of a time-varying trend and seasonal cycle, and variability induced by several physical processes, such as wind, atmospheric pressure changes and teleconne...
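A minimal version of the state-space idea is the local-level Kalman filter below. It is a sketch on synthetic data with assumed noise variances, not the authors' full model (which also includes a seasonal cycle and regressors for wind and atmospheric pressure).

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300

# Local-level model: a slowly drifting trend observed with large measurement noise
trend = np.cumsum(rng.normal(scale=0.05, size=T))
obs = trend + rng.normal(scale=1.0, size=T)

q, r = 0.05**2, 1.0**2   # process and observation noise variances (assumed known here)
x, p = 0.0, 10.0         # state estimate and its variance, diffuse start
filtered = []
for z in obs:
    p += q                # predict: trend random walk inflates the variance
    k = p / (p + r)       # Kalman gain
    x += k * (z - x)      # update with the new tide-gauge-style reading
    p *= (1.0 - k)
    filtered.append(x)

filtered = np.array(filtered)
```

After the initial transient, the filtered series tracks the slow trend with far smaller error than the raw observations, which is what makes decadal variability recoverable from noisy records.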
Stochastic Optimal Estimation with Fuzzy Random Variables and Fuzzy Kalman Filtering
Institute of Scientific and Technical Information of China (English)
FENG Yu-hu
2005-01-01
By constructing a mean-square performance index in the case of fuzzy random variables, the optimal estimation theorem for an unknown fuzzy state using fuzzy observation data is given. The state and output of a linear discrete-time dynamic fuzzy system with Gaussian noise are Gaussian fuzzy random variable sequences. An approach to fuzzy Kalman filtering is discussed. Fuzzy Kalman filtering contains two parts: a real-valued non-random recurrence equation and the standard Kalman filtering.
Accuracy of latent-variable estimation in Bayesian semi-supervised learning.
Yamazaki, Keisuke
2015-09-01
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than in unsupervised learning, and one concern is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of latent-variable estimation. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It was shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified.
International Nuclear Information System (INIS)
2008-01-01
During a period of five years, an international group of soil water instrumentation experts were contracted by the International Atomic Energy Agency to carry out a range of comparative assessments of soil water sensing methods under laboratory and field conditions. The detailed results of those studies are published elsewhere. Most of the devices examined worked well some of the time, but most also performed poorly in some circumstances. The group was also aware that the choice of a water measurement technology is often made for economic, convenience and other reasons, and that there was a need to be able to obtain the best results from any device used. The choice of a technology is sometimes not made by the ultimate user, or even if it is, the main constraint may be financial rather than technical. Thus, this guide is presented in a way that allows the user to obtain the best performance from any instrument, while also providing guidance as to which instruments perform best under given circumstances. That said, this expert group of the IAEA reached several important conclusions: (1) the field calibrated neutron moisture meter (NMM) remains the most accurate and precise method for soil profile water content determination in the field, and is the only indirect method capable of providing accurate soil water balance data for studies of crop water use, water use efficiency, irrigation efficiency and irrigation water use efficiency, with a minimum number of access tubes; (2) those electromagnetic sensors known as capacitance sensors exhibit much more variability in the field than either the NMM or direct soil water measurements, and they are not recommended for soil water balance studies for this reason (impractically large numbers of access tubes and sensors are required) and because they are rendered inaccurate by changes in soil bulk electrical conductivity (including temperature effects) that often occur in irrigated soils, particularly those containing
Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables
International Nuclear Information System (INIS)
Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter
2010-01-01
We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.
Directory of Open Access Journals (Sweden)
Lara Gitto
2015-08-01
Background: Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of its strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In spite of the great number of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, as likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity, such as, in this case, depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Methods: Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of which 43% were males and 57% females), aged from 19 to 75 years old, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio
Gitto, Lara; Noh, Yong-Hwan; Andrés, Antonio Rodríguez
2015-04-16
Depression is a mental health state whose frequency has been increasing in modern societies. It imposes a great burden, because of the strong impact on people's quality of life and happiness. Depression can be reliably diagnosed and treated in primary care: if more people could get effective treatments earlier, the costs related to depression would be reversed. The aim of this study was to examine the influence of socio-economic factors and gender on depressed mood, focusing on Korea. In fact, in spite of the great amount of empirical studies carried out for other countries, few epidemiological studies have examined the socio-economic determinants of depression in Korea, and they were either limited to samples of employed women or did not control for individual health status. Moreover, as the likely data endogeneity (i.e. the possibility of correlation between the dependent variable and the error term as a result of autocorrelation or simultaneity, such as, in this case, the depressed mood due to health factors that, in turn, might be caused by depression) might bias the results, the present study proposes an empirical approach, based on instrumental variables, to deal with this problem. Data for the year 2008 from the Korea National Health and Nutrition Examination Survey (KNHANES) were employed. About seven thousand people (N = 6,751, of which 43% were males and 57% females), aged from 19 to 75 years old, were included in the sample considered in the analysis. In order to take into account the possible endogeneity of some explanatory variables, two Instrumental Variables Probit (IVP) regressions were estimated; the variables for which instrumental equations were estimated were related to the participation of women in the workforce and to good health, as reported by people in the sample. Explanatory variables were related to age, gender, family factors (such as the number of family members and marital status) and socio-economic factors (such as education
Pence, Brian Wells; Miller, William C.; Gaynes, Bradley N.
2009-01-01
Prevalence and validation studies rely on imperfect reference standard (RS) diagnostic instruments that can bias prevalence and test characteristic estimates. The authors illustrate 2 methods to account for RS misclassification. Latent class analysis (LCA) combines information from multiple imperfect measures of an unmeasurable latent condition to…
Riek, Markus; Boehme, Rainer; Ciere, M.; Hernandez Ganan, C.; van Eeten, M.J.G.
2016-01-01
While cybercrime has existed for many years and is still reported to be a growing problem, reliable estimates of the economic impacts are rare. We develop a survey instrument tailored to measure the costs of consumer-facing cybercrime systematically, by aggregating different cost factors into direct
Haller, Bernhard; Ulm, Kurt
2018-02-20
To individualize treatment decisions based on patient characteristics, identification of an interaction between a biomarker and treatment is necessary. Often such potential interactions are analysed using data from randomized clinical trials intended for comparison of two treatments. Tests of interactions often lack statistical power, and we investigated if and how a consideration of further prognostic variables can improve power and decrease the bias of estimated biomarker-treatment interactions in randomized clinical trials with time-to-event outcomes. A simulation study was performed to assess how prognostic factors affect the estimate of the biomarker-treatment interaction for a time-to-event outcome when different approaches, such as ignoring other prognostic factors, including all available covariates, or using variable selection strategies, are applied. Different scenarios regarding the proportion of censored observations, the correlation structure between the covariate of interest and further potential prognostic variables, and the strength of the interaction were considered. The simulation study revealed that in a regression model for estimating a biomarker-treatment interaction, the probability of detecting a biomarker-treatment interaction can be increased by including prognostic variables that are associated with the outcome, and that the interaction estimate is biased when relevant prognostic variables are not considered. However, the probability of a false-positive finding increases if too many potential predictors are included or if variable selection is performed inadequately. We recommend undertaking an adequate literature search before data analysis to derive information about potential prognostic variables, to gain power for detecting true interaction effects, and pre-specifying analyses to avoid selective reporting and increased false-positive rates.
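The central simulation finding, that adjusting for a prognostic variable sharpens the biomarker-treatment interaction estimate, can be reproduced in miniature with a linear outcome. The paper itself studies time-to-event outcomes; everything below is a hypothetical linear analogue with made-up effect sizes.

```python
import numpy as np

rng = np.random.default_rng(4)

def interaction_se(adjust, n=400, reps=200):
    """Empirical sd of the estimated biomarker-treatment interaction over many trials."""
    est = []
    for _ in range(reps):
        trt = rng.integers(0, 2, n)               # randomized treatment arm
        bio = rng.normal(size=n)                  # biomarker
        prog = rng.normal(size=n)                 # further prognostic variable
        y = 0.5 * bio * trt + 2.0 * prog + rng.normal(size=n)
        cols = [np.ones(n), trt, bio, bio * trt]
        if adjust:
            cols.append(prog)                     # include the prognostic covariate
        beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
        est.append(beta[3])                       # interaction coefficient
    return float(np.std(est))

sd_unadj = interaction_se(adjust=False)
sd_adj = interaction_se(adjust=True)
```

Because the prognostic variable explains much of the outcome variance, adjusting for it shrinks the residual error and hence the spread of the interaction estimate, mirroring the power gain the authors report.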
International Nuclear Information System (INIS)
Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.
2016-01-01
Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and one year ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out in real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider that the number of macroeconomic variables used for prediction is a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both: the number of variables and the best ones. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments in a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
Estimation of genetic variability level in inbred CF1 mouse lines ...
Indian Academy of Sciences (India)
To estimate the genetic variability levels maintained by inbred lines selected for body weight and to compare them with a nonselected population from which the lines were derived, we calculated the per cent polymorphic loci (P) and marker diversity (MD) index from data on 43 putative loci of inter simple sequence repeats ...
Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.
2018-01-01
Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…
Boeren, F.A.J.; Bruijnen, D.J.H.; Oomen, T.A.E.
2017-01-01
Feedforward control enables high performance of a motion system. Recently, algorithms have been proposed that eliminate bias errors in tuning the parameters of a feedforward controller. The aim of this paper is to develop a new algorithm that combines unbiased parameter estimates with optimal
Introducing instrumental variables in the LS-SVM based identification framework
Laurain, V.; Zheng, W-X.; Toth, R.
2011-01-01
Least-Squares Support Vector Machines (LS-SVM) represent a promising approach to identify nonlinear systems via nonparametric estimation of the nonlinearities in a computationally and stochastically attractive way. All the methods dedicated to the solution of this problem rely on the minimization of
Duda, David P.; Stephens, Graeme L.; Cox, Stephen K.
1990-01-01
Measurements of longwave and shortwave radiation were made using an instrument package on the NASA tethered balloon during the FIRE Marine Stratocumulus experiment. Radiation data from two pairs of pyranometers were used to obtain vertical profiles of the near-infrared and total solar fluxes through the boundary layer, while a pair of pyrgeometers supplied measurements of the longwave fluxes in the cloud layer. The radiation observations were analyzed to determine heating rates and to measure the radiative energy budget inside the stratocumulus clouds during several tethered balloon flights. The radiation fields in the cloud layer were also simulated by a two-stream radiative transfer model, which used cloud optical properties derived from microphysical measurements and Mie scattering theory.
Development of Instrumentation for Direct Validation of Regional Carbon Flux Estimates
National Aeronautics and Space Administration — We are pursuing three tasks under internal research and development: 1) procure a state-of-the-art, commercial instrument for measuring atmospheric methane (CH4) in...
Directory of Open Access Journals (Sweden)
Dirk Temme
2008-12-01
Integrated choice and latent variable (ICLV) models represent a promising new class of models which merge classic choice models with the structural equation approach (SEM) for latent variables. Despite their conceptual appeal, applications of ICLV models in marketing remain rare. We extend previous ICLV applications by first estimating a multinomial choice model and, second, by estimating hierarchical relations between latent variables. An empirical study on travel mode choice clearly demonstrates the value of ICLV models to enhance the understanding of choice processes. In addition to the usually studied directly observable variables such as travel time, we show how abstract motivations such as power and hedonism as well as attitudes such as a desire for flexibility impact on travel mode choice. Furthermore, we show that it is possible to estimate such a complex ICLV model with the widely available structural equation modeling package Mplus. This finding is likely to encourage more widespread application of this appealing model class in the marketing field.
International Nuclear Information System (INIS)
Hoffman, F.O.; Gardner, R.H.; Eckerman, K.F.
1982-06-01
Dose predictions for the ingestion of 90Sr and 137Cs, using aquatic and terrestrial food chain transport models similar to those in the Nuclear Regulatory Commission's Regulatory Guide 1.109, are evaluated through estimating the variability of model parameters and determining the effect of this variability on model output. The variability in the predicted dose equivalent is determined using analytical and numerical procedures. In addition, a detailed discussion of 90Sr dosimetry is included. The overall estimates of uncertainty are most relevant to conditions where site-specific data are unavailable and when model structure and parameter estimates are unbiased. Based on the comparisons performed in this report, it is concluded that the use of the generic default parameters in Regulatory Guide 1.109 will usually produce conservative dose estimates that exceed the 90th percentile of the predicted distribution of dose equivalents. An exception is the meat pathway for 137Cs, in which use of generic default values results in a dose estimate at the 24th percentile. Among the terrestrial pathways of exposure, the non-leafy vegetable pathway is the most important for 90Sr. For 90Sr, the parameters for soil retention, soil-to-plant transfer, and internal dosimetry contribute most significantly to the variability in the predicted dose for the combined exposure to all terrestrial pathways. For 137Cs, the meat transfer coefficient, the mass interception factor for pasture forage, and the ingestion dose factor are the most important parameters. The freshwater finfish bioaccumulation factor is the most important parameter for the dose prediction of 90Sr and 137Cs transported over the water-fish-man pathway.
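The parameter-uncertainty propagation described here is commonly done by Monte Carlo sampling over a multiplicative food-chain model. The sketch below uses made-up lognormal parameter distributions (not the report's values) to show how a percentile of the predicted dose distribution compares with a point estimate built from the parameter medians.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Hypothetical multiplicative ingestion-dose chain: dose = intake * transfer * dose_factor
intake = rng.lognormal(mean=np.log(200.0), sigma=0.3, size=n)      # Bq/yr (assumed)
transfer = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)     # transfer factor (assumed)
dose_factor = rng.lognormal(mean=np.log(1e-8), sigma=0.4, size=n)  # Sv/Bq (assumed)

dose = intake * transfer * dose_factor
point_estimate = 200.0 * 0.01 * 1e-8          # product of the parameter medians
p50, p90 = np.quantile(dose, [0.50, 0.90])
```

For a product of lognormal factors, the median of the dose distribution matches the product of the medians, while the 90th percentile sits well above it; comparing a deterministic default-parameter prediction against such percentiles is exactly the kind of check the report performs.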
A variable stiffness mechanism for steerable percutaneous instruments: integration in a needle.
De Falco, Iris; Culmone, Costanza; Menciassi, Arianna; Dankelman, Jenny; van den Dobbelsteen, John J
2018-06-04
Needles are advanced tools commonly used in minimally invasive medical procedures. The accurate manoeuvrability of flexible needles through soft tissues is strongly determined by variations in tissue stiffness, which affects the needle-tissue interaction and thus causes needle deflection. This work presents a variable stiffness mechanism for percutaneous needles capable of compensating for variations in tissue stiffness and undesirable trajectory changes. It is composed of compliant segments and rigid plates alternately connected in series and longitudinally crossed by four cables. The tensioning of the cables allows the omnidirectional steering of the tip and the stiffness tuning of the needle. The mechanism was tested separately under different working conditions, demonstrating a capability to exert up to 3.6 N. Afterwards, the mechanism was integrated into a needle, and the overall device was tested in gelatine phantoms simulating the stiffness of biological tissues. The needle demonstrated the capability to vary deflection (from 11.6 to 4.4 mm) and adapt to the inhomogeneity of the phantoms (from 21 to 80 kPa) depending on the activation of the variable stiffness mechanism.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
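The core of such a back analysis for the one-compartment, IV-bolus case can be sketched with the standard non-compartmental relations (CL = dose/AUC, k = ln 2 / t½, V = CL/k). This is an illustrative sketch of the idea, not the authors' Excel/Solver implementation; the function name and example values are hypothetical:

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """Back-calculate one-compartment IV-bolus parameters from
    non-compartmental variables, using CL = dose/AUC, k = ln2/t_half,
    V = CL/k. Hypothetical helper illustrating the idea; not the
    authors' Excel/Solver spreadsheet."""
    k = math.log(2) / t_half        # elimination rate constant (1/h)
    cl = dose / auc                 # clearance (L/h)
    v = cl / k                      # apparent volume of distribution (L)
    return {"k": k, "CL": cl, "V": v}

# Hypothetical values: 100 mg dose, AUC = 50 mg*h/L, half-life 4 h
params = one_compartment_from_nca(dose=100.0, auc=50.0, t_half=4.0)
```

The two-compartment case requires numerical optimization (hence Solver in the paper), but the one-compartment relations above invert in closed form.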
Tyre effective radius and vehicle velocity estimation: a variable structure observer solution
International Nuclear Information System (INIS)
El Tannoury, C.; Plestan, F.; Moussaoui, S.; Romani, N. (Renault)
2011-01-01
This paper proposes an application of a variable structure observer to the estimation of the wheel effective radius and velocity of automotive vehicles. This observer is based on a high-order sliding-mode approach allowing robustness and finite-time convergence. Its originality consists in assuming a nonlinear relation between the slip ratio and the friction coefficient and providing an estimation of both variables, wheel radius and vehicle velocity, from measurements of wheel angular velocity and torque. Since these signals are available on most modern vehicle CAN (Controller Area Network) buses, this system does not require additional sensors. A simulation example is given to illustrate the relevance of this approach.
Directory of Open Access Journals (Sweden)
Amalia Novoa Hoyos
2016-06-01
This article presents a first estimate of the relationship between investment in digital media and some financial variables in Colombia. First, a literature review is made of the impact of marketing and digital marketing on company performance. Then, sectorial variables such as liquidity, profitability, indebtedness and concentration in sectors such as food, personal grooming, automotive, drink and tobacco, construction, entertainment, furniture, services, telecommunications, tourism and clothing are analysed using ordinary least squares (OLS) for the years 2011, 2012, 2013 and 2014. For this study, investment in digital media in the above-mentioned years is also taken into account.
Hoogerheide, L.F.; Kaashoek, J.F.; van Dijk, H.K.
2007-01-01
Likelihoods and posteriors of instrumental variable (IV) regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating posterior
L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)
2005-01-01
Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours
Sleep Quality Estimation based on Chaos Analysis for Heart Rate Variability
Fukuda, Toshio; Wakuda, Yuki; Hasegawa, Yasuhisa; Arai, Fumihito; Kawaguchi, Mitsuo; Noda, Akiko
In this paper, we propose an algorithm to estimate sleep quality based on heart rate variability using chaos analysis. Polysomnography (PSG) is a conventional and reliable system for diagnosing sleep disorders and evaluating their severity and therapeutic effect by estimating sleep quality from multiple channels. However, the recording process requires a lot of time and a controlled measurement environment, and analyzing PSG data is hard work because the huge volume of sensed data must be evaluated manually. Meanwhile, attention has recently turned to people who make mistakes or cause accidents owing to the loss of regular sleep and homeostasis. A simple home system for checking one's own sleep is therefore required, along with an estimation algorithm for such a system. We therefore propose an algorithm that estimates sleep quality based only on heart rate variability, which can be measured in an uncontrolled environment by a simple sensor such as a pressure sensor or an infrared sensor, by experimentally finding the relationship between chaos indices and sleep quality. A system including the estimation algorithm can inform a user of the patterns and quality of their daily sleep, so that the user can arrange their life schedule in advance, pay more attention to the sleep results, and consult a doctor.
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2001-07-01
Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
Parameter estimation of variable-parameter nonlinear Muskingum model using excel solver
Kang, Ling; Zhou, Liwei
2018-02-01
The Muskingum model is an effective flood routing technology in hydrology and water resources engineering. With the development of optimization technology, more and more variable-parameter Muskingum models have been presented in recent decades to improve the effectiveness of the Muskingum model. A variable-parameter nonlinear Muskingum model (NVPNLMM) is proposed in this paper. In two real and frequently used case studies, the NVPNLMM obtained better values of the evaluation criteria used to compare the accuracy of flood routing across models, and its optimal estimated outflows were closer to the observed outflows than those of other models.
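As an illustration of the class of models discussed, a nonlinear Muskingum storage relation S = K[xI + (1 − x)O]^m can be routed with a simple explicit state update. This is a generic sketch with made-up parameter values and inflows, not the paper's NVPNLMM or its calibrated parameters:

```python
def route_nonlinear_muskingum(inflow, K, x, m, dt=1.0):
    """Route a hydrograph through the nonlinear Muskingum storage
    relation S = K * (x*I + (1 - x)*O)**m using an explicit Euler
    update of the continuity equation dS/dt = I - O."""
    outflow = [float(inflow[0])]         # assume initial steady state (O = I)
    S = K * inflow[0] ** m               # storage consistent with that state
    for I in inflow[1:]:
        # invert the storage relation for the current outflow
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        S += dt * (I - O)                # continuity
        outflow.append(O)
    return outflow

# Hypothetical inflow hydrograph (m^3/s) and made-up parameters
hydrograph = route_nonlinear_muskingum(
    [22, 23, 35, 71, 103, 111, 109, 100], K=0.5, x=0.2, m=1.8)
```

In a variable-parameter formulation, K, x, and m would themselves change over the routing period; calibration then amounts to optimizing those parameters against observed outflows.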
Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.
2011-12-01
Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths we performed a multiple linear regression (MLR) at a 7.3 by 7.3 km study area (5 × 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment scale, however, where additional MLR analyses showed that the influence of aspect and elevation differed between the catchments. The strong influence of north-south aspect in most catchments indicated that solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of
International Nuclear Information System (INIS)
Turtos, L.; Sanchez, M.; Roque, A.; Soltura, R.
2003-01-01
Methodology for the estimation of secondary meteorological variables to be used in local dispersion modelling of air pollutants. This paper covers the main work carried out within the framework of the project 'Atmospheric environmental externalities of electricity generation in Cuba', which aims to develop methodologies and corresponding software to improve the quality of the secondary meteorological data used in atmospheric pollutant calculations; specifically the wind profile coefficients, urban and rural mixing heights, and temperature gradients.
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in the temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards over different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete-time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
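For the comparison baseline, a Lomb-Scargle periodogram of an unevenly sampled signal can be computed directly with SciPy; the synthetic 0.1 Hz oscillation below stands in for a low-frequency heart rate variability component (the paper's RFT itself is not reproduced here):

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled signal standing in for an HRV tachogram:
# a pure 0.1 Hz oscillation observed at 300 random times over 100 s
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 300))
y = np.sin(2 * np.pi * 0.1 * t)
y -= y.mean()                                    # centre the data first

freqs_hz = np.linspace(0.01, 0.5, 500)
pgram = lombscargle(t, y, 2 * np.pi * freqs_hz)  # takes angular frequencies

peak_hz = freqs_hz[np.argmax(pgram)]             # recovers ~0.1 Hz
```

Evaluating the periodogram at 500 frequencies for each new sample is what makes the LST expensive in an iterative setting; the RFT's per-update cost of order N is the paper's contribution.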
THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON
Directory of Open Access Journals (Sweden)
Gabriel da Silva Oliveira
2015-12-01
This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The amounts estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance was satisfactory for applying the method in forest inventories for management plans in the Amazon.
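The quadrants (point-centred quarter) method estimates density from the nearest-tree distances recorded in four quadrants around each sample point. A minimal sketch using the classical Cottam-Curtis estimator, with entirely hypothetical distances:

```python
def pcq_density(distances_by_point):
    """Point-centred quarter (quadrants) density estimate: with one
    nearest-tree distance per quadrant at each sample point, the
    classical Cottam-Curtis estimator gives trees per unit area as
    1 / (mean distance)^2. Sketch only; not the paper's workflow."""
    d = [dist for point in distances_by_point for dist in point]
    mean_d = sum(d) / len(d)
    return 1.0 / mean_d ** 2

# Hypothetical nearest-tree distances (m), 4 quadrants x 3 sample points
density_per_m2 = pcq_density([[5, 7, 6, 8], [9, 4, 6, 5], [7, 6, 5, 7]])
density_per_ha = density_per_m2 * 10_000      # trees per hectare
```

Basal area and volume per hectare follow by multiplying the estimated density by the mean basal area and mean volume of the measured trees, which is where the method's bias for those variables can enter.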
Directory of Open Access Journals (Sweden)
Rafdzah Zaki
2013-06-01
Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched between 2007 and 2009 to look for reliability studies. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) is the most popular method, with 25 (60%) studies having used it, followed by comparing means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This study finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to perform reliability analyses correctly.
Jones, Adam G
2015-11-01
Bateman's principles continue to play a major role in the characterization of genetic mating systems in natural populations. The modern manifestations of Bateman's ideas include the opportunity for sexual selection (i.e. I(s) - the variance in relative mating success), the opportunity for selection (i.e. I - the variance in relative reproductive success) and the Bateman gradient (i.e. β(ss) - the slope of the least-squares regression of reproductive success on mating success). These variables serve as the foundation for one convenient approach for the quantification of mating systems. However, their estimation presents at least two challenges, which I address here with a new Windows-based computer software package called BATEMANATER. The first challenge is that confidence intervals for these variables are not easy to calculate. BATEMANATER solves this problem using a bootstrapping approach. The second, more serious, problem is that direct estimates of mating system variables from open populations will typically be biased if some potential progeny or adults are missing from the analysed sample. BATEMANATER addresses this problem using a maximum-likelihood approach to estimate mating system variables from incompletely sampled breeding populations. The current version of BATEMANATER addresses the problem for systems in which progeny can be collected in groups of half- or full-siblings, as would occur when eggs are laid in discrete masses or offspring occur in pregnant females. BATEMANATER has a user-friendly graphical interface and thus represents a new, convenient tool for the characterization and comparison of genetic mating systems. © 2015 John Wiley & Sons Ltd.
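The three quantities can be computed directly, and a percentile bootstrap gives the kind of confidence intervals that BATEMANATER automates. The sketch below uses synthetic data and plain least squares; it illustrates the definitions, not BATEMANATER's maximum-likelihood estimator for incompletely sampled populations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic mating data (hypothetical, for illustration only):
# ms = mating success, rs = reproductive success of 100 males
ms = rng.poisson(2.0, 100) + 1       # +1 avoids a degenerate all-zero draw
rs = 3 * ms + rng.poisson(1.0, 100)

# Opportunity for sexual selection I_s: variance in relative mating success
i_s = ms.var() / ms.mean() ** 2

def bateman_gradient(ms, rs):
    """Slope of the least-squares regression of relative reproductive
    success on relative mating success (beta_ss)."""
    slope, _ = np.polyfit(ms / ms.mean(), rs / rs.mean(), 1)
    return slope

# Percentile bootstrap CI for beta_ss, in the spirit of BATEMANATER
boot = []
for _ in range(1000):
    idx = rng.integers(0, ms.size, ms.size)
    boot.append(bateman_gradient(ms[idx], rs[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The opportunity for selection I is computed the same way as I_s but from reproductive success; the bias problem the abstract describes arises because unsampled adults and progeny distort all three of these sample statistics.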
Directory of Open Access Journals (Sweden)
Oleksandr Makeyev
2016-06-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.
Makeyev, Oleksandr; Besio, Walter G.
2016-01-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
International Nuclear Information System (INIS)
Schwarz, G.; Dunning, D.E. Jr.
1982-01-01
An attempt has been made to quantify the variability in human biological parameters determining dose to man from ingestion of a unit activity of soluble 137Cs and the resulting imprecision in the predicted total-body dose commitment. The analysis is based on an extensive review of the literature along with the application of statistical methods to determine parameter variability, correlations between parameters, and predictive imprecision. The variability in the principal biological parameters involved (biological half-time and total-body mass) can be described by a geometric standard deviation of 1.2-1.5 for adults and 1.6-1.9 for children/adolescents of age 0.1-18 yr. The estimated predictive imprecision (using a Monte Carlo technique) in the total-body dose commitment from ingested 137Cs can be described by a geometric standard deviation on the order of 1.3-1.4, meaning that the 99th percentile of the predicted distribution of dose is within approximately 2.1 times the mean value. The mean dose estimate is 0.009 Sv/MBq (34 mrem/μCi) for children/adolescents and 0.01 Sv/MBq (38 mrem/μCi) for adults. Little evidence of age dependence in the total-body dose from ingested 137Cs is observed. (author)
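The reported ratio of the 99th percentile to the mean follows from the lognormal assumption and can be checked with a small Monte Carlo simulation; the GSD of 1.4 below is taken from the upper end of the reported 1.3-1.4 range, and the mean is the adult value from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed lognormal distribution of dose per unit intake:
# GSD = 1.4 (upper end of the reported 1.3-1.4), mean = 0.01 Sv/MBq
gsd = 1.4
sigma = np.log(gsd)                           # log-space standard deviation
mean_dose = 0.01                              # Sv/MBq
mu = np.log(mean_dose) - 0.5 * sigma ** 2     # makes the arithmetic mean = mean_dose

doses = rng.lognormal(mu, sigma, 100_000)
p99_over_mean = np.percentile(doses, 99) / doses.mean()
```

For a lognormal, P99/mean = exp(2.326·σ − σ²/2), which for a GSD of 1.4 is close to the "approximately 2.1 times the mean" quoted in the abstract.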
Habibov, Nazim; Cheung, Alex; Auchynnikava, Alena
2017-09-01
The purpose of this paper is to investigate the effect of social trust on the willingness to pay more taxes to improve public healthcare in post-communist countries. The well-documented association between higher levels of social trust and better health has traditionally been assumed to reflect the notion that social trust is positively associated with support for public healthcare system through its encouragement of cooperative behaviour, social cohesion, social solidarity, and collective action. Hence, in this paper, we have explicitly tested the notion that social trust contributes to an increase in willingness to financially support public healthcare. We use micro data from the 2010 Life-in-Transition survey (N = 29,526). Classic binomial probit and instrumental variables ivprobit regressions are estimated to model the relationship between social trust and paying more taxes to improve public healthcare. We found that an increase in social trust is associated with a greater willingness to pay more taxes to improve public healthcare. From the perspective of policy-making, healthcare administrators, policy-makers, and international donors should be aware that social trust is an important factor in determining the willingness of the population to provide much-needed financial resources to supporting public healthcare. From a theoretical perspective, we found that estimating the effect of trust on support for healthcare without taking confounding and measurement error problems into consideration will likely lead to an underestimation of the true effect of trust. Copyright © 2017 Elsevier Ltd. All rights reserved.
Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H
2016-10-01
The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and inter-individual variability in glycemic response to a single food challenge and methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m^2): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value
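GI values of this kind are computed from incremental areas under the glucose curve, ignoring area below the fasting baseline. The sketch below uses the trapezoidal rule with below-baseline increments clipped to zero, a simplification of the ISO 26642 procedure, and entirely hypothetical glucose readings:

```python
def iauc(times, glucose, baseline=None):
    """Incremental area under the glucose curve (trapezoidal rule),
    ignoring area below the fasting baseline. Clipping increments to
    zero is a simplification of the ISO 26642 treatment of segments
    that cross the baseline."""
    base = glucose[0] if baseline is None else baseline
    inc = [max(g - base, 0.0) for g in glucose]
    return sum(0.5 * (inc[i - 1] + inc[i]) * (times[i] - times[i - 1])
               for i in range(1, len(times)))

# Hypothetical 2-h responses (mmol/L) to 50 g available carbohydrate
t = [0, 15, 30, 45, 60, 90, 120]                   # minutes post-ingestion
glucose_ref = [5.0, 7.8, 8.9, 8.2, 7.1, 5.9, 5.1]  # glucose reference drink
bread = [5.0, 6.9, 7.8, 7.4, 6.6, 5.7, 4.9]        # white bread test meal

gi_white_bread = 100.0 * iauc(t, bread) / iauc(t, glucose_ref)
```

Because GI is a ratio of two noisy areas, day-to-day variation in either curve propagates directly into the GI value, which is one route by which the intra-individual CVs reported above arise.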
FEATURES OF AN ESTIMATION OF INVESTMENT PROJECTS AT THE ENTERPRISES OF AVIATION INSTRUMENT
Directory of Open Access Journals (Sweden)
Petr P. Dobrov
2016-01-01
The relevance of this study is due to the fact that the current situation in Russia is aggravated by the negative effects of market reforms in the economy and by the economic sanctions adopted against the country, which affect companies at various levels. In view of this, issues related to the assessment of investment projects are highly relevant for effectively managing the activities and development of aviation instrument-making companies and enterprises of different ownership forms. The general crisis that engulfed almost all industry in Russia demanded a new ideology for the organization and management of investment projects, as well as for their assessment at aviation instrument-making enterprises. In Russia, a new stage in the development of project management has begun: the establishment of a domestic methodology, a complex of tools, and training for professional project management based on domestic achievements, global experience, and their creative adaptation to the actual conditions of the country. The need for project management methodology in Russia is determined by two factors: the increasing complexity of projects and of the organizations that carry them out, and the fact that project management is widely used in countries with market economies. Projects at aviation instrument-making enterprises, and their evaluation, are characterized by complexity and uncertainty and by a significant dependence on a dynamic environment, including socio-economic, political, financial, economic, and legislative influences of both the state and competing companies. This paper presents a study of modern methods for evaluating investment projects at aviation instrument-making enterprises. Methodology. The methodological basis of this paper comprises comparative and economic-mathematical analysis methods. Results. In the course of this article the author found that the activity of modern companies is not linear and is
Variability in abundance of temperate reef fishes estimated by visual census.
Directory of Open Access Journals (Sweden)
Alejo J Irigoyen
Identifying sources of sampling variation and quantifying their magnitude is critical to the interpretation of ecological field data. Yet, most monitoring programs of reef fish populations based on underwater visual censuses (UVC) consider only a few of the factors that may influence fish counts, such as the diver or census methodology. Recent studies, however, have drawn attention to a broader range of processes that introduce variability at different temporal scales. This study analyzes the magnitude of different sources of variation in UVCs of temperate reef fishes off Patagonia (Argentina). The variability associated with time-of-day, tidal state, and time elapsed between censuses (minutes, days, weeks and months) was quantified for censuses conducted on the five most conspicuous and common species: Pinguipes brasilianus, Pseudopercis semifasciata, Sebastes oculatus, Acanthistius patachonicus and Nemadactylus bergi. Variance components corresponding to spatial heterogeneity and to the different temporal scales were estimated using nested random models. The levels of variability estimated for the different species were related to their life history attributes and behavior. Neither time-of-day nor tidal state had a significant effect on counts, except for the influence of tide on P. brasilianus. Spatial heterogeneity was the dominant source of variance in all but one species. Among the temporal scales, the intra-annual variation was the highest component for most species due to marked seasonal fluctuations in abundance, followed by the weekly and the instantaneous variation; the daily component was not significant. The variability between censuses conducted at different tidal levels and time-of-day was similar in magnitude to the instantaneous variation, reinforcing the conclusion that stochastic variation at very short time scales is non-negligible and should be taken into account in the design of monitoring programs and experiments. The present
BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods
Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.
2017-12-01
Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with high rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in frequency and intensity of heavy rainfall events in many areas and an ongoing urbanization may increase pluvial flood losses in the future. For an efficient risk assessment and adaptation to pluvial floods, a quantification of the flood risk is needed. Few loss models have been developed particularly for pluvial floods. These models usually use simple water-level- or rainfall-loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In a first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In a second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made
Directory of Open Access Journals (Sweden)
Prashant K. Srivastava
2017-10-01
Reference evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding hydrological processes, particularly in the context of sustainable water-use efficiency across the globe. Precise estimation of ETo and SMD is required for developing appropriate forecasting systems, in hydrological modeling and also in precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo using the boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). In order to understand the performance, Hamon's method is employed to estimate ETo using the temperature from a meteorological station and the WRF-derived variables. After estimating ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics such as RMSE, %Bias and Nash–Sutcliffe efficiency (NSE) indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation using the observed ETo, in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE range = 0.013–0.419) used in this study. On the other hand, in the scenario where SMD is estimated using ETo based on WRF-downscaled meteorological variables, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) as compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207 to −6.088; NSE range = 0.013–0.149). Our findings also suggest that all the models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982 to −3.431; r = 0.245–0.281) than the non-growing season (RMSE range = 0.011–0.12; %Bias range = 33.073–32.701; r = 0.161–0.244) for SMD estimation.
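The skill scores used throughout the comparison above (RMSE, percent bias and Nash–Sutcliffe efficiency) have standard definitions; a minimal sketch in Python, where the function names and toy numbers are ours rather than the study's:

```python
import numpy as np

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def pbias(obs, sim):
    """Percent bias: positive when the model over-estimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sum(sim - obs) / np.sum(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better than the mean of obs."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2))

obs = [0.30, 0.25, 0.40, 0.35, 0.20]   # e.g. observed SMD (invented values)
sim = [0.28, 0.27, 0.36, 0.33, 0.24]   # e.g. modelled SMD (invented values)
print(rmse(obs, sim), pbias(obs, sim), nse(obs, sim))
```

Note that NSE compares squared errors against the variance of the observations, which is why a model can have a small RMSE yet a poor NSE on a flat series.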
DEFF Research Database (Denmark)
Richardson, Katherine; Bo Pedersen, Flemming
1998-01-01
By coupling knowledge of oceanographic processes and phytoplankton responses to light and nutrient availability, we estimate a total potential new (sensu Dugdale and Goering, 1967) production for the North Sea of approximately 15.6 million tons C per year. In a typical year, about 40% of this production will be associated with the spring bloom in the surface waters of the seasonally stratified (central and northern) North Sea. About 40% is predicted to occur in the coastal waters, while the remaining new production is predicted to take place in sub-surface chlorophyll peaks occurring in association with fronts in the North Sea during summer months. By considering the inter-annual variation in heat, wind and nutrient availability (light and tidal energy input are treated as non-varying from year to year), the inter-annual variability in the new production occurring in these different regions is estimated.
Di Nuovo, Alessandro G; Di Nuovo, Santo; Buono, Serafino
2012-02-01
The estimation of a person's intelligence quotient (IQ) by means of psychometric tests is indispensable in the application of psychological assessment to several fields. When complex tests such as the Wechsler scales, which are the most commonly used and universally recognized instruments for diagnosing degrees of retardation, are not applicable, it is necessary to use other psycho-diagnostic tools better suited to the subject's specific condition. To ensure a homogeneous diagnosis, however, it is necessary to reach a common metric; thus, the aim of our work is to build models able to estimate the Wechsler IQ accurately and reliably, starting from different psycho-diagnostic tools. Four psychometric tests (Leiter international performance scale; coloured progressive matrices test; the mental development scale; psycho-educational profile), along with the Wechsler scale, were administered to a group of 40 mentally retarded subjects with various pathologies, and control persons. The resulting database is used to evaluate Wechsler IQ estimation models starting from the scores obtained in the other tests. Five modelling methods, two statistical and three machine-learning methods belonging to the family of artificial neural networks (ANNs), are employed to build the estimator. Several error metrics for estimated IQ and for retardation-level classification are defined to compare the performance of the various models with univariate and multivariate analyses. Eight empirical studies show that, after ten-fold cross-validation, the best average estimation error is 3.37 IQ points and the mental retardation level classification error is 7.5%. Furthermore, our experiments show the superior performance of ANN methods over statistical regression ones, because in all cases considered ANN models show the lowest estimation error (from 0.12 to 0.9 IQ points) and the lowest classification error (from 2.5% to 10%). Since the estimation performance is better than the confidence interval of
Instrumental variable analysis
Stel, Vianda S.; Dekker, Friedo W.; Zoccali, Carmine; Jager, Kitty J.
2013-01-01
The main advantage of the randomized controlled trial (RCT) is the random assignment of treatment, which prevents selection by prognosis. Nevertheless, only a few RCTs can be performed given their high cost and the difficulties in conducting such studies. Therefore, several analytical methods for
Alvine, Gregory F; Swain, James M; Asher, Marc A; Burton, Douglas C
2004-08-01
The controversy of burst fracture surgical management is addressed in this retrospective case study and literature review. The series consisted of 40 consecutive patients, index included, with 41 fractures treated with stiff, limited segment transpedicular bone-anchored instrumentation and arthrodesis from 1987 through 1994. No major acute complications such as death, paralysis, or infection occurred. For the 30 fractures with pre- and postoperative computed tomography studies, spinal canal compromise was 61% and 32%, respectively. Neurologic function improved in 7 of 14 patients (50%) and did not worsen in any. The principal problem encountered was screw breakage, which occurred in 16 of the 41 (39%) instrumented fractures. As we have previously reported, transpedicular anterior bone graft augmentation significantly decreased variable screw placement (VSP) implant breakage. However, it did not prevent Isola implant breakage in two-motion segment constructs. Compared with VSP, Isola provided better sagittal plane realignment and constructs that have been found to be significantly stiffer. Unplanned reoperation was necessary in 9 of the 40 patients (23%). At 1- and 2-year follow-up, 95% and 79% of patients were available for study, and a satisfactory outcome was achieved in 84% and 79%, respectively. These satisfaction and reoperation rates are consistent with the literature of the time. Based on these observations and the loads to which implant constructs are exposed following posterior realignment and stabilization of burst fractures, we recommend that three- or four-motion segment constructs, rather than two motion, be used. To save valuable motion segments, planned construct shortening can be used. An alternative is sequential or staged anterior corpectomy and structural grafting.
Estimating discharge using multi-level velocity data from acoustic doppler instruments
DEFF Research Database (Denmark)
Poulsen, Jane Bang; Rasmussen, Keld Rømer; Ovesen, Niels Bering
In the majority of Danish streams, weed growth affects the effective stream width and bed roughness, and thus imposes temporal variations on the stage-discharge relationship. Small stream gradients and firm ecology-based restrictions prevent hydraulic structures from being built at the discharge stations… increases to more than 3 m. The Doppler instruments (Nortek) are placed on a vertical pole about 2 m off the right bank at three fixed elevations above the streambed (0.3, 0.6, and 1.3 m); the beams point horizontally towards the left bank, perpendicular to the average flow direction. At each depth, the Doppler sensor records 10-minute average stream velocities in the central 10 m section of the stream. During summer periods with low flow, stream velocity has only been recorded at two depths since the water table drops below the uppermost sensor. A pressure transducer is also placed at the pole where…
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
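As an illustration of the percentile bootstrap the study evaluates, here is a sketch for a simple observed-variable mediation model. The study itself uses latent variables fitted in OpenMx; all names and simulation settings below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple X -> M -> Y mediation model with observed variables.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # path a = 0.5
y = 0.5 * m + 0.2 * x + rng.normal(size=n)  # path b = 0.5, direct effect c' = 0.2

def indirect(x, m, y):
    """Product-of-coefficients estimate a*b from two OLS fits."""
    a = np.linalg.lstsq(np.c_[np.ones_like(x), x], m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.c_[np.ones_like(x), x, m], y, rcond=None)[0][2]
    return a * b

# Percentile (PC) bootstrap: resample cases, take empirical 2.5/97.5 quantiles.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ab = {indirect(x, m, y):.3f}, 95% PC CI = [{lo:.3f}, {hi:.3f}]")
```

The bias-corrected variants differ only in how the quantile cut-points are chosen, which is exactly where the inflated Type I error reported above enters.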
Brazdil, Rudolf
2016-04-01
Hydrological and meteorological extremes (HMEs) in Central Europe during the past 500 years can be reconstructed based on instrumental and documentary data. Documentary data about weather and related phenomena represent the basic source of information for historical climatology and hydrology, which deal with the reconstruction of past climate and HMEs, their perception and their impacts on human society. The paper presents the basic distribution of documentary data on (i) direct descriptions of HMEs and their proxies on the one hand and (ii) individual and institutional data sources on the other. Several groups of documentary evidence, such as narrative written records (annals, chronicles, memoirs), visual daily weather records, official and personal correspondence, special prints, financial and economic records (with particular attention to taxation data), newspapers, pictorial documentation, chronograms, epigraphic data, early instrumental observations, and early scientific papers and communications, are demonstrated with respect to extracting information about HMEs, usually concerning their occurrence, severity, seasonality, meteorological causes, perception and human impacts. The paper further presents an analysis of the 500-year variability of floods, droughts and windstorms on the basis of series created by combining documentary and instrumental data. Results, advantages and drawbacks of such an approach are documented with examples from the Czech Lands. The analysis of floods concentrates on the River Vltava (Prague) and the River Elbe (Děčín), which show the highest frequency of floods occurring in the 19th century (mainly of winter synoptic type) and in the second half of the 16th century (summer synoptic type). Also reported are the most disastrous floods (August 1501, March and August 1598, February 1655, June 1675, February 1784, March 1845, February 1862, September 1890, August 2002) and the European context of floods in the severe winter 1783/84. Drought
Directory of Open Access Journals (Sweden)
Kori Blankenship
2015-04-01
Reference ecological conditions offer important context for land managers as they assess the condition of their landscapes, and provide benchmarks for desired future conditions. State-and-transition simulation models (STSMs) are commonly used to estimate reference conditions that can be used to evaluate current ecosystem conditions and to guide land management decisions and activities. The LANDFIRE program created more than 1,000 STSMs and used them to assess departure from a mean reference value for ecosystems in the United States. While the mean provides a useful benchmark, land managers and researchers are often interested in the range of variability around the mean. This range, frequently referred to as the historical range of variability (HRV), offers model users improved understanding of ecosystem function, more information with which to evaluate ecosystem change and potentially greater flexibility in management options. We developed a method for using LANDFIRE STSMs to estimate the HRV around the mean reference condition for each model state in ecosystems by varying the fire probabilities. The approach is flexible and can be adapted for use in a variety of ecosystems. HRV analysis can be combined with other information to help guide complex land management decisions.
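The idea of deriving an HRV band by varying fire probabilities can be illustrated with a deliberately tiny two-state succession model. This is our toy sketch, not a LANDFIRE STSM, and all probabilities are invented:

```python
# One "early" and one "late" succession state, with annual succession
# probability s (early -> late) and fire probability f (late -> early).
# For this two-state Markov chain the long-run fraction of the landscape
# in the late state is s / (s + f); sweeping f over a plausible range
# yields a range of variability around the mean reference condition.
def late_fraction(s, f):
    return s / (s + f)

s = 0.05                           # succession probability per year
fires = [0.005, 0.01, 0.02, 0.04]  # candidate fire probabilities per year
fractions = [late_fraction(s, f) for f in fires]
hrv_low, hrv_high = min(fractions), max(fractions)
print(f"late-state fraction ranges from {hrv_low:.2f} to {hrv_high:.2f}")
```

Real STSMs have many states and stochastic replicates, but the output has the same shape: a band of state proportions around the mean reference value rather than a single number.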
Estimation of indirect effect when the mediator is a censored variable.
Wang, Jian; Shete, Sanjay
2017-01-01
A mediation model explores the direct and indirect effects of an initial variable (X) on an outcome variable (Y) by including a mediator (M). In many realistic scenarios, investigators observe censored data instead of the complete data. Current research in mediation analysis for censored data focuses mainly on censored outcomes, but not censored mediators. In this study, we proposed a strategy based on the accelerated failure time model and a multiple imputation approach. We adapted a measure of the indirect effect for the mediation model with a censored mediator, which can assess the indirect effect at both the group and individual levels. Based on simulation, we established the bias in the estimations of different paths (i.e. the effects of X on M [a], of M on Y [b] and of X on Y given mediator M [c']) and indirect effects when analyzing the data using the existing approaches, including a naïve approach implemented in software such as Mplus, complete-case analysis, and the Tobit mediation model. We conducted simulation studies to investigate the performance of the proposed strategy compared to that of the existing approaches. The proposed strategy accurately estimates the coefficients of different paths, indirect effects and percentages of the total effects mediated. We applied these mediation approaches to the study of SNPs, age at menopause and fasting glucose levels. Our results indicate that there is no indirect effect of association between SNPs and fasting glucose level that is mediated through the age at menopause.
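A quick simulation illustrates why the naïve and complete-case approaches criticised above are biased for path a when the mediator is right-censored. This is a hedged sketch with invented parameters; the proposed AFT/multiple-imputation strategy itself is not reimplemented here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
a_true = 1.0
x = rng.normal(size=n)
m = a_true * x + rng.normal(size=n)   # true (uncensored) mediator
c = 1.0                               # censoring threshold
m_obs = np.minimum(m, c)              # right-censored mediator as recorded

def slope(x, m):
    """OLS slope of m on x (with intercept)."""
    return np.linalg.lstsq(np.c_[np.ones_like(x), x], m, rcond=None)[0][1]

a_naive = slope(x, m_obs)            # censored values treated as observed
a_cc = slope(x[m < c], m[m < c])     # complete cases (uncensored) only
print(a_naive, a_cc)                 # both attenuated relative to a_true = 1
```

Both shortcuts shrink the estimated a toward zero, which then propagates into the a*b indirect effect; imputing the censored mediator values from a survival model is what avoids this attenuation.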
Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim
2018-06-01
The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
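The extrapolation idea can be sketched in a few lines. The word list, hit counts and corpus figures below are invented; the actual method aggregates document frequencies over a large static corpus of Web pages:

```python
# If a word appears in a known fraction of pages in a reference corpus,
# and a search engine reports a hit count for that word, the index size
# can be extrapolated as hits / (df / corpus_size).  Aggregating with a
# median over many words dampens per-word quirks.
def index_size_estimate(hit_counts, corpus_df, corpus_size):
    estimates = sorted(h * corpus_size / df
                       for h, df in zip(hit_counts, corpus_df))
    mid = len(estimates) // 2
    return estimates[mid]  # median (odd-length list assumed)

corpus_size = 1_000_000                 # pages in the static reference corpus
corpus_df = [120_000, 45_000, 8_000]    # document frequencies in the corpus
hit_counts = [6.0e9, 2.0e9, 4.5e8]      # reported engine hits for the same words
print(index_size_estimate(hit_counts, corpus_df, corpus_size))
```

The longitudinal variability reported above shows up in this framing as instability of the reported hit counts over time, not of the corpus document frequencies.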
Directory of Open Access Journals (Sweden)
Ching-Chih Lee
BACKGROUND: To compare the infection rates between cetuximab-treated patients with head and neck cancers (HNC) and untreated patients. METHODOLOGY: A national cohort of 1083 HNC patients identified in 2010 from the Taiwan National Health Insurance Research Database was established. After patients were followed for one year, propensity score analysis and instrumental variable analysis (IVA) were performed to assess the association between cetuximab therapy and the infection rates. RESULTS: HNC patients receiving cetuximab (n = 158) were older, had lower SES, and resided more frequently in rural areas as compared to those without cetuximab therapy. In total, 125 patients presented infections: 32 (20.3%) in the group using cetuximab and 93 (10.1%) in the group not using it. The propensity score analysis revealed a 2.3-fold (adjusted odds ratio [OR] = 2.27; 95% CI, 1.46-3.54; P = 0.001) increased risk for infection in HNC patients treated with cetuximab. However, using IVA, the average treatment effect of cetuximab was not statistically associated with an increased risk of infection (OR, 0.87; 95% CI, 0.61-1.14). CONCLUSIONS: Cetuximab therapy was not statistically associated with infection rate in HNC patients. However, older HNC patients using cetuximab may incur up to a 33% infection rate during one year. Particular attention should be given to older HNC patients treated with cetuximab.
Burns, Darren K; Jones, Andrew P; Goryakin, Yevgeniy; Suhrcke, Marc
2017-05-01
There is a scarcity of quantitative research into the effect of FDI on population health in low and middle income countries (LMICs). This paper investigates the relationship using annual panel data from 85 LMICs between 1974 and 2012. When controlling for time trends, country fixed effects, correlation between repeated observations, relevant covariates, and endogeneity via a novel instrumental variable approach, we find FDI to have a beneficial effect on overall health, proxied by life expectancy. When investigating age-specific mortality rates, we find a stronger beneficial effect of FDI on adult mortality, yet no association with either infant or child mortality. Notably, FDI effects on health remain undetected in all models which do not control for endogeneity. Exploring the effect of sector-specific FDI on health in LMICs, we provide preliminary evidence of a weak inverse association between secondary (i.e. manufacturing) sector FDI and overall life expectancy. Our results thus suggest that FDI has provided an overall benefit to population health in LMICs, particularly in adults, yet investments into the secondary sector could be harmful to health. Copyright © 2017 Elsevier Ltd. All rights reserved.
Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca
2017-04-01
Weather radars are nowadays a unique tool to estimate rain precipitation quantitatively near the surface. This is an important task for many applications: for example, to feed hydrological models, to mitigate the impact of severe storms at the ground using radar information in modern warning tools, and to aid validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Measurement (GPM) mission products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many works have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual-polarization variables, not only to ensure a good level of radar data quality but also as a direct input to the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the acquired radar variables as well as rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of vertical-profile variations of rain precipitation on several dual-polarization radar QPE algorithms when they are tested in a complex-orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of radar vertical profiles when dual-polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated to the surface.
Scoring the ICECAP-A Capability Instrument. Estimation of a UK General Population Tariff
Flynn, Terry N; Huynh, Elisabeth; Peters, Tim J; Al-Janabi, Hareth; Clemens, Sam; Moody, Alison; Coast, Joanna
2015-01-01
This paper reports the results of a best–worst scaling (BWS) study to value the Investigating Choice Experiments Capability Measure for Adults (ICECAP-A), a new capability measure among adults, in a UK setting. A main effects plan plus its foldover was used to estimate weights for each of the four levels of all five attributes. The BWS study was administered to 413 randomly sampled individuals, together with sociodemographic and other questions. Scale-adjusted latent class analyses identified two preference and two (variance) scale classes. Ability to characterize preference and scale heterogeneity was limited, but data quality was good, and the final model exhibited a high pseudo-r-squared. After adjusting for heterogeneity, a population tariff was estimated. This showed that ‘attachment’ and ‘stability’ each account for around 22% of the space, and ‘autonomy’, ‘achievement’ and ‘enjoyment’ account for around 18% each. Across all attributes, greater value was placed on the difference between the lowest levels of capability than between the highest. This tariff will enable ICECAP-A to be used in economic evaluation both within the field of health and across public policy generally. © 2013 The Authors. Health Economics published by John Wiley & Sons Ltd. PMID:24254584
Mroz, T A
1999-10-01
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
International Nuclear Information System (INIS)
Tate, K.; Parshotam, A.; Scott, Neal
1997-01-01
The role of the terrestrial biosphere in the global carbon (C) cycle is poorly understood because of the complex biology underlying C storage, the spatial variability of vegetation and soils, and the effects of land use. Little is known about the nature, amount and variability of recalcitrant C in soils, despite the importance of determining whether soils behave as sources or sinks of CO₂. ¹⁴C dating indicates that most soils contain this very stable C fraction, with turnover times of millennia. The amount of this fraction, named the Inert Organic Matter (IOM) in one model, is estimated indirectly using the 'bomb' ¹⁴C content of soil. In nine New Zealand grassland and forest ecosystems, amounts of IOM-C ranged between 0.03 and 2.9 kg C m⁻² (1-18% of soil C to 0.25 m depth). A decomposable C fraction, considered to be more susceptible to the effects of climate and land use, was estimated by subtracting the IOM-C fraction from the total soil organic C. Turnover times ranged between 8 and 36 years, and were inversely related to mean annual temperature (R² = 0.91, P < …) … ¹³C NMR and pyrolysis-mass spectrometry as alkyl C. Paradoxically, for some ecosystems, the variation in IOM-C appears to be best explained by differences in soil hydrological conditions rather than by the accumulation of a discrete C fraction. Thus, characterisation of the environmental factors that constrain decomposition could be most useful for explaining the differences observed in IOM across different ecosystems, climates and soils. Despite the insights the modelling approach using 'bomb' ¹⁴C provides into mechanisms for organic matter stabilisation, on theoretical grounds the validity of using ¹⁴C measurements to estimate a recalcitrant C fraction that by definition contains no ¹⁴C is questionable. We conclude that more rigorous models are needed, with pools that can be experimentally verified, to improve understanding of the spatial variability of soil C storage. (author)
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2015-08-01
Mangroves are found throughout the tropics, providing critical ecosystem goods and services to coastal communities and supporting rich biodiversity. Globally, mangroves are being rapidly degraded and deforested at rates exceeding loss in many tropical inland forests. Madagascar contains around 2% of the global distribution, >20% of which has been deforested since 1990, primarily from over-harvest for forest products and conversion for agriculture and aquaculture. While historically not prominent, mangrove loss in Madagascar's Mahajamba Bay is increasing. Here, we focus on Mahajamba Bay, presenting long-term dynamics calculated using United States Geological Survey (USGS) national-level mangrove maps contextualized with socio-economic research and ground observations, and the results of contemporary (circa 2011) mapping of dominant mangrove types. The analysis of the USGS data indicated 1050 hectares (3.8%) lost from 2000 to 2010, which socio-economic research suggests is increasingly driven by commercial timber extraction. Contemporary mapping results permitted stratified sampling based on spectrally distinct and ecologically meaningful mangrove types, allowing for the first-ever vegetation carbon stock estimates for Mahajamba Bay. The overall mean carbon stock across all mangrove classes was estimated to be 100.97 ± 10.49 Mg C ha−1. High-stature closed-canopy mangroves had the highest average carbon stock estimate (i.e., 166.82 ± 15.28 Mg C ha−1). These estimates are comparable to other published values in Madagascar and elsewhere in the Western Indian Ocean, and demonstrate the ecological variability of Mahajamba Bay's mangroves and their value towards climate change mitigation.
Harris, Steve; Singer, Mervyn; Sanderson, Colin; Grieve, Richard; Harrison, David; Rowan, Kathryn
2018-05-07
To estimate the effect of prompt admission to critical care on mortality for deteriorating ward patients, we performed a prospective cohort study of consecutive ward patients assessed for critical care. Prompt admissions (within 4 h of assessment) were compared to a 'watchful waiting' cohort. We used critical care strain (bed occupancy) as a natural randomisation event that would predict prompt transfer to critical care. Strain was classified as low, medium or high (2+, 1 or 0 empty beds). This instrumental variable (IV) analysis was repeated for the subgroup of referrals with a recommendation for critical care once assessed. Risk-adjusted 90-day survival models were also constructed. A total of 12,380 patients from 48 hospitals were available for analysis. There were 2411 (19%) prompt admissions (median delay 1 h, IQR 1-2) and 9969 (81%) controls; 1990 (20%) controls were admitted later (median delay 11 h, IQR 6-26). Prompt admissions were less frequent (p < …) … care. In the risk-adjusted survival model, 90-day mortality was similar. After allowing for unobserved prognostic differences between the groups, we find that prompt admission to critical care leads to lower 90-day mortality for patients assessed and recommended to critical care.
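The instrumental-variable logic, with strain acting as a natural randomiser for prompt admission, can be illustrated with the classic Wald estimator on simulated data. All numbers below are invented and the study's actual IV analysis is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Simulated deteriorating-ward scenario (numbers invented): u is unobserved
# severity, z the instrument (e.g. an empty bed at assessment), a prompt
# admission, y a continuous outcome where lower is better.
u = rng.uniform(size=n)                           # unobserved confounder
z = rng.integers(0, 2, size=n)                    # binary instrument
a = (rng.uniform(size=n) < 0.2 + 0.4 * z + 0.3 * u).astype(float)
y = 2.0 * u - 1.0 * a + rng.normal(scale=0.5, size=n)   # true effect = -1

naive = y[a == 1].mean() - y[a == 0].mean()       # confounded comparison
# Wald/IV estimator: ratio of the instrument's effect on the outcome
# to its effect on the treatment.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (a[z == 1].mean() - a[z == 0].mean())
print(f"naive = {naive:.2f}, IV = {iv:.2f} (truth = -1.00)")
```

Because sicker patients (high u) are both more likely to be admitted and more likely to die, the naive contrast understates the benefit of admission, while the instrument-based contrast recovers it.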
Michael E. Goerndt; Vicente J. Monleon; Hailemariam. Temesgen
2011-01-01
One of the challenges often faced in forestry is the estimation of forest attributes for smaller areas of interest within a larger population. Small-area estimation (SAE) is a set of techniques well suited to estimation of forest attributes for small areas in which the existing sample size is small and auxiliary information is available. Selected SAE methods were...
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
A model for estimating pathogen variability in shellfish and predicting minimum depuration times.
McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick
2018-01-01
Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
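As a toy illustration of predicting a minimum depuration time, first-order (exponential) pathogen decay gives a closed form; the decay rate and load values below are made-up placeholders, not parameters fitted in the paper:

```python
import math

def min_depuration_time(initial_load, target_load, decay_rate):
    """Hours needed for a pathogen load to fall from initial_load to
    target_load, assuming first-order decay C(t) = C0 * exp(-k * t)."""
    if initial_load <= target_load:
        return 0.0
    return math.log(initial_load / target_load) / decay_rate

# Illustrative values: 1000 copies/g down to 200 copies/g at k = 0.02 per hour.
t_min = min_depuration_time(1000.0, 200.0, 0.02)   # about 80.5 h
```

The paper's model works with a distribution of initial loads rather than a single value, so its predicted times apply to a population quantile rather than one shellfish.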
Boef, Anna G C; Souverein, Patrick C; Vandenbroucke, Jan P; van Hylckama Vlieg, Astrid; de Boer, Anthonius; le Cessie, Saskia; Dekkers, Olaf M
2016-01-01
PURPOSE: A potentially useful role for instrumental variable (IV) analysis may be as a complementary analysis to assess the presence of confounding when studying adverse drug effects. There has been discussion on whether the observed increased risk of venous thromboembolism (VTE) for
Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd
2014-05-01
Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
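The Natural Resource Conservation Service Curve Number relationship that underpins the method can be sketched directly; the rainfall depth and CN value are illustrative, and the standard initial-abstraction ratio of 0.2 is assumed:

```python
def scs_runoff_depth(rainfall, curve_number, ia_ratio=0.2):
    """SCS-CN direct runoff depth Q (inches) for storm rainfall P (inches):
    S = 1000/CN - 10, Ia = ia_ratio * S, Q = (P - Ia)^2 / (P - Ia + S)."""
    S = 1000.0 / curve_number - 10.0   # potential maximum retention
    Ia = ia_ratio * S                  # initial abstraction
    if rainfall <= Ia:
        return 0.0
    return (rainfall - Ia) ** 2 / (rainfall - Ia + S)

q = scs_runoff_depth(3.0, 80)   # 3 in of rain on CN 80 ground -> 1.25 in of runoff
```

In the study's extension, the effective retention varies with antecedent wetness (indexed by pre-storm base-flow), which is what turns this point equation into a map of runoff-producing risk.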
Global Ocean Evaporation: How Well Can We Estimate Interannual to Decadal Variability?
Robertson, Franklin R.; Bosilovich, Michael G.; Roberts, Jason B.; Wang, Hailan
2015-01-01
Evaporation from the world's oceans constitutes the largest component of the global water balance. It is important not only as the ultimate source of moisture that is tied to the radiative processes determining Earth's energy balance but also to freshwater availability over land, governing habitability of the planet. Here we focus on variability of ocean evaporation on scales from interannual to decadal by appealing to three sources of data: the new MERRA-2 (Modern-Era Retrospective analysis for Research and Applications -2); climate models run with historical sea-surface temperatures, ice and atmospheric constituents (so-called AMIP experiments); and state-of-the-art satellite retrievals from the Seaflux and HOAPS (Hamburg Ocean-Atmosphere Parameters and Fluxes from Satellite) projects. Each of these sources has distinct advantages as well as drawbacks. MERRA-2, like other reanalyses, synthesizes evaporation estimates consistent with observationally constrained physical and dynamical models-but data stream discontinuities are a major problem for interpreting multi-decadal records. The climate models used in data assimilation can also be run with lesser constraints such as with SSTs and sea-ice (i.e. AMIPs) or with additional, minimal observations of surface pressure and marine observations that have longer and less fragmentary observational records. We use the new ERA-20C reanalysis produced by ECMWF embodying the latter methodology. Still, the model physics biases in climate models and the lack of a predicted surface energy balance are of concern. Satellite retrievals and comparisons to ship-based measurements offer the most observationally-based estimates, but sensor inter-calibration, algorithm retrieval assumptions, and short records are dominant issues. Our strategy depends on maximizing the advantages of these combined records. The primary diagnostic tool used here is an analysis of bulk aerodynamic computations produced by these sources and uses a first
Brus, D.J.; Gruijter, de J.J.
2003-01-01
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the π estimator that uses the probability sample can be
van Zyl, J. Martin
2012-01-01
Random variables of the generalized Pareto distribution can be transformed to those of the Pareto distribution. Explicit expressions exist for the maximum likelihood estimators of the parameters of the Pareto distribution. The performance of estimating the shape parameter of the generalized Pareto distribution using transformed observations, based on the probability weighted method, is tested. It was found to improve the performance of the probability weighted estimator and performs well wit...
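The closed-form Pareto maximum likelihood estimator referred to above can be sketched as follows. For a GPD variable X with shape ξ > 0 and scale σ, Y = 1 + ξX/σ follows a Pareto distribution with tail index α = 1/ξ and minimum 1; the sample size and shape value below are arbitrary choices for the demonstration:

```python
import math
import random

def pareto_shape_mle(ys):
    """Closed-form MLE of the Pareto shape (tail index) for samples with
    minimum 1: alpha_hat = n / sum(log y_i)."""
    return len(ys) / sum(math.log(y) for y in ys)

# Pareto(alpha = 2.5, min = 1) samples via inverse-CDF: Y = U^(-1/alpha),
# with U drawn from (0, 1].
rng = random.Random(42)
alpha_true = 2.5
ys = [(1.0 - rng.random()) ** (-1.0 / alpha_true) for _ in range(20000)]
alpha_hat = pareto_shape_mle(ys)   # close to 2.5; the GPD shape is 1/alpha_hat
```

The paper's method applies this after transforming GPD observations, with the transform parameters coming from the probability-weighted moment step.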
Directory of Open Access Journals (Sweden)
Fei Jin
2013-05-01
This paper studies the generalized spatial two stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-offs between the bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure of choosing K.
Brunelli, Alessandro; Salati, Michele; Refai, Majed; Xiumé, Francesco; Rocco, Gaetano; Sabbatini, Armando
2007-09-01
The objectives of this study were to develop a risk-adjusted model to estimate individual postoperative costs after major lung resection and to use it for internal economic audit. Variable and fixed hospital costs were collected for 679 consecutive patients who underwent major lung resection from January 2000 through October 2006 at our unit. Several preoperative variables were used to develop a risk-adjusted econometric model from all patients operated on during the period 2000 through 2003 by a stepwise multiple regression analysis (validated by bootstrap). The model was then used to estimate the postoperative costs in the patients operated on during the 3 subsequent periods (years 2004, 2005, and 2006). Observed and predicted costs were then compared within each period by the Wilcoxon signed rank test. Multiple regression and bootstrap analysis yielded the following model predicting postoperative cost: 11,078 + 1340.3 × (age > 70 years) + 1927.8 × (cardiac comorbidity) − 95 × ppoFEV1%. No differences between predicted and observed costs were noted in the first 2 periods analyzed (year 2004, $6188.40 vs $6241.40, P = .3; year 2005, $6308.60 vs $6483.60, P = .4), whereas in the most recent period (2006) observed costs were significantly lower than the predicted ones ($3457.30 vs $6162.70, P …). The model may be used as a methodologic template for economic audit in our specialty and complement more traditional outcome measures in the assessment of performance.
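The fitted equation reads more clearly as code. The coefficients are those reported above (cost in dollars, age entering as an indicator for >70 years, cardiac comorbidity as a 0/1 flag, ppoFEV1 in percent); the example patient is invented:

```python
def predicted_postoperative_cost(age, cardiac_comorbidity, ppo_fev1_pct):
    """Risk-adjusted postoperative cost from the stepwise regression model:
    11,078 + 1340.3*(age > 70) + 1927.8*(cardiac comorbidity) - 95*ppoFEV1%."""
    return (11078.0
            + 1340.3 * (1.0 if age > 70 else 0.0)
            + 1927.8 * (1.0 if cardiac_comorbidity else 0.0)
            - 95.0 * ppo_fev1_pct)

# Hypothetical patient: 75 years old, cardiac comorbidity, ppoFEV1 = 60%.
cost = predicted_postoperative_cost(75, True, 60.0)   # 8646.1 dollars
```

Note how the negative ppoFEV1 coefficient drives the audit logic: better predicted lung function lowers the expected cost against which observed costs are compared.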
Gajewski, Byron J.; Jiang, Yu; Yeh, Hung-Wen; Engelman, Kimberly; Teel, Cynthia; Choi, Won S.; Greiner, K. Allen; Daley, Christine Makosky
2013-01-01
Texts and software that we are currently using for teaching multivariate analysis to non-statisticians lack in the delivery of confirmatory factor analysis (CFA). The purpose of this paper is to provide educators with a complement to these resources that includes CFA and its computation. We focus on how to use CFA to estimate a “composite reliability” of a psychometric instrument. This paper provides guidance for introducing, via a case-study, the non-statistician to CFA. As a complement to our instruction about the more traditional SPSS, we successfully piloted the software R for estimating CFA on nine non-statisticians. This approach can be used with healthcare graduate students taking a multivariate course, as well as modified for community stakeholders of our Center for American Indian Community Health (e.g. community advisory boards, summer interns, & research team members). The placement of CFA at the end of the class is strategic and gives us an opportunity to do some innovative teaching: (1) build ideas for understanding the case study using previous course work (such as ANOVA); (2) incorporate multi-dimensional scaling (that students already learned) into the selection of a factor structure (new concept); (3) use interactive data from the students (active learning); (4) review matrix algebra and its importance to psychometric evaluation; (5) show students how to do the calculation on their own; and (6) give students access to an actual recent research project. PMID:24772373
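For readers outside R or SPSS, the composite-reliability computation the course builds toward fits in a few lines; the loadings below are made-up standardized values, and uncorrelated measurement errors are assumed:

```python
def composite_reliability(standardized_loadings):
    """Composite reliability (omega) from standardized CFA loadings, assuming
    uncorrelated errors: (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    s = sum(standardized_loadings)
    error_var = sum(1.0 - lam * lam for lam in standardized_loadings)
    return (s * s) / (s * s + error_var)

# Three hypothetical items loading on one factor.
omega = composite_reliability([0.7, 0.8, 0.6])   # about 0.745
```

In the class setting, the loadings would come from the CFA fit in R rather than being typed in by hand.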
Novelli, Anna; Hens, Korbinian; Tatum Ernest, Cheryl; Martinez, Monica; Nölscher, Anke C.; Sinha, Vinayak; Paasonen, Pauli; Petäjä, Tuukka; Sipilä, Mikko; Elste, Thomas; Plass-Dülmer, Christian; Phillips, Gavin J.; Kubistin, Dagmar; Williams, Jonathan; Vereecken, Luc; Lelieveld, Jos; Harder, Hartwig
2017-06-01
We analysed the extensive dataset from the HUMPPA-COPEC 2010 and the HOPE 2012 field campaigns in the boreal forest and rural environments of Finland and Germany, respectively, and estimated the abundance of stabilised Criegee intermediates (SCIs) in the lower troposphere. Based on laboratory tests, we propose that the background OH signal observed in our IPI-LIF-FAGE instrument during the aforementioned campaigns is caused at least partially by SCIs. This hypothesis is based on observed correlations with temperature and with concentrations of unsaturated volatile organic compounds and ozone. Just like SCIs, the background OH concentration can be removed through the addition of sulfur dioxide. SCIs also add to the previously underestimated production rate of sulfuric acid. An average estimate of the SCI concentration of ~5.0 × 10⁴ molecules cm⁻³ (with an order of magnitude uncertainty) is calculated for the two environments. This implies a very low ambient concentration of SCIs, though, over the boreal forest, significant for the conversion of SO2 into H2SO4. The large uncertainties in these calculations, owing to the many unknowns in the chemistry of Criegee intermediates, emphasise the need to better understand these processes and their potential effect on the self-cleaning capacity of the atmosphere.
Impact of ground motion characterization on conservatism and variability in seismic risk estimates
International Nuclear Information System (INIS)
Sewell, R.T.; Toro, G.R.; McGuire, R.K.
1996-07-01
This study evaluates the impact, on estimates of seismic risk and its uncertainty, of alternative methods in treatment and characterization of earthquake ground motions. The objective of this study is to delineate specific procedures and characterizations that may lead to less biased and more precise seismic risk results. This report focuses on sources of conservatism and variability in risk that may be introduced through the analytical processes and ground-motion descriptions which are commonly implemented at the interface of seismic hazard and fragility assessments. In particular, implication of the common practice of using a single, composite spectral shape to characterize motions of different magnitudes is investigated. Also, the impact of parameterization of ground motion on fragility and hazard assessments is shown. Examination of these results demonstrates the following. (1) There exists significant conservatism in the review spectra (usually, spectra characteristic of western U.S. earthquakes) that have been used in conducting past seismic risk assessments and seismic margin assessments for eastern U.S. nuclear power plants. (2) There is a strong dependence of seismic fragility on earthquake magnitude when PGA is used as the ground-motion characterization. When, however, magnitude-dependent spectra are anchored to a common measure of elastic spectral acceleration averaged over the appropriate frequency range, seismic fragility shows no important nor consistent dependence on either magnitude or strong-motion duration. Use of inelastic spectral acceleration (at the proper frequency) as the ground spectrum anchor demonstrates a very similar result. This study concludes that a single, composite-magnitude spectrum can generally be used to characterize ground motion for fragility assessment without introducing significant bias or uncertainty in seismic risk estimates
Directory of Open Access Journals (Sweden)
Malcolm D O'Toole
The deployment of animal-borne electronic tags is revolutionizing our understanding of how pelagic species respond to their environment by providing in situ oceanographic information such as temperature, salinity, and light measurements. These tags, deployed on pelagic animals, provide data that can be used to study the ecological context of their foraging behaviour and surrounding environment. Satellite-derived measures of ocean colour reveal temporal and spatial variability of surface chlorophyll-a (a useful proxy for phytoplankton distribution). However, this information can be patchy in space and time, resulting in poor correspondence with marine animal behaviour. Alternatively, light data collected by animal-borne tag sensors can be used to estimate chlorophyll-a distribution. Here, we use light level and depth data to generate a phytoplankton index that matches daily seal movements. Time-depth-light recorders (TDLRs) were deployed on 89 southern elephant seals (Mirounga leonina) over a period of 6 years (1999-2005). TDLR data were used to calculate integrated light attenuation of the top 250 m of the water column (LA(250)), which provided an index of phytoplankton density at the daily scale that was concurrent with the movement and behaviour of seals throughout their entire foraging trip. These index values were consistent with typical seasonal chl-a patterns as measured from 8-day Sea-viewing Wide Field-of-view Sensor (SeaWiFS) images. The availability of data recorded by the TDLRs was far greater than concurrent remotely sensed chl-a at higher latitudes and during winter months. Improving the spatial and temporal availability of phytoplankton information concurrent with animal behaviour has ecological implications for understanding the movement of deep-diving predators in relation to lower trophic levels in the Southern Ocean. Light attenuation profiles recorded by animal-borne electronic tags can be used more broadly and routinely to estimate
Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.
2017-12-01
Atmospheric methane (CH4) has an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents CH4 background density before or after targeting a point source. By combining satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we picked two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimation assuming the source is a single point without influx. The cattle feedlot observed in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We will propose that the next generation instruments for accurate anthropogenic CO2 and CH4
Jury, William A.; Gruber, Joachim
1989-12-01
Soil and climatic variability contribute in an unknown manner to the leaching of pesticides below the surface soil zone where degradation occurs at maximum levels. In this paper we couple the climatic variability model of Eagleson (1978) to the soil variability transport model of Jury (1982) to produce a probability density distribution of residual mass fraction (RMF) remaining after leaching below the surface degradation zone. Estimates of the RMF distribution are shown to be much more sensitive to soil variability than climatic variability, except when the residence time of the chemical is shorter than one year. When soil variability dominates climatic variability, the applied water distribution may be replaced by a constant average water application rate without serious error. Simulations of leaching are run with 10 pesticides in two climates and in two representative soil types with a range of soil variability. Variability in soil or climate act to produce a nonnegligible probability of survival of a small value of residual mass even for relatively immobile compounds which are predicted to degrade completely by a simple model which neglects variability. However, the simpler model may still be useful for screening pesticides for groundwater pollution potential if somewhat larger residual masses of a given compound are tolerated. Monte Carlo simulations of the RMF distribution agreed well with model predictions over a wide range of pesticide properties.
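The qualitative point, that variability leaves a non-negligible probability of pesticide survival even when a deterministic calculation predicts near-complete degradation, can be reproduced with a minimal Monte Carlo sketch. The first-order decay model, the lognormal travel-time distribution and all parameter values below are simplifying assumptions for illustration, not the coupled Eagleson-Jury model:

```python
import math
import random

def rmf_samples(decay_rate, mean_travel_time, cv, n, rng):
    """Monte Carlo draws of the residual mass fraction RMF = exp(-k * T),
    where the travel time T through the surface degradation zone is
    lognormal with the given mean and coefficient of variation (a crude
    proxy for soil variability)."""
    s2 = math.log(1.0 + cv * cv)                 # lognormal sigma^2 from the CV
    mu = math.log(mean_travel_time) - s2 / 2.0   # matches the requested mean
    return [math.exp(-decay_rate * rng.lognormvariate(mu, math.sqrt(s2)))
            for _ in range(n)]

rng = random.Random(1)
det_rmf = math.exp(-0.02 * 300.0)                      # deterministic RMF ~ 0.0025
rmfs = rmf_samples(0.02, 300.0, 1.0, 5000, rng)
frac_above = sum(r > 0.01 for r in rmfs) / len(rmfs)   # many draws still exceed 0.01
```

The deterministic calculation predicts an RMF below 0.01, yet with variable travel times a large share of the mass realizations survive above that threshold, which is exactly the screening pitfall the abstract describes.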
Varella, H.-V.
2009-04-01
Dynamic crop models are very useful for predicting the behavior of crops in their environment and are widely used in agro-environmental work. These models have many parameters, and their spatial application requires good knowledge of these parameters, especially the soil parameters. These parameters can be estimated from soil analysis at different points, but this is very costly and requires a lot of experimental work. Nevertheless, observations on crops provided by new techniques like remote sensing or yield monitoring offer a possibility for estimating soil parameters through the inversion of crop models. In this work, the STICS crop model, which includes more than 200 parameters, is studied for wheat and sugar beet. After previous work based on a large experimental database to calibrate parameters related to the characteristics of the crop, a global sensitivity analysis of the observed variables (leaf area index (LAI) and absorbed nitrogen (QN) provided by remote sensing data, and yield at harvest provided by yield monitoring) to the soil parameters is made, in order to determine which of them have to be estimated. This study was made under different climatic and agronomic conditions and reveals that 7 soil parameters (4 related to water and 3 related to nitrogen) have a clear influence on the variance of the observed variables and therefore have to be estimated. For estimating these 7 soil parameters, a Bayesian data assimilation method named Importance Sampling is chosen (because prior information on these parameters is available), using observations on wheat and sugar beet crops of LAI and QN at various dates and yield at harvest, acquired under different climatic and agronomic conditions. The quality of parameter estimation is then determined by comparing the result of parameter estimation with only prior information against the result with the posterior information provided by the Bayesian data assimilation method. The result of the
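The Importance Sampling step can be illustrated on a one-dimensional toy problem with a Gaussian prior and a single noisy observation; the prior, likelihood and all numbers below are invented for the sketch and are unrelated to the actual STICS parameters:

```python
import math
import random

def is_posterior_mean(prior_draw, log_likelihood, n_draws, rng):
    """Importance sampling with the prior as proposal: weight each prior
    draw by its likelihood and return the weighted (posterior) mean."""
    thetas = [prior_draw(rng) for _ in range(n_draws)]
    logw = [log_likelihood(th) for th in thetas]
    m = max(logw)                                  # stabilise the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(th * wi for th, wi in zip(thetas, w)) / sum(w)

# Toy setup: soil parameter theta ~ N(0.3, 0.1^2) prior, one observation
# y = 0.42 with N(theta, 0.05^2) error; the conjugate posterior mean is 0.396.
rng = random.Random(7)
y, tau = 0.42, 0.05
post_mean = is_posterior_mean(
    lambda r: r.gauss(0.3, 0.1),
    lambda th: -0.5 * ((y - th) / tau) ** 2,
    50000, rng)
```

In the paper's setting the likelihood would instead compare STICS-simulated LAI, QN and yield against the remote-sensing and yield-monitor observations, but the weighting logic is the same.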
Habibov, Nazim
2016-03-01
There is a lack of consensus about the effect of corruption on healthcare satisfaction in transitional countries. Interpreting the burgeoning literature on this topic has proven difficult due to reverse causality and omitted variable bias. In this study, the effect of corruption on healthcare satisfaction is investigated in a set of 12 Post-Socialist countries using instrumental variable regression on the sample of the 2010 Life in Transition survey (N = 8655). The results indicate that experiencing corruption significantly reduces healthcare satisfaction. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
Estimation of Hedonic Single-Family House Price Function Considering Neighborhood Effect Variables
Directory of Open Access Journals (Sweden)
Chihiro Shimizu
2014-05-01
In the formulation of hedonic models, in addition to the locational factors and building structures that affect house prices, omitted variable bias is thought to occur when local environmental variables and the individual characteristics of house buyers are not taken into consideration. However, since it is difficult to obtain local environmental information at the small-neighborhood scale and to observe individual characteristics of house buyers, these variables have not been sufficiently considered in previous studies. We demonstrate that non-negligible levels of omitted variable bias are generated if these variables are not considered.
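The omitted-variable mechanism is easy to demonstrate by simulation. The data-generating process and coefficients below are invented for illustration: the true size effect is 2.0, and an unobserved neighborhood-quality variable both raises prices and correlates with size:

```python
import random

def ols_slope(xs, ys):
    """Simple-regression OLS slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

rng = random.Random(0)
n = 20000
quality = [rng.gauss(0, 1) for _ in range(n)]          # unobserved neighborhood effect
size = [0.8 * q + rng.gauss(0, 1) for q in quality]    # size correlates with quality
price = [2.0 * s + 1.5 * q + rng.gauss(0, 0.5)
         for s, q in zip(size, quality)]               # true size effect = 2.0
slope = ols_slope(size, price)   # omits quality, so biased above 2.0 (~2.73)
```

Because the omitted quality variable enters the error term and is correlated with size, the size coefficient absorbs part of the quality effect, which is the bias the abstract quantifies for real housing data.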
Stiefenhofer, Johann; Thurston, Malcolm L.; Bush, David E.
2018-04-01
Microdiamonds offer several advantages as a resource estimation tool, such as access to deeper parts of a deposit which may be beyond the reach of large diameter drilling (LDD) techniques, the recovery of the total diamond content in the kimberlite, and a cost benefit due to the cheaper treatment cost compared to large diameter samples. In this paper we take the first step towards local estimation by showing that microdiamond samples can be treated as a regionalised variable suitable for use in geostatistical applications, and we show examples of such output. Examples of microdiamond variograms are presented, the variance-support relationship for microdiamonds is demonstrated, and consistency of the diamond size frequency distribution (SFD) is shown with the aid of real datasets. The focus therefore is on why local microdiamond estimation should be possible, not how to generate such estimates. Data from our case studies and examples demonstrate a positive correlation between micro- and macrodiamond sample grades as well as block estimates. This relationship can be demonstrated repeatedly across multiple mining operations. The smaller sample support size for microdiamond samples is a key difference between micro- and macrodiamond estimates, and this aspect must be taken into account during the estimation process. We discuss three methods which can be used to validate or reconcile the estimates against macrodiamond data, either as estimates or in the form of production grades: (i) reconciliation using production data, (ii) comparison of LDD-based grade estimates against microdiamond-based estimates, and (iii) the use of simulation techniques.
Exploratory Long-Range Models to Estimate Summer Climate Variability over Southern Africa.
Jury, Mark R.; Mulenga, Henry M.; Mason, Simon J.
1999-07-01
Teleconnection predictors are explored using multivariate regression models in an effort to estimate southern African summer rainfall and climate impacts one season in advance. The preliminary statistical formulations include many variables influenced by the El Niño-Southern Oscillation (ENSO) such as tropical sea surface temperatures (SST) in the Indian and Atlantic Oceans. Atmospheric circulation responses to ENSO include the alternation of tropical zonal winds over Africa and changes in convective activity within oceanic monsoon troughs. Numerous hemispheric-scale datasets are employed to extract predictors and include global indexes (Southern Oscillation index and quasi-biennial oscillation), SST principal component scores for the global oceans, indexes of tropical convection (outgoing longwave radiation), air pressure, and surface and upper winds over the Indian and Atlantic Oceans. Climatic targets include subseasonal, area-averaged rainfall over South Africa and the Zambezi river basin, and South Africa's annual maize yield. Predictors and targets overlap in the years 1971-93, the defined training period. Each target time series is fitted by an optimum group of predictors from the preceding spring, in a linear multivariate formulation. To limit artificial skill, predictors are restricted to three, providing 17 degrees of freedom. Models with collinear predictors are screened out, and persistence of the target time series is considered. The late summer rainfall models achieve a mean r2 fit of 72%, contributed largely through ENSO modulation. Early summer rainfall cross validation correlations are lower (61%). A conceptual understanding of the climate dynamics and ocean-atmosphere coupling processes inherent in the exploratory models is outlined. Seasonal outlooks based on the exploratory models could help mitigate the impacts of southern Africa's fluctuating climate. It is believed that an advance warning of drought risk and seasonal rainfall prospects will
3D.07: CORRELATION BETWEEN THE ARTERIAL PRESSURE VARIABILITY ESTIMATED AT CLINICS, MAPA AND AMPA.
Abellan-Huerta, J; García-Escribano, I A; Soto, R M; Leal, M; Torres, A; Guerrero, B; Melgar, A C; Soto, M; Soria, F; Abellan-Aleman, J
2015-06-01
To measure the variability (VB) of arterial pressure (AP) using serial measurements at the clinic (VBCLIN), 24 h ambulatory monitoring (MAPA) (VBMAPA) and home self-monitoring (AMPA) (VBAMPA), and to estimate the relationships among the methods. This is an observational, descriptive, cross-sectional study of 91 treated, stable hypertensive patients. MAPA was performed on all included patients to obtain the VBMAPA, and AMPA over two non-consecutive weeks to obtain the VBAMPA (total of 54 measurements). 91 patients aged 66 ± 7.7 years (58.2% males) were recruited. AP values were 134 ± 14/82 ± 10 mmHg for systolic and diastolic APCLIN, 122 ± 17/68 ± 12 mmHg for systolic and diastolic APMAPA, and 125 ± 13/75 ± 7 mmHg for systolic and diastolic APAMPA, respectively. The systolic VB for the three methods was significantly correlated, being maximal between VBCLIN and VBAMPA (r = 0.45); the correlation with the MAPA method is weak. This observation suggests that these are not interchangeable methodologies. Future studies focused on the relationship between VB, measured with different methods, and vascular target organ damage would be of great help in order to define the best analytical method.
Ringeval, B.; de Noblet-Ducoudre, N.; Prigent, C.; Bousquet, P.
2006-12-01
The atmospheric methane growth rate shows substantial seasonal and year-to-year variations. Large uncertainties remain in the relative contributions of the different sources and sinks to these variations. In this study we considered the main natural source of methane, presumed also to be the main variable source, i.e. wetlands, and simulated the variations of their emissions given the variability of wetland extent and of climate. We use the methane emission model of Walter et al. (2001) and the quantification of flooded areas for the years 1993-2000 obtained from a suite of satellite observations by Prigent et al. (2001). The data required by Walter's model are obtained from simulations of the dynamic global vegetation model ORCHIDEE (Krinner et al. (2005)) constrained by the NCC climate data (Ngo-Duc et al. (2005)), after imposing a water-saturated soil to approximate wetland productivity. We calculate global annual methane emissions from wetlands of 400 Tg per year, which is higher than previous results obtained with a fixed wetland extent. Simulations are carried out to estimate the part of the emission variability explained by the variability of wetland extent. The year-to-year emission variability appears to be mainly explained by the interannual variability of wetland extent. The seasonal variability is explained for 75% in the tropics, but only for 40% north of 30°N, by the variability of wetland extent. Finally, we compare our results with the top-down approach of Bousquet et al. (2006).
International Nuclear Information System (INIS)
Barr, A.G.; McGinn, S.M.; Cheng, S.B.
1996-01-01
Historic estimates of daily global solar irradiation are often required for climatic impact studies. Regression equations with daily global solar irradiation, H, as the dependent variable and other climatic variables as the independent variables provide a practical way to estimate H at locations where it is not measured. They may also have potential to estimate H before 1953, the year of the first routine H measurements in Canada. This study compares several regression equations for calculating H on the Canadian prairies. Simple linear regression with daily bright sunshine duration as the independent variable accounted for 90% of the variation of H in summer and 75% of the variation of H in winter. Linear regression with the daily air temperature range as the independent variable accounted for 45% of the variation of H in summer and only 6% of the variation of H in winter. Linear regression with precipitation status (wet or dry) as the independent variable accounted for only 35% of the summer-time variation in H, but stratifying other regression analyses into wet and dry days reduced their root-mean-squared errors. For periods with sufficiently dense bright sunshine observations (i.e. after 1960), however, H was more accurately estimated from spatially interpolated bright sunshine duration than from locally observed air temperature range or precipitation status. The daily air temperature range and precipitation status may have utility for estimating H for periods before 1953, when they are the only widely available climatic data on the Canadian prairies. Between 1953 and 1989, a period of large climatic variation, the regression coefficients did not vary significantly between contrasting years with cool-wet, intermediate and warm-dry summers. They should apply equally well earlier in the century. (author)
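The regression approach described above can be sketched with synthetic data: daily irradiation H fitted on bright sunshine duration by ordinary least squares. The coefficients and noise level are made up for illustration, not the study's.

```python
import numpy as np

# Sketch: estimate daily global solar irradiation H (MJ m-2 day-1) from
# bright sunshine duration n_sun (h) by simple linear regression.
# Synthetic data; slope, intercept and noise are illustrative.
rng = np.random.default_rng(0)
n_sun = rng.uniform(0, 14, 200)               # bright sunshine hours
H_obs = 5.0 + 1.8 * n_sun + rng.normal(0, 2.0, 200)  # noisy "measured" H

# Ordinary least squares fit H = a + b * n_sun
A = np.vstack([np.ones_like(n_sun), n_sun]).T
(a, b), *_ = np.linalg.lstsq(A, H_obs, rcond=None)

# Fraction of variance explained (r^2), the statistic the abstract reports
H_hat = a + b * n_sun
r2 = 1 - np.sum((H_obs - H_hat) ** 2) / np.sum((H_obs - H_obs.mean()) ** 2)
print(round(b, 2), round(r2, 2))
```

With a second predictor column (e.g. air temperature range) appended to `A`, the same least-squares call gives the multi-variable form.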
Takahashi, Makoto; Nakamoto, Tomoko; Matsukawa, Kanji; Ishii, Kei; Watanabe, Tae; Sekikawa, Kiyokazu; Hamada, Hironobu
2016-03-01
What is the central question of this study? Should we use the high-frequency (HF) component of P-P interval as an index of cardiac parasympathetic nerve activity during moderate exercise? What is the main finding and its importance? The HF component of P-P interval variability remained even at a heart rate of 120-140 beats min⁻¹ and was further reduced by atropine, indicating incomplete cardiac vagal withdrawal during moderate exercise. The HF component of R-R interval is invalid as an estimate of cardiac parasympathetic outflow during moderate exercise; instead, the HF component of P-P interval variability should be used. The high-frequency (HF) component of R-R interval variability has been widely used as an indirect estimate of cardiac parasympathetic (vagal) outflow to the sino-atrial node of the heart. However, we have recently found that the variability of the R-R interval becomes much smaller during dynamic exercise than that of the P-P interval above a heart rate (HR) of ∼100 beats min⁻¹. We hypothesized that cardiac parasympathetic outflow during dynamic exercise with a higher intensity may be better estimated using the HF component of P-P interval variability. To test this hypothesis, the HF components of both P-P and R-R interval variability were analysed using a wavelet transform during dynamic exercise. Twelve subjects performed ergometer exercise to increase HR from the baseline of 69 ± 3 beats min⁻¹ to three different levels of 100, 120 and 140 beats min⁻¹. We also examined the effect of atropine sulfate on the HF components in eight of the 12 subjects during exercise at an HR of 140 beats min⁻¹. The HF component of P-P interval variability was significantly greater than that of R-R interval variability during exercise, especially at the HRs of 120 and 140 beats min⁻¹. The HF component of P-P interval variability was more reduced by atropine than that of R-R interval variability. We conclude that cardiac parasympathetic outflow to the
On the growth estimates of entire functions of double complex variables
Directory of Open Access Journals (Sweden)
Sanjib Datta
2017-08-01
Full Text Available Recently Datta et al. (2016) introduced the idea of the relative type and relative weak type of entire functions of two complex variables with respect to another entire function of two complex variables, and proved some related growth properties. In this paper we further study some growth properties of entire functions of two complex variables on the basis of their relative types and relative weak types as introduced by Datta et al. (2016).
Gagnon, Dany H; Jouval, Camille; Chénier, Félix
2016-06-14
Using ground reaction forces recorded while propelling a manual wheelchair on an instrumented treadmill may represent a valuable alternative to using an instrumented pushrim to calculate temporal and kinetic parameters during propulsion. Sixteen manual wheelchair users propelled their wheelchair equipped with instrumented pushrims (i.e., SMARTWheel) on an instrumented dual-belt treadmill set at 1 m/s during a 1 min period. Spatio-temporal measures (i.e., duration of the push and recovery phases) and kinetic measures (i.e., propulsive moments) were calculated for 20 consecutive strokes for each participant. Strong associations were confirmed between the treadmill and the instrumented pushrim for the mean duration of the push phase (r=0.98) and of the recovery phase (r=0.99). Good agreement between these two measurement instruments was also confirmed, with mean differences of only 0.028 s for the push phase and 0.012 s for the recovery phase. Strong associations were confirmed between the instrumented wheelchair pushrim and treadmill for mean (r=0.97) and peak (r=0.96) propulsive moments. Good agreement between these two measurement instruments was also confirmed, with mean differences of 0.50 Nm (mean moment) and 0.71 Nm (peak moment). The use of a dual-belt instrumented treadmill represents an alternative for characterizing temporal parameters and propulsive moments during manual wheelchair propulsion. Copyright © 2016 Elsevier Ltd. All rights reserved.
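The two agreement statistics reported above, Pearson correlation and mean between-instrument difference, can be sketched on synthetic push-phase durations (all values are illustrative, not the study's data):

```python
import numpy as np

# Sketch: agreement between two instruments measuring the same quantity
# (pushrim vs treadmill push-phase duration, in seconds).  Synthetic data
# with an assumed small systematic offset between instruments.
rng = np.random.default_rng(1)
push_rim = rng.normal(0.9, 0.1, 16)                     # instrumented pushrim
push_tm = push_rim + rng.normal(0.028, 0.01, 16)        # treadmill, offset + noise

r = np.corrcoef(push_rim, push_tm)[0, 1]                # association
mean_diff = np.mean(push_tm - push_rim)                 # bias (Bland-Altman style)
print(round(r, 2), round(mean_diff, 3))
```

A high r with a small mean difference is what the abstract reports as "strong association" plus "good agreement"; the two statistics answer different questions and are usually reported together.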
Jo, Il-Hyun; Park, Yeonjeong; Yoon, Meehyun; Sung, Hanall
2016-01-01
The purpose of this study was to identify the relationship between the psychological variables and online behavioral patterns of students, collected through a learning management system (LMS). As the psychological variable, time and study environment management (TSEM), one of the sub-constructs of MSLQ, was chosen to verify a set of time-related…
Graham, Wendy D.; Tankersley, Claude D.
1994-05-01
Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
Directory of Open Access Journals (Sweden)
R. Moratiel
2013-06-01
Full Text Available In agricultural ecosystems the use of evapotranspiration (ET) to improve irrigation water management is widespread. Commonly, the crop ET (ETc) is estimated by multiplying the reference crop evapotranspiration (ETo) by a crop coefficient (Kc). Accurate estimation of ETo is critical because it is the main factor affecting the calculation of crop water use and water management. ETo is generally estimated from meteorological variables recorded at reference weather stations. The main objective of this paper was to assess the effect of uncertainty due to random noise in the sensors used to measure meteorological variables on the estimation of ETo, crop ET and net irrigation requirements of grain corn and alfalfa in three irrigation districts of the middle Ebro River basin. Five scenarios were simulated, four of them considering each recorded meteorological variable individually (temperature, relative humidity, solar radiation and wind speed) and a fifth combining the uncertainty of all sensors. Uncertainty in relative humidity for the irrigation districts Riegos del Alto Aragón (RAA) and Bardenas (BAR), and in temperature for the irrigation district Canal de Aragón y Cataluña (CAC), were the two most important factors affecting the estimation of ETo, corn ET (ETc_corn), alfalfa ET (ETc_alf), net corn irrigation water requirements (IRn_corn) and net alfalfa irrigation water requirements (IRn_alf). Nevertheless, this effect was never greater than ±0.5% at the annual time scale. Wind speed (Scenario 3) was the third most influential variable in the fluctuations (±) of evapotranspiration, followed by solar radiation. Considering the accuracy of all sensors at the annual time scale, the variation was about ±1% for ETo, ETc_corn, ETc_alf, IRn_corn and IRn_alf. The fluctuations of evapotranspiration were larger at shorter time scales. The daily fluctuation of ETo remained below 5% during the growing season of corn and
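The sensor-noise scenarios can be sketched as a small Monte Carlo propagation. The ETo model below is a made-up linear stand-in, not FAO-56 Penman-Monteith, and all numbers are illustrative:

```python
import numpy as np

# Sketch: propagate zero-mean sensor noise on one meteorological input
# (temperature) through an ETo calculation, then through ETc = Kc * ETo,
# and report the resulting spread.  eto_stub is a hypothetical stand-in
# for the real ETo equation; Kc and noise levels are illustrative.
rng = np.random.default_rng(2)

def eto_stub(temp_c):
    # Made-up linear ETo model (mm/day), NOT Penman-Monteith
    return 0.2 * temp_c + 1.0

kc_corn = 1.2        # crop coefficient for corn (illustrative)
temp = 25.0          # true air temperature, deg C
runs = [eto_stub(temp + rng.normal(0, 0.3)) * kc_corn for _ in range(5000)]

etc_mean = np.mean(runs)
etc_spread_pct = 100 * np.std(runs) / etc_mean   # spread as % of mean ETc
print(round(etc_mean, 2), round(etc_spread_pct, 2))
```

Repeating the loop with noise on each sensor in turn, then with all at once, reproduces the structure of the paper's five scenarios.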
DEFF Research Database (Denmark)
Henriksen, Otto Mølby; Larsson, Henrik B W; Hansen, Adam E
2012-01-01
PURPOSE: To investigate the within- and between-subject variability of quantitative cerebral blood flow (CBF) measurements in normal subjects using various MRI techniques and positron emission tomography (PET). MATERIALS AND METHODS: Repeated CBF measurements were performed in 17 healthy, young...
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline have been performed under the assumption of a constant, averaged content of useful elements. The research presented in this article aims at verifying the value of production from copper and silver ore for the same economic background, using variable cash flows resulting from the local variability of useful elements. Furthermore, the ore economic model is investigated for a significant difference between the model value estimated using a linear correlation between useful element content and the height of the mine face, and an approach in which the correlation of model parameters is based on the copula best matching an information capacity criterion. The use of a copula allows the simulation to take multivariable dependencies into account simultaneously, giving a better reflection of the dependency structure than linear correlation. Calculation results of the economic model used for deposit value estimation indicate that the correlation between copper and silver estimated with a copula generates a higher variation of possible project value than modelling the correlation with a linear correlation coefficient. The average deposit value remains unchanged.
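The contrast the abstract draws can be illustrated with a Gaussian copula (an assumption for this sketch; the study selects the copula family by an information capacity criterion), which couples arbitrary Cu and Ag grade marginals through correlated uniforms:

```python
import math
import numpy as np

# Sketch of a Gaussian copula: correlated standard normals are pushed
# through the normal CDF to get correlated uniforms, then through any
# marginal quantile function.  Copula family, rho and the exponential
# marginals are all illustrative assumptions.
rng = np.random.default_rng(3)
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=20000)

phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
u = phi(z)                                  # correlated uniforms (the copula)

cu = -1.5 * np.log(1.0 - u[:, 0])           # % Cu, illustrative marginal
ag = -40.0 * np.log(1.0 - u[:, 1])          # g/t Ag, illustrative marginal

def ranks(a):
    r = np.empty(a.size)
    r[np.argsort(a)] = np.arange(a.size)
    return r

# Rank (Spearman) correlation survives the marginal transforms,
# which is why the copula preserves the dependency structure.
rho_s = np.corrcoef(ranks(cu), ranks(ag))[0, 1]
print(round(rho_s, 2))
```

Swapping the marginals changes the grade distributions without touching the dependence, which is the separation the copula approach exploits.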
Senkel, Luise
2016-01-01
This edited book presents current research activities in the field of robust variable-structure systems. The scope comprises both novel methodological aspects and the use of variable-structure techniques in industrial applications, including their efficient implementation on hardware for real-time control. The target audience primarily comprises research experts in the field of control theory and nonlinear dynamics, but the book may also be beneficial for graduate students.
Directory of Open Access Journals (Sweden)
Hua Zhang
2016-09-01
Full Text Available The estimation of spatially variable actual evapotranspiration (AET) is a critical challenge for regional water resources management. We propose a new remote sensing method, the Triangle Algorithm with Variable Edges (TAVE), to generate daily AET estimates based on satellite-derived land surface temperature and the vegetation index NDVI. TAVE captures heterogeneity in AET across elevation zones and permits variability in determining local values of the wet and dry end-member classes (known as edges). Compared to traditional triangle methods, TAVE introduces three unique features: (i) the discretization of the domain as overlapping elevation zones; (ii) a variable wet edge that is a function of elevation zone; and (iii) variable values of a combined-effect parameter (accounting for aerodynamic and surface resistance, vapor pressure gradient, and soil moisture availability) along both wet and dry edges. With these features, TAVE effectively addresses the combined influence of terrain and water stress on AET estimates in semi-arid environments. We demonstrate the effectiveness of this method in one of the driest countries in the world, Jordan, and compare it to a traditional triangle method (TA) and a global AET product (MOD16) over different land use types. In irrigated agricultural lands, TAVE matched the results of the single crop coefficient model (−3%), in contrast to substantial overestimation by TA (+234%) and underestimation by MOD16 (−50%). In forested (non-irrigated), water consuming regions, TA and MOD16 produced AET average deviations 15.5 times and −3.5 times those based on TAVE. As TAVE has a simple structure and low data requirements, it provides an efficient means to satisfy the increasing need for evapotranspiration estimation in data-scarce semi-arid regions. This study constitutes a much-needed step towards the satellite-based quantification of agricultural water consumption in Jordan.
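The triangle idea that TAVE refines can be sketched in its simplest fixed-edge form: the evaporative fraction of each pixel is its position between the dry and wet temperature edges. The edge temperatures and potential ET value below are illustrative, and none of TAVE's elevation-zone or variable-edge refinements are included:

```python
import numpy as np

# Sketch of the basic triangle method: a pixel at the dry edge evaporates
# nothing, a pixel at the wet edge evaporates at the potential rate, and
# intermediate surface temperatures interpolate linearly between them.
ts = np.array([300.0, 310.0, 320.0])   # K, pixel land surface temperatures
ts_wet, ts_dry = 295.0, 325.0          # illustrative wet/dry edge temperatures

ef = (ts_dry - ts) / (ts_dry - ts_wet)   # evaporative fraction in [0, 1]
pet = 6.0                                # mm/day, assumed potential ET
aet = ef * pet
print(np.round(aet, 2))                  # → [5. 3. 1.]
```

TAVE's contribution is to make `ts_wet` and the combined-effect parameter functions of elevation zone instead of domain-wide constants.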
Pauwels, V. R. N.; DeLannoy, G. J. M.; Hendricks Franssen, H.-J.; Vereecken, H.
2013-01-01
In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
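A toy numeric sketch of the bias-aware idea described above: a separate low-gain filter absorbs part of each innovation as observation bias while a state filter tracks the rest. The scalar AR(1) model, gains and noise levels are illustrative assumptions, far simpler than the paper's hybrid discrete/ensemble Kalman scheme:

```python
import numpy as np

# Toy two-stage filter: the observations carry a constant unknown bias;
# a slow bias filter and a fast state filter split each innovation.
# Because the state model (AR(1) decaying to zero) is known, the bias
# is identifiable from the long-run mean of the innovations.
rng = np.random.default_rng(4)
true_bias = 2.0
x_true, x_est, b_est = 0.0, 0.0, 0.0
gain_x, gain_b = 0.3, 0.02             # illustrative gains

for _ in range(2000):
    x_true = 0.9 * x_true + rng.normal(0, 0.2)    # AR(1) truth
    y = x_true + true_bias + rng.normal(0, 0.2)   # biased, noisy observation
    x_f = 0.9 * x_est                             # model forecast of the state
    innov = y - (x_f + b_est)                     # bias-corrected innovation
    b_est += gain_b * innov                       # stage 1: observation-bias filter
    x_est = x_f + gain_x * innov                  # stage 2: state filter

print(round(b_est, 2))
```

The partitioning of the innovation between the two updates is exactly the "key issue" the abstract identifies; here it is hard-coded through the two gains.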
Directory of Open Access Journals (Sweden)
V. R. N. Pauwels
2013-09-01
Full Text Available In this paper, we present a two-stage hybrid Kalman filter to estimate both observation and forecast bias in hydrologic models, in addition to state variables. The biases are estimated using the discrete Kalman filter, and the state variables using the ensemble Kalman filter. A key issue in this multi-component assimilation scheme is the exact partitioning of the difference between observation and forecasts into state, forecast bias and observation bias updates. Here, the error covariances of the forecast bias and the unbiased states are calculated as constant fractions of the biased state error covariance, and the observation bias error covariance is a function of the observation prediction error covariance. In a series of synthetic experiments, focusing on the assimilation of discharge into a rainfall-runoff model, it is shown that both static and dynamic observation and forecast biases can be successfully estimated. The results indicate a strong improvement in the estimation of the state variables and resulting discharge as opposed to the use of a bias-unaware ensemble Kalman filter. Furthermore, minimal code modification in existing data assimilation software is needed to implement the method. The results suggest that a better performance of data assimilation methods should be possible if both forecast and observation biases are taken into account.
Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan
2015-01-01
This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
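A minimal sketch of the IPTW estimator discussed above, using the true propensity score on synthetic data for clarity. In practice the propensity is modelled, which is exactly where the paper's variable-selection concerns arise; the data-generating values here are illustrative:

```python
import numpy as np

# Sketch of IPTW for the mean outcome under treatment, E[Y(1)]:
# treated subjects are weighted by the inverse of their propensity
# score so the weighted sample mimics everyone receiving treatment.
rng = np.random.default_rng(5)
n = 50000
conf = rng.normal(0, 1, n)                         # confounder
p_treat = 1 / (1 + np.exp(-conf))                  # true propensity score
a = rng.binomial(1, p_treat)                       # treatment indicator
y = 1.0 * a + 2.0 * conf + rng.normal(0, 1, n)     # outcome; E[Y(1)] = 1.0

# Horvitz-Thompson IPTW estimate of E[Y(1)]
mean_y1 = np.sum(a * y / p_treat) / n
print(round(mean_y1, 2))
```

Replacing `p_treat` with an overfit or ill-chosen model (e.g. one adjusting for pure causes of treatment) inflates the weights and the variance, which is the failure mode the simulation study examines.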
Yokoi, Toshiyuki; Itoh, Michimasa; Oguri, Koji
Most traffic accidents are caused by an inappropriate mental state of the driver. Driver monitoring is therefore one of the most important challenges in preventing traffic accidents. Some studies evaluating the driver's mental state while driving have been reported; however, the driver's mental state should ultimately be estimated in real time. This paper proposes a way to quantitatively estimate the driver's mental workload using heart rate variability. It is assumed that tolerance to mental workload differs between individuals; we therefore classify people based on their individual tolerance to mental workload. Our estimation method is multiple linear regression analysis, and we compare it to NASA-TLX, which is used as the evaluation method for subjective mental workload. As a result, the correlation coefficient improved from 0.83 to 0.91, and the standard deviation of the error also improved. Our proposed method therefore demonstrates the possibility of estimating mental workload.
Fetterly, Kenneth A; Favazza, Christopher P
2016-08-07
Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object (d′o) and non-stationary noise (d′ns). Given the nature of the imaging system and the experimental methods, d′o cannot be directly determined independent of d′ns. However, methods to estimate d′ns independent of d′o were developed. In accordance with the theory, d′ns was subtracted from experimental estimates of d′, providing an unbiased estimate of d′o. Estimates of d′o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′o estimates which are accurate and precise. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the
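The additive bias correction described above reduces to a subtraction once the non-stationary-noise term has been estimated independently; the numbers here are purely illustrative:

```python
# Toy sketch of the additive bias correction: the measured detectability
# index is the sum of an object term and a non-stationary-noise term, so
# an independent estimate of the noise term is subtracted.  Values are
# illustrative, not the study's measurements.
d_measured = 3.4   # biased detectability index from the CHO
d_ns = 0.9         # estimated contribution of temporally variable noise
d_object = d_measured - d_ns   # unbiased object detectability
print(d_object)    # → 2.5
```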
CSIR Research Space (South Africa)
Ramoelo, Abel
2011-04-01
Full Text Available Information about the distribution of grass nitrogen (N) concentration is crucial in understanding rangeland vitality and facilitates effective management of wildlife and livestock. A challenge in estimating grass N concentration using remote...
Estimates for Future Tenure, Satisfaction and Biographical Variables as Predictors of Termination
Waters, L. K.; And Others
1976-01-01
Findings of this study indicate that an employee's estimate of his/her future tenure with a company is a more reliable indication of actual tenure than job satisfaction or biographical factors. (Author/RW)
Sensitivity of potential evaporation estimates to 100 years of climate variability
Bartholomeus, R.P.; Stagge, J.H.; Tallaksen, L.M.; Witte, J.P.M.
2015-01-01
Hydrological modeling frameworks require an accurate representation of evaporation fluxes for appropriate quantification of, e.g., the water balance, soil moisture budget, recharge and groundwater processes. Many frameworks have used the concept of potential evaporation, often estimated for
Bayat, Bardia; Zahraie, Banafsheh; Taghavi, Farahnaz; Nasseri, Mohsen
2013-08-01
Identification of spatial and spatiotemporal precipitation variations plays an important role in different hydrological applications such as missing data estimation. In this paper, the results of Bayesian maximum entropy (BME) and ordinary kriging (OK) are compared for modeling spatial and spatiotemporal variations of annual precipitation with and without incorporating elevation variations. The study area of this research is the Namak Lake watershed, located in the central part of Iran, with an area of approximately 90,000 km². The BME and OK methods have been used to model the spatial and spatiotemporal variations of precipitation in this watershed, and their performances have been evaluated using cross-validation statistics. The results of the case study have shown the superiority of BME over OK in both spatial and spatiotemporal modes. The results have shown that BME estimates are less biased and more accurate than OK. The improvements in the BME estimates are mostly related to incorporating hard and soft data in the estimation process, which resulted in more detailed and reliable results. The estimation error variance for BME is less than for OK in the study area in both spatial and spatiotemporal modes.
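The cross-validation comparison described above can be sketched as a leave-one-out loop. An inverse-distance-weighted predictor stands in for OK/BME here (an assumption; both methods are far richer), and the station data are synthetic:

```python
import numpy as np

# Sketch of leave-one-out cross-validation for a spatial interpolator:
# drop one station, predict it from the rest, accumulate the errors,
# and report bias and RMSE -- the statistics used to compare methods.
rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, size=(30, 2))                   # station coords, km
precip = 300 + 2.0 * xy[:, 0] + rng.normal(0, 10, 30)    # mm/yr, synthetic

errors = []
for i in range(len(precip)):
    d = np.linalg.norm(xy - xy[i], axis=1)
    mask = np.arange(len(precip)) != i                   # leave station i out
    w = 1.0 / d[mask] ** 2                               # inverse-distance weights
    pred = np.sum(w * precip[mask]) / np.sum(w)
    errors.append(pred - precip[i])

errors = np.asarray(errors)
bias = errors.mean()
rmse = np.sqrt((errors ** 2).mean())
print(round(bias, 1), round(rmse, 1))
```

Running the same loop with two interpolators and comparing their bias and RMSE is the structure of the paper's evaluation.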
Directory of Open Access Journals (Sweden)
Patrick McNamara
2010-01-01
Results. Patients' estimates of their own social functioning were not significantly different from examiners' estimates. Among the clinical variables, depression showed the strongest association with social functioning in PD on both the patient and the examiner version of the Social Adaptation Self-Evaluation Scale. Conclusions. PD patients appear to be well aware of their social strengths and weaknesses. Depression and motor symptom severity are significant predictors of both self-reported and examiner-reported social functioning in patients with PD. Assessment and treatment of depression in patients with PD may improve social functioning and overall quality of life.
Directory of Open Access Journals (Sweden)
Carlos Poblete-Echeverría
2015-01-01
Full Text Available Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations. The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. The leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAID), which was compared with the LAI estimated by the proposed digital photography method (LAIM). Results showed that LAIM was able to estimate LAID with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (ff) derived from the images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results show that using a proxy k value, estimated from Ic, helped to increase the accuracy of LAI estimates from digital cover images for apple trees with different canopy sizes under field conditions.
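A sketch of the gap-fraction inversion that underlies such cover-photography methods: with incident radiation Io and transmitted radiation I, a Beer-Lambert relation gives LAI = -ln(I/Io)/k. The radiation values below are illustrative; the abstract's k = 0.68 is used as the fixed coefficient:

```python
import numpy as np

# Sketch of the Beer-Lambert inversion for LAI from canopy light
# transmission.  Io and I are illustrative readings; k = 0.68 is the
# constant extinction coefficient quoted in the abstract.
io, i_trans = 1000.0, 250.0            # W m-2, incident and transmitted
k_fixed = 0.68

lai_fixed = -np.log(i_trans / io) / k_fixed
print(round(lai_fixed, 2))             # → 2.04
```

The study's refinement is to replace the fixed `k_fixed` with a value derived per tree from foliage cover or intercepted light, which is what drives the error down from 25% to 9%.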
Poblete-Echeverría, Carlos; Fuentes, Sigfredo; Ortega-Farias, Samuel; Gonzalez-Talice, Jaime; Yuri, Jose Antonio
2015-01-28
Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations. The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. The leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAI(D)), which was compared with the LAI estimated by the proposed digital photography method (LAI(M)). Results showed that LAI(M) was able to estimate LAI(D) with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (f(f)) derived from the images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results show that using a proxy k value, estimated from Ic, helped to increase the accuracy of LAI estimates from digital cover images for apple trees with different canopy sizes under field conditions.
Poblete-Echeverría, Carlos; Fuentes, Sigfredo; Ortega-Farias, Samuel; Gonzalez-Talice, Jaime; Yuri, Jose Antonio
2015-01-01
Leaf area index (LAI) is one of the key biophysical variables required for crop modeling. Direct LAI measurements are time consuming and difficult to obtain for experimental and commercial fruit orchards. Devices used to estimate LAI have shown considerable errors when compared to ground-truth or destructive measurements, requiring tedious site-specific calibrations. The objective of this study was to test the performance of a modified digital cover photography method to estimate LAI in apple trees using conventional digital photography and instantaneous measurements of incident radiation (Io) and transmitted radiation (I) through the canopy. The leaf area of 40 single apple trees was measured destructively to obtain the real leaf area index (LAID), which was compared with the LAI estimated by the proposed digital photography method (LAIM). Results showed that LAIM was able to estimate LAID with an error of 25% using a constant light extinction coefficient (k = 0.68). However, when k was estimated using an exponential function based on the fraction of foliage cover (ff) derived from the images, the error was reduced to 18%. Furthermore, when measurements of light intercepted by the canopy (Ic) were used as a proxy value for k, the method presented an error of only 9%. These results show that using a proxy k value, estimated from Ic, helped to increase the accuracy of LAI estimates from digital cover images for apple trees with different canopy sizes under field conditions. PMID:25635411
Choi, S.; Joiner, J.; Krotkov, N. A.; Choi, Y.; Duncan, B. N.; Celarier, E. A.; Bucsela, E. J.; Vasilkov, A. P.; Strahan, S. E.; Veefkind, J. P.; Cohen, R. C.; Weinheimer, A. J.; Pickering, K. E.
2013-12-01
Total column measurements of NO2 from space-based sensors are of interest to the atmospheric chemistry and air quality communities; the relatively short lifetime of near-surface NO2 produces satellite-observed hot-spots near pollution sources including power plants and urban areas. However, estimates of NO2 concentrations in the free-troposphere, where lifetimes are longer and the radiative impact through ozone formation is larger, are severely lacking. Such information is critical to evaluate chemistry-climate and air quality models that are used for prediction of the evolution of tropospheric ozone and its impact on climate and air quality. Here, we retrieve free-tropospheric NO2 volume mixing ratio (VMR) using the cloud slicing technique. We use cloud optical centroid pressures (OCPs) as well as collocated above-cloud vertical NO2 columns (defined as the NO2 column from top of the atmosphere to the cloud OCP) from the Ozone Monitoring Instrument (OMI). The above-cloud NO2 vertical columns used in our study are retrieved independent of a priori NO2 profile information. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud optical centroid pressure is proportional to the NO2 volume mixing ratio (VMR) for a given pressure (altitude) range. We retrieve NO2 volume mixing ratios and compare the obtained NO2 VMRs with in-situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is good when proper data screening is applied. In addition, the OMI cloud slicing reports a high NO2 VMR where the aircraft reported lightning NOx during the Deep Convection Clouds and Chemistry (DC3) campaign in 2012. We also provide a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the
Directory of Open Access Journals (Sweden)
Paul S Lavery
The recent focus on carbon trading has intensified interest in 'Blue Carbon' - carbon sequestered by coastal vegetated ecosystems, particularly seagrasses. Most information on seagrass carbon storage is derived from studies of a single species, Posidonia oceanica, from the Mediterranean Sea. We surveyed 17 Australian seagrass habitats to assess the variability in their sedimentary organic carbon (Corg) stocks. The habitats encompassed 10 species, in mono-specific or mixed meadows, depositional to exposed habitats and temperate to tropical habitats. There was an 18-fold difference in the Corg stock (1.09-20.14 mg Corg cm-3) for a temperate Posidonia sinuosa and a temperate, estuarine P. australis meadow, respectively. Integrated over the top 25 cm of sediment, this equated to an areal stock of 262-4833 g Corg m-2. For some species, there was an effect of water depth on the Corg stocks, with greater stocks in deeper sites; no differences were found among sub-tidal and inter-tidal habitats. The estimated carbon storage in Australian seagrass ecosystems, taking into account inter-habitat variability, was 155 Mt. At a 2014-15 fixed carbon price of A$25.40 t-1 and an estimated market price of A$35 t-1 in 2020, the Corg stock in the top 25 cm of seagrass habitats has a potential value of A$3.9-5.4 billion. The estimates of annual Corg accumulation by Australian seagrasses ranged from 0.093 to 6.15 Mt, with a most probable estimate of 0.93 Mt y-1 (10.1 t km-2 y-1). These estimates, while large, were one-third of those that would be calculated if inter-habitat variability in carbon stocks were not taken into account. We conclude that there is an urgent need for more information on the variability in seagrass carbon stocks and accumulation rates, and the factors driving this variability, in order to improve global estimates of seagrass Blue Carbon storage.
Sumnall, Matthew J.; Hill, Ross A.; Hinsley, Shelley A.
2016-01-01
The quantification of forest ecosystems is important for a variety of purposes, including the assessment of wildlife habitat, nutrient cycles, timber yield and fire propagation. This research assesses the estimation of forest structure, composition and deadwood variables from small-footprint airborne lidar data, both discrete return (DR) and full waveform (FW), acquired under leaf-on and leaf-off conditions. The field site, in the New Forest, UK, includes managed plantation and ancient, se...
David Shilane
2013-01-01
The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon gamma and chi-square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
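The direct zero-removal variant described above can be sketched as follows; the removal count m and the normal quantile z are illustrative placeholders, not the paper's tuned choices:

```python
import math
import statistics

def growth_ci(data, m=1, z=1.96):
    """Normal-style confidence interval for the mean after removing
    up to m zeros from the sample (a sketch of the 'direct removal'
    growth estimator; m and z are illustrative defaults)."""
    trimmed = list(data)
    removed = 0
    while removed < m and 0 in trimmed:
        trimmed.remove(0)  # drop one overrepresented zero
        removed += 1
    n = len(trimmed)
    mean = statistics.fmean(trimmed)
    se = statistics.stdev(trimmed) / math.sqrt(n)
    return mean - z * se, mean + z * se
```

Because only a fixed number of zeros is removed, the adjusted mean converges to the ordinary sample mean as n grows, matching the asymptotic behavior noted in the abstract.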
Can, Seda; van de Schoot, Rens; Hox, Joop
2015-01-01
Because variables may be correlated in the social and behavioral sciences, multicollinearity might be problematic. This study investigates the effect of collinearity manipulated in within and between levels of a two-level confirmatory factor analysis by Monte Carlo simulation. Furthermore, the
Deckers, Dave L.E.H.; Booij, Martijn J.; Rientjes, T.H.M.; Krol, Martinus S.
2010-01-01
This study examines whether catchment variability favours regionalisation by principles of catchment similarity. Our work combines calibration of a simple conceptual model for multiple objectives and multi-regression analyses to establish a regional model between model sensitive parameters and
Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data
Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus
2005-01-01
The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...
Toward ambulatory balance assessment: Estimating variability and stability from short bouts of gait
van Schooten, K.S.; Rispens, S.M.; Elders, P.J.M.; van Dieen, J.H.; Pijnappels, M.A.G.M.
2014-01-01
Stride-to-stride variability and local dynamic stability of gait kinematics are promising measures to identify individuals at increased risk of falling. This study aimed to explore the feasibility of using these metrics in clinical practice and ambulatory assessment, where only a small number of
Estimation of storm runoff loads based on rainfall-related variables ...
African Journals Online (AJOL)
2004-11-19
… rainfall-related variables and power law models – Case study in Alexandra … and appropriate technology for treating runoff and grey-water. To achieve this … schools, and other open spaces take up 20% of the area. If the.
Year-round estimation of soil moisture content using temporally variable soil hydraulic parameters
Czech Academy of Sciences Publication Activity Database
Šípek, Václav; Tesař, Miroslav
2017-01-01
Vol. 31, No. 6 (2017), pp. 1438-1452. ISSN 0885-6087. R&D Projects: GA ČR GA16-05665S. Institutional support: RVO:67985874. Keywords: hydrological modelling; pore-size distribution; saturated hydraulic conductivity; seasonal variability; soil hydraulic parameters; soil moisture. Subject RIV: DA - Hydrology; Limnology. OECD field: Hydrology. Impact factor: 3.014, year: 2016
Using Derivative Estimates to Describe Intraindividual Variability at Multiple Time Scales
Deboeck, Pascal R.; Montpetit, Mignon A.; Bergeman, C. S.; Boker, Steven M.
2009-01-01
The study of intraindividual variability is central to the study of individuals in psychology. Previous research has related the variance observed in repeated measurements (time series) of individuals to traitlike measures that are logically related. Intraindividual measures, such as intraindividual standard deviation or the coefficient of…
Kisi, Ozgur; Shiri, Jalal
2012-06-01
Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers from hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models; of the three, the conjugate gradient algorithm performed best.
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise are accurately known prior to the measurement so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
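For the specific parameter settings mentioned, the procedure reduces to the DER_SNR estimator: a median of second differences taken two samples apart, scaled by an unbiasing constant for Gaussian noise. A sketch under those assumptions (well-sampled signal, independent normal noise):

```python
import statistics

def der_snr_noise(flux):
    """Estimate the noise standard deviation of a well-sampled signal.

    Uses the DER_SNR form: median of |2*f[i] - f[i-2] - f[i+2]|,
    scaled by 1.482602 / sqrt(6) so it is unbiased for Gaussian noise.
    Smooth trends cancel in the second difference, so the estimate
    responds to the stochastic component only.
    """
    n = len(flux)
    if n < 5:
        raise ValueError("need at least 5 samples")
    diffs = [abs(2.0 * flux[i] - flux[i - 2] - flux[i + 2])
             for i in range(2, n - 2)]
    return 1.482602 / 6 ** 0.5 * statistics.median(diffs)
```

A linear ramp yields zero estimated noise, since the second differences vanish on any locally linear signal.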
International Nuclear Information System (INIS)
Ahmed, Fayez Shakil; Laghrouche, Salah; Mehmood, Adeel; El Bagdouri, Mohammed
2014-01-01
Highlights: • Estimation of aerodynamic force on variable turbine geometry vanes and actuator. • Method based on exhaust gas flow modeling. • Simulation tool for integration of aerodynamic force in automotive simulation software. Abstract: This paper provides a reliable tool for simulating the effects of exhaust gas flow through the variable turbine geometry section of a variable geometry turbocharger (VGT) on the flow control mechanism. The main objective is to estimate the resistive aerodynamic force exerted by the flow upon the variable geometry vanes and the controlling actuator, in order to improve the control of vane angles. To achieve this, a 1D model of the exhaust flow is developed using Navier–Stokes equations. As the flow characteristics depend upon the volute geometry, impeller blade force and the existing viscous friction, the related source terms (losses) are also included in the model. In order to guarantee stability, an implicit numerical solver has been developed for the resolution of the Navier–Stokes problem. The resulting simulation tool has been validated through comparison with experimentally obtained values of turbine inlet pressure and the aerodynamic force as measured at the actuator shaft. The simulator shows good compliance with experimental results.
DEFF Research Database (Denmark)
Rein, Arno; Bauer, S; Dietrich, P
2009-01-01
Monitoring of contaminant concentrations, e.g., for the estimation of mass discharge or contaminant degradation rates, often is based on point measurements at observation wells. In addition to the problem that point measurements may not be spatially representative, a further complication may ari...
Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data
Keller, Bryan; Chen, Jianshen
2016-01-01
Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…
Stimulus-specific variability in color working memory with delayed estimation.
Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Wilson, Colin; Flombaum, Jonathan I
2014-04-08
Working memory for color has been the central focus in an ongoing debate concerning the structure and limits of visual working memory. Within this area, the delayed estimation task has played a key role. An implicit assumption in color working memory research generally, and delayed estimation in particular, is that the fidelity of memory does not depend on color value (and, relatedly, that experimental colors have been sampled homogeneously with respect to discriminability). This assumption is reflected in the common practice of collapsing across trials with different target colors when estimating memory precision and other model parameters. Here we investigated whether or not this assumption is secure. To do so, we conducted delayed estimation experiments following standard practice with a memory load of one. We discovered that different target colors evoked response distributions that differed widely in dispersion and that these stimulus-specific response properties were correlated across observers. Subsequent experiments demonstrated that stimulus-specific responses persist under higher memory loads and that at least part of the specificity arises in perception and is eventually propagated to working memory. Posthoc stimulus measurement revealed that rendered stimuli differed from nominal stimuli in both chromaticity and luminance. We discuss the implications of these deviations for both our results and those from other working memory studies.
Observation-Driven Estimation of the Spatial Variability of 20th Century Sea Level Rise
Hamlington, B. D.; Burgos, A.; Thompson, P. R.; Landerer, F. W.; Piecuch, C. G.; Adhikari, S.; Caron, L.; Reager, J. T.; Ivins, E. R.
2018-03-01
Over the past two decades, sea level measurements made by satellites have given clear indications of both global and regional sea level rise. Numerous studies have sought to leverage the modern satellite record and available historic sea level data provided by tide gauges to estimate past sea level rise, leading to several estimates for the 20th century trend in global mean sea level in the range between 1 and 2 mm/yr. On regional scales, few attempts have been made to estimate trends over the same time period. This is due largely to the inhomogeneity and quality of the tide gauge network through the 20th century, which render commonly used reconstruction techniques inadequate. Here, a new approach is adopted, integrating data from a select set of tide gauges with prior estimates of spatial structure based on historical sea level forcing information from the major contributing processes over the past century. The resulting map of 20th century regional sea level rise is optimized to agree with the tide gauge-measured trends, and provides an indication of the likely contributions of different sources to regional patterns. Of equal importance, this study demonstrates the sensitivities of this regional trend map to current knowledge and uncertainty of the contributing processes.
Reinhard, S.; Lovell, C.A.K.; Thijssen, G.J.
2000-01-01
The objective of this paper is to estimate comprehensive environmental efficiency measures for Dutch dairy farms. The environmental efficiency scores are based on the nitrogen surplus, phosphate surplus and the total (direct and indirect) energy use of an unbalanced panel of dairy farms. We define
Interannual variability of carbon monoxide emission estimates over South America from 2006 to 2010
Hooghiemstra, P.B.; Krol, M.C.; Leeuwen, van T.T.; Werf, van der G.R.; Novelli, P.C.; Deeter, M.N.; Aben, I.; Rockmann, T.
2012-01-01
We present the first inverse modeling study to estimate CO emissions constrained by both surface and satellite observations. Our 4D-Var system assimilates National Oceanic and Atmospheric Administration Earth System Research Laboratory (NOAA/ESRL) Global Monitoring Division (GMD) surface and
Estimating down dead wood from FIA forest inventory variables in Maine
David C. Chojnacky; Linda S. Heath
2002-01-01
Down deadwood (DDW) is a carbon component important in the function and structure of forest ecosystems, but estimating DDW is problematic because these data are not widely available in forest inventory databases. However, DDW data were collected on USDA Forest Service Forest Inventory and Analysis (FIA) plots during Maine's 1995 inventory. This study examines ways...
Genetic variability and heritability estimates of some polygenic traits in upland cotton
Baloch, M.J.
2004-01-01
Plant breeders are more interested in genetic variance than phenotypic variance because it is amenable to selection and brings further improvement in the character. Twenty-eight F2 progenies were tested in two environments so as to predict genetic variances, heritability estimates and genetic gains. Mean squares for locations were significant for all five traits, suggesting that genotypes performed differently under varying environments. Genetic variances, in most cases, however, were about equal to phenotypic variances, consequently giving high heritability estimates and significant genetic gains. The broad-sense heritability estimates were 94.2, 92.9, 33.6, 81.9 and 86.9%, and genetic gains were 30.19, 10.55, 0.20, 0.89 and 1.76 for seed cotton yield, bolls per plant, lint %, fibre length and fibre uniformity ratio, respectively. Substantial genetic variances and high heritability estimates implied that these characters could be improved through selection from segregating populations. (author)
Xin Lu
2018-03-01
In recent years, fractional order models have been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of charge distribution; therefore the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC after the battery's internal electrochemical reaction reaches balance. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
Stefanović Milena
2013-01-01
In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, Nos. OI-173011, TR-37002 and III-43007]
Zhonggang, Liang; Hong, Yan
2006-10-01
A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on the wavelet transform and filter banks. The implementation is as follows: first, we extract the fractal component from HRV signals using the wavelet transform. Next, we estimate the power spectrum distribution of the fractal component using an auto-regressive model and estimate the parameter γ using the least-squares method. Finally, according to the formula D = 2 - (γ - 1)/2, we estimate the fractal dimension of the HRV signal. To validate the stability and reliability of the proposed method, 24 fractal signals with a fractal dimension of 1.6 were simulated using fractional Brownian motion; the results show that the method is stable and reliable.
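The final two steps, fitting the spectral exponent γ by least squares and applying D = 2 - (γ - 1)/2, can be sketched on an idealized 1/f^γ spectrum (the wavelet and AR-model stages are omitted; function names are ours):

```python
import math

def spectral_exponent(freqs, powers):
    """Least-squares slope of log(power) vs log(frequency).

    For a 1/f**gamma process the fitted slope is -gamma, so we
    return its negation as the estimate of gamma.
    """
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

def fractal_dimension(gamma):
    """Map the spectral exponent to a dimension: D = 2 - (gamma - 1)/2."""
    return 2.0 - (gamma - 1.0) / 2.0
```

On an exact f^-1.6 spectrum the fit recovers γ = 1.6, giving D = 1.7, consistent with the relation quoted in the abstract.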
Kärcher, Bernd; Burkhardt, Ulrike; Ponater, Michael; Frömming, Christine
2010-11-09
Estimates of the global radiative forcing by line-shaped contrails differ mainly due to the large uncertainty in contrail optical depth. Most contrails are optically thin so that their radiative forcing is roughly proportional to their optical depth and increases with contrail coverage. In recent assessments, the best estimate of mean contrail radiative forcing was significantly reduced, because global climate model simulations pointed at lower optical depth values than earlier studies. We revise these estimates by comparing the probability distribution of contrail optical depth diagnosed with a climate model with the distribution derived from a microphysical, cloud-scale model constrained by satellite observations over the United States. By assuming that the optical depth distribution from the cloud model is more realistic than that from the climate model, and by taking the difference between the observed and simulated optical depth over the United States as globally representative, we quantify uncertainties in the climate model's diagnostic contrail parameterization. Revising the climate model results accordingly increases the global mean radiative forcing estimate for line-shaped contrails by a factor of 3.3, from 3.5 mW/m2 to 11.6 mW/m2 for the year 1992. Furthermore, the satellite observations and the cloud model point at higher global mean optical depth of detectable contrails than often assumed in radiative transfer (off-line) studies. Therefore, we correct estimates of contrail radiative forcing from off-line studies as well. We suggest that the global net radiative forcing of line-shaped persistent contrails is in the range 8-20 mW/m2 for the air traffic in the year 2000.
Estimation of intra-operator variability in perfusion parameter measurements using DCE-US.
Gauthier, Marianne; Leguerney, Ingrid; Thalmensi, Jessie; Chebil, Mohamed; Parisot, Sarah; Peronneau, Pierre; Roche, Alain; Lassau, Nathalie
2011-03-28
To investigate intra-operator variability of semi-quantitative perfusion parameters using dynamic contrast-enhanced ultrasonography (DCE-US), following bolus injections of SonoVue®. The in vitro experiments were conducted using three in-house set-ups based on pumping a fluid through a phantom placed in a water tank. In the in vivo experiments, B16F10 melanoma cells were xenografted to five nude mice. Both in vitro and in vivo, images were acquired following bolus injections of the ultrasound contrast agent SonoVue® (Bracco, Milan, Italy) and using a Toshiba Aplio® ultrasound scanner connected to a 2.9-5.8 MHz linear transducer (PZT, PLT 604AT probe) (Toshiba, Japan) allowing harmonic imaging ("Vascular Recognition Imaging") involving linear raw data. A mathematical model based on the dye-dilution theory was developed by the Gustave Roussy Institute, Villejuif, France, and used to evaluate seven perfusion parameters from time-intensity curves. Intra-operator variability analyses were based on determining perfusion parameter coefficients of variation (CV). In vitro, different volumes of SonoVue® were tested with the three phantoms: intra-operator variability was found to range from 2.33% to 23.72%. In vivo, experiments were performed on tumor tissues and perfusion parameters exhibited values ranging from 1.48% to 29.97%. In addition, the area under the curve (AUC) and the area under the wash-out (AUWO) were two parameters of great interest since throughout the in vitro and in vivo experiments their variability was lower than 15.79%. AUC and AUWO appear to be the most reliable parameters for assessing tumor perfusion using DCE-US as they exhibited the lowest CV values.
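The variability analyses above rest on the coefficient of variation of repeated parameter measurements; a minimal sketch of that computation:

```python
import statistics

def coefficient_of_variation(values):
    """Coefficient of variation in percent: 100 * sd / |mean|,
    the metric used to quantify intra-operator variability of a
    repeatedly measured perfusion parameter."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("CV is undefined for a zero mean")
    return 100.0 * statistics.stdev(values) / abs(mean)
```

For example, three repeated measurements of 8, 10 and 12 (arbitrary units) give a CV of 20%.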
Ward, N. K.; Maureira, F.; Yourek, M. A.; Brooks, E. S.; Stockle, C. O.
2014-12-01
The current use of synthetic nitrogen fertilizers in agriculture has many negative environmental and economic costs, necessitating improved nitrogen management. In the highly heterogeneous landscape of the Palouse region in eastern Washington and northern Idaho, crop nitrogen needs vary widely within a field. Site-specific nitrogen management is a promising strategy to reduce excess nitrogen lost to the environment while maintaining current yields by matching crop needs with inputs. This study used in-situ hydrologic, nutrient, and crop yield data from a heavily instrumented field site in the high precipitation zone of the wheat-producing Palouse region to assess the performance of the MicroBasin model. MicroBasin is a high-resolution watershed-scale ecohydrologic model with nutrient cycling and cropping algorithms based on the CropSyst model. Detailed soil mapping conducted at the site was used to parameterize the model and the model outputs were evaluated with observed measurements. The calibrated MicroBasin model was then used to evaluate the impact of various nitrogen management strategies on crop yield and nitrate losses. The strategies include uniform application as well as delineating the field into multiple zones of varying nitrogen fertilizer rates to optimize nitrogen use efficiency. We present how coupled modeling and in-situ data sets can inform agricultural management and policy to encourage improved nitrogen management.
Blonda, Palma; Maso, Joan; Bombelli, Antonio; Plag, Hans Peter; McCallum, Ian; Serral, Ivette; Nativi, Stefano Stefano
2016-04-01
ConnectinGEO ("Coordinating an Observation Network of Networks EnCompassing saTellite and IN-situ to fill the Gaps in European Observations") is an H2020 Coordination and Support Action with the primary goal of linking existing Earth Observation networks with science and technology (S&T) communities, the industry sector, the Group on Earth Observations (GEO), and Copernicus. The project will end in February 2017. Essential Variables (EVs) are defined by ConnectinGEO as "a minimal set of variables that determine the system's state and developments, are crucial for predicting system developments, and allow us to define metrics that measure the trajectory of the system". Specific application-dependent characteristics, such as spatial and temporal resolution of observations and data quality thresholds, are not generally included in the EV definition. This definition and the present status of EV developments in different societal benefit areas were elaborated at the ConnectinGEO workshop "Towards a sustainability process for GEOSS Essential Variables (EVs)," which was held in Bari on June 11-12, 2015 (http://www.gstss.org/2015_Bari/). Presentations and reports contributed by a wide range of communities provided important inputs from different sectors for assessing the status of EV development. In most thematic areas, the development of sets of EVs is a community process leading to an agreement on what is essential for the goals of the community. While there are many differences across the communities in the details of the criteria, methodologies and processes used to develop sets of EVs, there is also a considerable common core across the communities, particularly those with a more advanced discussion. In particular, there is some level of overlap in different topics (e.g., Climate and Water), and there is a potential to develop an integrated set of EVs common to several thematic areas as well as specific ones that satisfy only one community. The thematic areas with
Carbon Dioxide Evasion from Boreal Lakes: Drivers, Variability and Revised Global Estimate
Hastie, A. T.; Lauerwald, R.; Weyhenmeyer, G. A.; Sobek, S.; Verpoorter, C.; Regnier, P. A. G.
2016-12-01
Carbon dioxide evasion (FCO2) from lakes and reservoirs is established as an important component of the global carbon (C) cycle, a fact reflected by the inclusion of these waterbodies in the most recent IPCC assessment report. In this study we developed a statistical model driven by environmental geodata, to predict CO2 partial pressure (pCO2) in boreal lakes, and to create the first high resolution map (0.5°) of boreal (50°-70°) lake pCO2. The resulting map of pCO2 was combined with lake area (lakes >0.01 km2) from the recently developed GLOWABO database (Verpoorter et al., 2014) and estimates of gas transfer velocity k, to produce the first high resolution map of boreal lake FCO2. Before training our model, the geodata as well as approximately 27,000 samples of 'open water' (excluding periods of ice cover) pCO2 from the boreal region, were gridded at 0.5° resolution and log transformed where necessary. A multilinear regression was used to derive a prediction equation for log10 pCO2 as a function of log10 lake area, net primary productivity (NPP), precipitation, wind speed and soil pH (r2 = 0.66), and then applied in ArcGIS to build the map of pCO2. After validation, the map of boreal lake pCO2 was used to derive a map of boreal lake FCO2. For the boreal region we estimate an average, lake area weighted, pCO2 of 930 μatm and FCO2 of 170 (121-243) Tg C yr-1. Our estimate of FCO2 will soon be updated with the incorporation of the smallest lakes (<0.01 km2). Despite the current exclusion of the smallest lakes, our estimate is higher than the highest previous estimate of approximately 110 Tg C yr-1 (Aufdenkampe et al, 2011). Moreover, our empirical approach driven by environmental geodata can be used as the basis for estimating future FCO2 from boreal lakes, and their sensitivity to climate change.
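Converting a mapped pCO2 to an evasion flux uses the standard gas-exchange relation, flux = k × K0 × (pCO2,water - pCO2,air). The sketch below is illustrative only; the parameter names, units and example values (including the Henry solubility K0) are assumptions, not the study's calibrated numbers:

```python
def co2_evasion_flux(k_m_per_day, k0_mol_m3_uatm, pco2_water_uatm,
                     pco2_air_uatm=400.0):
    """Air-water CO2 flux (mol m-2 d-1) as gas transfer velocity k
    times the concentration gradient, where the Henry solubility K0
    converts the pCO2 difference to a concentration difference.
    All parameter values here are illustrative assumptions.
    """
    return k_m_per_day * k0_mol_m3_uatm * (pco2_water_uatm - pco2_air_uatm)

# Illustrative: k = 0.5 m/d, K0 = 4e-5 mol m-3 uatm-1, lake pCO2 = 930 uatm
flux = co2_evasion_flux(0.5, 4e-5, 930.0)
```

A supersaturated lake (pCO2 above the atmospheric value) yields a positive flux, i.e. evasion to the atmosphere, matching the sign convention implied by the abstract.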
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gan, L.; Yang, F.; Shi, Y. F.; He, H. L.
2017-11-01
Many applications, such as the rapidly developing electric vehicle, require knowing how much continuous and instantaneous power a battery can provide. Given the large-scale application of lithium-ion batteries, we take lithium-ion batteries as our research object. Many experiments were designed to obtain the lithium-ion battery parameters and to ensure the relevance and reliability of the estimation. To evaluate the continuous and instantaneous load capability of a battery, called its state-of-function (SOF), this paper proposes a fuzzy logic algorithm based on battery state-of-charge (SOC), state-of-health (SOH) and C-rate parameters. Simulation and experimental results indicate that the proposed approach is suitable for battery SOF estimation.
Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric
2017-01-01
This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...
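A minimal numpy sketch of the density evaluation in a kernel mixture network: the conditional density is a softmax-weighted sum of kernels at fixed centers. The deep network that produces the logits is omitted, and the centers, bandwidth and logits below are hypothetical.

```python
import numpy as np

def kernel_mixture_density(y, centers, bandwidth, logits):
    """p(y|x) as a softmax-weighted sum of Gaussian kernels centred at a
    subset of training points; logits stand in for the outer-layer output
    of the neural network evaluated at x."""
    w = np.exp(logits - logits.max())
    w /= w.sum()  # softmax -> mixture weights summing to one
    k = np.exp(-0.5 * ((y - centers) / bandwidth) ** 2) \
        / (bandwidth * np.sqrt(2.0 * np.pi))
    return float(np.sum(w * k))

centers = np.array([-1.0, 0.0, 1.0])  # kernel centres (training points)
logits = np.array([0.1, 2.0, -1.0])   # hypothetical network output
density = kernel_mixture_density(0.0, centers, 0.5, logits)
```

Because the weights sum to one and each kernel is a proper density, the mixture integrates to one for any logits, which is what makes training by negative log likelihood well posed.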
Remaining useful life estimation in heterogeneous fleets working under variable operating conditions
International Nuclear Information System (INIS)
Al-Dahidi, Sameer; Di Maio, Francesco; Baraldi, Piero; Zio, Enrico
2016-01-01
The availability of condition monitoring data for large fleets of similar equipment motivates the development of data-driven prognostic approaches that capitalize on the information contained in such data to estimate equipment Remaining Useful Life (RUL). A main difficulty is that the fleet of equipment typically experiences different operating conditions, which influence both the condition monitoring data and the degradation processes that physically determine the RUL. We propose an approach for RUL estimation from heterogeneous fleet data based on three phases: firstly, the degradation levels (states) of a homogeneous discrete-time finite-state semi-Markov model are identified by resorting to an unsupervised ensemble clustering approach. Then, the parameters of the discrete Weibull distributions describing the transitions among the states and their uncertainties are inferred by resorting to the Maximum Likelihood Estimation (MLE) method and to the Fisher Information Matrix (FIM), respectively. Finally, the inferred degradation model is used to estimate the RUL of fleet equipment by direct Monte Carlo (MC) simulation. The proposed approach is applied to two case studies regarding heterogeneous fleets of aluminium electrolytic capacitors and turbofan engines. Results show the effectiveness of the proposed approach in predicting the RUL and its superiority compared to a fuzzy similarity-based approach of literature. - Highlights: • The prediction of the remaining useful life for heterogeneous fleets is addressed. • A data-driven prognostics approach based on a Markov model is proposed. • The proposed approach is applied to two different heterogeneous fleets. • The results are compared with those obtained by a fuzzy similarity-based approach.
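The final phase above, direct Monte Carlo simulation of RUL through discrete-Weibull sojourn times, can be sketched as follows; the per-state (q, beta) parameters are illustrative placeholders, not values inferred in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def discrete_weibull_sample(q, beta, rng):
    """Type-I discrete Weibull sojourn time, P(T > t) = q ** (t ** beta),
    drawn by inverse-CDF sampling."""
    u = 1.0 - rng.random()  # u in (0, 1]
    return int(np.ceil((np.log(u) / np.log(q)) ** (1.0 / beta)))

def rul_monte_carlo(remaining_states, n_runs, rng):
    """Direct MC estimate of RUL: mean total sojourn time through the
    degradation states not yet traversed before failure."""
    totals = [sum(discrete_weibull_sample(q, b, rng)
                  for q, b in remaining_states)
              for _ in range(n_runs)]
    return float(np.mean(totals))

# Hypothetical three remaining states with placeholder (q, beta) pairs.
rul = rul_monte_carlo([(0.95, 1.2), (0.90, 1.0), (0.80, 1.5)], 5000, rng)
```

Repeating the simulation many times also yields the RUL distribution, from which prediction intervals can be read off directly.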
Sutton, Tracey; Hopkins, Thomas; Remsen, Andrew; Burghart, Scott
2001-01-01
Sampling was conducted on the west Florida continental shelf ecosystem modeling site to estimate zooplankton grazing impact on primary production. Samples were collected with the high-resolution sampler, a towed array bearing electronic and optical sensors operating in tandem with a paired net/bottle verification system. A close biological-physical coupling was observed, with three main plankton communities: 1. a high-density inshore community dominated by larvaceans coincident with a salinity gradient; 2. a low-density offshore community dominated by small calanoid copepods coincident with the warm mixed layer; and 3. a high-density offshore community dominated by small poecilostomatoid and cyclopoid copepods and ostracods coincident with cooler, sub-pycnocline oceanic water. Both high-density communities were associated with relatively turbid water. Applying available grazing rates from the literature to our abundance data, grazing pressure mirrored the above bio-physical pattern, with the offshore sub-pycnocline community contributing ˜65% of grazing pressure despite representing only 19% of the total volume of the transect. This suggests that grazing pressure is highly localized, emphasizing the importance of high-resolution sampling to better understand plankton dynamics. A comparison of our grazing rate estimates with primary production estimates suggests that mesozooplankton do not control the fate of phytoplankton over much of the area studied (<5% grazing of daily primary production), but "hot spots" (˜25-50% grazing) do occur which may have an effect on floral composition.
Estimation of the temperature spatial variability in confined spaces based on thermal imaging
Augustyn, Grzegorz; Jurasz, Jakub; Jurczyk, Krzysztof; Korbiel, Tomasz; Mikulik, Jerzy; Pawlik, Marcin; Rumin, Rafał
2017-11-01
In developed countries the salaries of office workers are several times higher than the total cost of maintaining and operating the building. Therefore even a small improvement in human work productivity and performance as a result of enhancing the quality of the work environment may lead to meaningful economic benefits. Air temperature is the most commonly used indicator in assessing indoor environment quality. Moreover, it is well known that thermal comfort has the biggest impact on employees' performance and their ability to work efficiently. In the majority of office buildings, indoor temperature is managed by heating, ventilation and air conditioning (HVAC) appliances. However, the way they are currently managed and controlled leads to a nonhomogeneous distribution of temperature in a given space. An approach to determining the spatial variability of temperature in confined spaces was introduced, based on thermal imaging temperature measurements. The conducted research and the obtained results enabled positive verification of the method and the creation of a surface plot illustrating the temperature variability.
Aquatic pathway variables affecting the estimation of dose commitment from uranium mill tailings
International Nuclear Information System (INIS)
Lush, D.L.; Snodgrass, W.J.; McKee, P.
1982-01-01
As one of a series of studies being carried out for the Atomic Energy Control Board of Canada, the environmental variables affecting population dose commitment and critical group dose rates from aquatic pathways were investigated. A model was developed to follow uranium and natural thorium decay series radionuclides through aquatic pathways leading both to long-term sediment sinks and to man. Pathways leading to man result in both a population dose commitment and a critical group dose rate. The key variables affecting population dose commitment are suspended particulate concentrations in the receiving aquatic systems, the settling velocities of these particulates and the solid-aqueous phase distribution coefficient associated with each radionuclide. Of secondary importance to population dose commitment are the rate at which radionuclides enter the receiving waters and the value of the water to food transfer coefficients that are used in the model. For the critical group dose rate, the rate at which the radionuclides leave the tailings, the water to food transfer coefficients, the rate of water and fish consumption and the dose conversion factors for 210Pb and 210Po are of secondary importance (author)
Hamdan; Saputra, H.; Mirwandhono, E.; Hasnudi; Sembiring, I.; Umar, S.; Ginting, N.; Alwiyah
2018-02-01
The purpose of this research was to examine the genetic distance between types of goats in North Sumatera and the variables that distinguish them. This research was conducted in PayaBakung, Hamparan Perak and Klambir Lima villages, Deli Serdang district; Batu Binumbun, Aritonang and HutaGinjang villages, Muara subdistrict, North Tapanuli district; and ParbabaDolok, Siopat Sosor and Sinabulan villages, Ronggur Nihuta Pangururan, and Sitonggi-tonggi village in the RonggurNihuta subdistrict, Samosir district, from July 2016. The data were analyzed using descriptive statistics, discriminant analysis, canonical analysis, Principal Component Analysis, genetic distance and phylogenetic trees. The results showed that the closest genetic distance was found between the Kacang and Samosir goats (1.973), and the farthest between the Samosir and Muara goats (8.671). The variables that best distinguished the goat breeds were horn base circumference (0.856) and horn length (0.878). We conclude that, to obtain superior crossbred offspring, the two goat types with the greatest genetic distance should be crossed, as their offspring have a better chance of heterosis.
Directory of Open Access Journals (Sweden)
Paula Furtună
2013-03-01
Climatic changes represent one of the major challenges of our century; they are forecast according to climate scenarios and models, which represent plausible and concrete images of future climatic conditions. The comparison of climate model results regarding future water resources and temperature trends can become a useful instrument for decision makers in making the most effective decisions at the economic, social and ecological levels. The aim of this article is the analysis of temperature and pluviometric variability at the grid point closest to Cluj-Napoca, based on data provided by six different regional climate models (RCMs). Analysed over 30-year periods (2001-2030, 2031-2060 and 2061-2090), the mean temperature has an ascending general trend, with great variability between periods. Precipitation, expressed through percentage deviation, shows a descending general trend, which is more emphasized during 2031-2060 and 2061-2090.
Graham, Wendy D.; Neff, Christina R.
1994-05-01
The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1 is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated, when the effects of random recharge are neglected.
Badra, Mohammad; Mehio-Sibai, Abla; Zeki Al-Hazzouri, Adina; Abou Naja, Hala; Baliki, Ghassan; Salamoun, Mariana; Afeiche, Nadim; Baddoura, Omar; Bulos, Suhayl; Haidar, Rachid; Lakkis, Suhayl; Musharrafieh, Ramzi; Nsouli, Afif; Taha, Assaad; Tayim, Ahmad; El-Hajj Fuleihan, Ghada
2009-01-01
Bone mineral density (BMD) and fracture incidence vary greatly worldwide. The data, if any, on clinical and densitometric characteristics of patients with hip fractures from the Middle East are scarce. The objective of the study was to define risk estimates from clinical and densitometric variables and the impact of database selection on such estimates. Clinical and densitometric information were obtained in 60 hip fracture patients and 90 controls. Hip fracture subjects were 74 yr (9.4) old, were significantly taller, lighter, and more likely to be taking anxiolytics and sleeping pills than controls. National Health and Nutrition Examination Survey (NHANES) database selection resulted in a higher sensitivity and almost equal specificity in identifying patients with a hip fracture compared with the Lebanese database. The odds ratio (OR) and its confidence interval (CI) for hip fracture per standard deviation (SD) decrease in total hip BMD was 2.1 (1.45-3.05) with the NHANES database, and 2.11 (1.36-2.37) when adjusted for age and body mass index (BMI). Risk estimates were higher in male compared with female subjects. In Lebanese subjects, BMD- and BMI-derived hip fracture risk estimates are comparable to western standards. The study validates the universal use of the NHANES database, and the applicability of BMD- and BMI-derived risk fracture estimates in the World Health Organization (WHO) global fracture risk model, to the Lebanese.
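The "odds ratio per SD decrease in BMD" reported above is conventionally obtained by exponentiating the logistic regression coefficient scaled by one standard deviation; a brief sketch with made-up numbers, not the study's fitted model.

```python
import math

# Hypothetical logistic-regression coefficient for total-hip BMD
# (log-odds of fracture per g/cm2) and a hypothetical population SD.
beta = -5.2       # placeholder coefficient
sd_bmd = 0.143    # placeholder SD of total-hip BMD (g/cm2)

# OR per one-SD *decrease* in BMD: flip the sign of the scaled coefficient.
or_per_sd_decrease = math.exp(-beta * sd_bmd)
```

With these placeholder values the result lands near 2.1, the same order as the NHANES-referenced estimate quoted in the abstract, which is why ORs per SD are a convenient cross-study metric.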
Using latent variable approach to estimate China's economy-wide energy rebound effect over 1954–2010
International Nuclear Information System (INIS)
Shao, Shuai; Huang, Tao; Yang, Lili
2014-01-01
The energy rebound effect has been a significant issue in China, which is undergoing economic transition, since it reflects the effectiveness of energy-saving policy relying on improved energy efficiency. Based on the IPAT equation and Brookes' explanation of the rebound effect, this paper develops an alternative estimation model of the rebound effect. By using the estimation model and latent variable approach, which is achieved through a time-varying coefficient state space model, we estimate China's economy-wide energy rebound effect over 1954–2010. The results show that the rebound effect evidently exists in China, with an annual average of 39.73% over 1954–2010. Before and after the implementation of China's reform and opening-up policy in 1978, the rebound effects are 47.24% and 37.32%, with a strong fluctuation and a circuitously downward trend, respectively, indicating that a stable political environment and the development of a market economy system facilitate the effectiveness of energy-saving policy. Although the energy-saving effect of improving energy efficiency has been partly realised, there remains a large energy-saving potential in China. - Highlights: • We present an improved estimation methodology of economy-wide energy rebound effect. • We use the latent variable approach to estimate China's economy-wide rebound effect. • The rebound exists in China and varies before and after reform and opening-up. • After 1978, the average rebound is 37.32% with a circuitously downward trend. • Traditional Solow remainder method underestimates the rebound in most cases
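In Brookes' sense, the rebound effect is the share of expected energy savings from an efficiency improvement that fails to materialize because usage grows in response; a toy calculation (figures illustrative, not the paper's estimates):

```python
# Illustrative figures only, not the paper's data.
expected_savings = 100.0  # energy saving expected from an efficiency gain
actual_savings = 60.0     # saving actually observed after usage responds

# Rebound = fraction of the expected saving that is "taken back".
rebound = (expected_savings - actual_savings) / expected_savings
# A value of 0.4 would mean a 40% rebound, comparable in magnitude to
# the paper's 39.73% annual average.
```

A rebound of 0 means all expected savings are realized; a value above 1 ("backfire") means consumption rises despite the efficiency gain.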
International Nuclear Information System (INIS)
Bogen, K.T.; Conrado, C.L.; Robison, W.L.
1997-01-01
Uncertainty and interindividual variability were assessed in estimated doses for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island would be treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Doses were estimated for ingested 137Cs and 90Sr, external gamma-exposure, and inhalation+ingestion of 241Am + 239+240Pu. Two dietary scenarios were considered: imported foods are available (IA); imported foods are unavailable with only local foods consumed (IUA). After ∼5 y of Bikini residence under either IA or IUA assumptions, upper and lower 95% confidence limits on interindividual variability in calculated dose were estimated to lie within a ∼threefold factor of its population-average value; upper and lower 95% confidence limits on uncertainty in calculated dose were estimated to lie within a ∼twofold factor of its expected value. For reference, the expected values of population-average dose at age 70 y were estimated to be 16 and 52 mSv under IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively. Under the most likely dietary scenario, involving access to imported foods, this analysis indicates that it is most likely that no additional cancer fatalities (above those normally expected) would arise from the increased radiation exposures considered. 33 refs., 4 figs., 4 tabs
Can accelerometry data improve estimates of heart rate variability from wrist pulse PPG sensors?
Kos, Maciej; Li, Xuan; Khaghani-Far, Iman; Gordon, Christine M.; Pavel, Misha; Jimison, Holly B.
2018-01-01
A key prerequisite for precision medicine is the ability to assess metrics of human behavior objectively, unobtrusively and continuously. This capability serves as a framework for the optimization of tailored, just-in-time precision health interventions. Mobile unobtrusive physiological sensors, an important prerequisite for realizing this vision, show promise in implementing this quality of physiological data collection. However, first we must trust the collected data. In this paper, we present a novel approach to improving heart rate estimates from wrist pulse photoplethysmography (PPG) sensors. We also discuss the impact of sensor movement on the veracity of collected heart rate data.
Ecological Variability and Carbon Stock Estimates of Mangrove Ecosystems in Northwestern Madagascar
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2014-01-01
Mangroves are found throughout the tropics, providing critical ecosystem goods and services to coastal communities and supporting rich biodiversity. Despite their value, worldwide, mangroves are being rapidly degraded and deforested. Madagascar contains approximately 2% of the world's mangroves, >20% of which has been deforested since 1990 through increased extraction for charcoal and timber and conversion to small- to large-scale agriculture and aquaculture. Loss is particularly prominent in the northwestern Ambaro and Ambanja bays. Here, we focus on Ambaro and Ambanja bays, presenting dynamics calculated using United States Geological Survey (USGS) national-level mangrove maps and the first localized satellite-imagery-derived map of dominant land-cover types. The analysis of USGS data indicated a loss of 7659 ha (23.7%) and a gain of 995 ha (3.1%) from 1990-2010. Contemporary mapping results were 93.4% accurate overall (Kappa 0.9), with producer's and user's accuracies ≥85%. Classification results allowed partitioning mangroves into ecologically meaningful, spectrally distinct strata, wherein field measurements facilitated estimating the first total carbon stocks for mangroves in Madagascar. Estimates suggest that higher-stature closed-canopy mangroves have average total vegetation carbon values of 146.8 Mg/ha (±10.2) and soil organic carbon of 446.2 Mg/ha (±36.9), supporting a growing body of studies showing that mangroves are amongst the most carbon-dense tropical forests.
Directory of Open Access Journals (Sweden)
Liu X
2012-10-01
Background: Chronic kidney disease (CKD) is recognized worldwide as a public health problem, and its prevalence increases as the population ages. However, the applicability of formulas for estimating the glomerular filtration rate (GFR) based on serum creatinine (SC) levels in elderly Chinese patients with CKD is limited. Materials and methods: Based on values obtained with the technetium-99m diethylenetriaminepentaacetic acid (99mTc-DTPA) renal dynamic imaging method, 319 elderly Chinese patients with CKD were enrolled in this study. Serum creatinine was determined by the enzymatic method. The GFR was estimated using the Cockcroft–Gault (CG) equation, the Modification of Diet in Renal Disease (MDRD) equations, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, the Jelliffe-1973 equation, and the Hull equation. Results: The median of difference ranged from −0.3 to 4.3 mL/min/1.73 m2. The interquartile range (IQR) of differences ranged from 13.9 to 17.6 mL/min/1.73 m2. Accuracy with a deviation less than 15% ranged from 27.6% to 32.9%. Accuracy with a deviation less than 30% ranged from 53.6% to 57.7%. Accuracy with a deviation less than 50% ranged from 74.9% to 81.5%. None of the equations had accuracy up to the 70% level with a deviation less than 30% from the standard glomerular filtration rate (sGFR). Bland–Altman analysis demonstrated that the mean difference ranged from −3.0 to 2.4 mL/min/1.73 m2. However, the
Fencl, Martin; Jörg, Rieckermann; Vojtěch, Bareš
2015-04-01
Commercial microwave links (MWL) are point-to-point radio systems which are used in backhaul networks of cellular operators. For several years, they have been suggested as rainfall sensors complementary to rain gauges and weather radars, because, first, they operate at frequencies where rain drops represent significant source of attenuation and, second, cellular networks almost completely cover urban and rural areas. Usually, path-average rain rates along a MWL are retrieved from the rain-induced attenuation of received MWL signals with a simple model based on a power law relationship. The model is often parameterized based on the characteristics of a particular MWL, such as frequency, polarization and the drop size distribution (DSD) along the MWL. As information on the DSD is usually not available in operational conditions, the model parameters are usually considered constant. Unfortunately, this introduces bias into rainfall estimates from MWL. In this investigation, we propose a generic method to eliminate this bias in MWL rainfall estimates. Specifically, we search for attenuation statistics which makes it possible to classify rain events into distinct groups for which same power-law parameters can be used. The theoretical attenuation used in the analysis is calculated from DSD data using T-Matrix method. We test the validity of our approach on observations from a dedicated field experiment in Dübendorf (CH) with a 1.85-km long commercial dual-polarized microwave link transmitting at a frequency of 38 GHz, an autonomous network of 5 optical distrometers and 3 rain gauges distributed along the path of the MWL. The data is recorded at a high temporal resolution of up to 30s. It is further tested on data from an experimental catchment in Prague (CZ), where 14 MWLs, operating at 26, 32 and 38 GHz frequencies, and reference rainfall from three RGs is recorded every minute. Our results suggest that, for our purpose, rain events can be nicely characterized based on
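The power-law retrieval described in this abstract can be sketched as follows; the coefficients a and b depend on frequency, polarization and the DSD, and the values used here are placeholders rather than calibrated ones.

```python
def rain_rate_from_attenuation(rain_attenuation_db, path_length_km, a, b):
    """Path-average rain rate (mm/h) from rain-induced attenuation on a
    microwave link: R = a * k**b, with k the specific attenuation (dB/km).
    Coefficients a and b are placeholders, not calibrated values."""
    k = rain_attenuation_db / path_length_km
    return a * k ** b

# e.g. a 1.85 km link observing 10 dB of rain-induced attenuation,
# with placeholder power-law coefficients
r = rain_rate_from_attenuation(10.0, 1.85, 4.0, 0.9)
```

The bias discussed in the abstract arises precisely because a and b are held constant while the true DSD, and hence the appropriate coefficients, varies from event to event.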
Directory of Open Access Journals (Sweden)
V. MADERICH
2015-07-01
A chain of simple linked models is used to simulate the seasonal and interannual variability of the Turkish Straits System. This chain includes two-layer hydraulic models of the Bosphorus and Dardanelles straits, simulating the exchange in terms of level and density difference along each strait, and a one-dimensional area-averaged layered model of the Marmara Sea. The chain of models is complemented by a similar layered model of the Black Sea proper and by a one-layer Azov Sea model with the Kerch Strait. This linked chain of models is used to study the seasonal and interannual variability of the system in the period 1970-2009. The salinity of the Black Sea water flowing into the Aegean Sea increases by approximately 1.7 times through entrainment from the lower layer. The flow entering the lower layer of the Dardanelles Strait from the Aegean Sea is reduced by nearly 80% by the time it reaches the Black Sea. On the seasonal scale, maximal transport in the upper layer and minimal transport in the bottom layer occur during winter/spring for the Bosphorus and in spring for the Dardanelles Strait, whereas minimal transport in the upper layer and maximal undercurrent occur during summer for the Bosphorus Strait and autumn for the Dardanelles Strait. The increase of freshwater flux into the Black Sea on interannual time scales (41 m3 s-1 per year) is accompanied by a more than twofold growth of the Dardanelles outflow to the North Aegean (102 m3 s-1 per year).
Estimating the Relative Sociolinguistic Salience of Segmental Variables in a Dialect Boundary Zone
Llamas, Carmen; Watt, Dominic; MacFarlane, Andrew E.
2016-01-01
One way of evaluating the salience of a linguistic feature is by assessing the extent to which listeners associate the feature with a social category such as a particular socioeconomic class, gender, or nationality. Such ‘top–down’ associations will inevitably differ somewhat from listener to listener, as a linguistic feature – the pronunciation of a vowel or consonant, for instance – can evoke multiple social category associations, depending upon the dialect in which the feature is embedded and the context in which it is heard. In a given speech community it is reasonable to expect, as a consequence of the salience of the linguistic form in question, a certain level of intersubjective agreement on social category associations. Two metrics we can use to quantify the salience of a linguistic feature are (a) the speed with which the association is made, and (b) the degree to which members of a speech community appear to share the association. Through the use of a new technique, designed as an adaptation of the Implicit Association Test, this paper examines levels of agreement among 40 informants from the Scottish/English border region with respect to the associations they make between four key phonetic variables and the social categories of ‘Scotland’ and ‘England.’ Our findings reveal that the participants exhibit differential agreement patterns across the set of phonetic variables, and that listeners’ responses vary in line with whether participants are members of the Scottish or the English listener groups. These results demonstrate the importance of community-level agreement with respect to the associations that listeners make between social categories and linguistic forms, and as a means of ranking the forms’ relative salience.
Manfron, Giacinto; Delmotte, Sylvestre; Busetto, Lorenzo; Hossard, Laure; Ranghetti, Luigi; Brivio, Pietro Alessandro; Boschetti, Mirco
2017-05-01
Crop simulation models are commonly used to forecast the performance of cropping systems under different hypotheses of change. Their use on a regional scale is generally constrained, however, by a lack of information on the spatial and temporal variability of environment-related input variables (e.g., soil) and agricultural practices (e.g., sowing dates) that influence crop yields. Satellite remote sensing data can shed light on such variability by providing timely information on crop dynamics and conditions over large areas. This paper proposes a method for analyzing time series of MODIS satellite data in order to estimate the inter-annual variability of winter wheat sowing dates. A rule-based method was developed to automatically identify a reliable sample of winter wheat field time series, and to infer the corresponding sowing dates. The method was designed for a case study in the Camargue region (France), where winter wheat is characterized by vernalization, as in other temperate regions. The detection criteria were chosen on the grounds of agronomic expertise and by analyzing high-confidence time-series vegetation index profiles for winter wheat. This automatic method identified the target crop on more than 56% (four-year average) of the cultivated areas, with low commission errors (11%). It also captured the seasonal variability in sowing dates with errors of ±8 and ±16 days in 46% and 66% of cases, respectively. Extending the analysis to the years 2002-2012 showed that sowing in the Camargue was usually done on or around November 1st (±4 days). Comparing inter-annual sowing date variability with the main local agro-climatic drivers showed that the type of preceding crop and the weather conditions during the summer season before the wheat sowing had a prominent role in influencing winter wheat sowing dates.
Internal Variability and Disequilibrium Confound Estimates of Climate Sensitivity From Observations
Marvel, Kate; Pincus, Robert; Schmidt, Gavin A.; Miller, Ron L.
2018-02-01
An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedback in the far future. Here we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2 radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long-term sensitivity.
Variability in estimated runoff in a forested area based on different cartographic data sources
Energy Technology Data Exchange (ETDEWEB)
Fragoso, L.; Quirós, E.; Durán-Barroso, P.
2017-11-01
Aim of study: The goal of this study is to analyse variations in curve number (CN) values produced by different cartographic data sources in a forested watershed, and to determine which of them best fits measured runoff volumes. Area of study: A forested watershed located in western Spain. Material and methods: Four digital cartographic data sources were used to determine the runoff CN in the watershed. Main results: None of the cartographic sources provided all the information necessary to properly determine the CN values. Our proposed methodology, focused on tree canopy cover, improves the results achieved. Research highlights: The CN value in forested areas should be estimated as a function of tree canopy cover, and new calibrated tables should be implemented at a local scale.
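The abstract above does not reproduce the underlying equations, but the CN values it compares feed the standard SCS curve number runoff relation. A minimal sketch (function name is illustrative; the conventional initial-abstraction assumption Ia = 0.2S is noted in comments):

```python
def scs_runoff_mm(p_mm: float, cn: float) -> float:
    """Direct runoff Q (mm) from event rainfall P (mm) via the SCS curve number method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm) for CN on the 0-100 scale
    ia = 0.2 * s               # initial abstraction, conventionally 0.2 * S
    if p_mm <= ia:
        return 0.0             # rainfall fully absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Higher CN values (less retentive land cover) yield more runoff for the same storm, which is why the choice of cartographic source for CN matters.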
International Nuclear Information System (INIS)
Hsi, C.-L.; Kuo, J.-T.
2008-01-01
Estimating the gross burning rate and heating value of solid residue burning in a power plant furnace is essential for control aimed at optimizing energy conversion and plant performance. A model based on conservation equations of mass and thermal energy is established in this work to calculate the instantaneous gross burning rate and lower heating value of solid residue fired in a combustion chamber. Comparing the model with incineration plant control room data indicates that satisfactory predictions of fuel burning rates and heating values can be obtained by assuming the moisture-to-carbon atomic ratio (f/a) to lie within the typical range from 1.2 to 1.8. Agreement between the mass and thermal analysis and the bed-chemistry model is acceptable. The model would be useful for furnace fuel and air control strategy programming to achieve optimum performance in energy conversion and pollutant emission reduction.
Estimation of spatial variability of lignite mine dumping ground soil properties using CPTu results
Directory of Open Access Journals (Sweden)
Bagińska Irena
2016-03-01
The paper deals with the application of CPTu test results for the probabilistic modeling of dumping grounds. The statistical measures use results from 42 CPT test points located in a lignite mine dumping ground in the region of Central Europe. Both the tip resistance qc and the local friction fs are tested. Based on the mean values and standard deviations of the measured quantities, specific zones in the dumping site profile are distinguished. For the three main zones, standard deviations of linearly de-trended functions and distributions of normalized de-trended values for qc and fs are examined. The vertical scales of fluctuation for both measured quantities are also estimated. The obtained results show that a lignite mine dumping site can be successfully described with Random Field Theory. Additional use of fs values introduces supplementary statistical information.
Directory of Open Access Journals (Sweden)
A. Venäläinen
2017-07-01
The bioeconomy has an increasing role to play in climate change mitigation and the sustainable development of national economies. In Finland, a forested country, over 50 % of the current bioeconomy relies on the sustainable management and utilization of forest resources. Wind storms are a major risk that forests are exposed to, and high-spatial-resolution analysis of the most vulnerable locations can support risk assessment in forest management planning. In this paper, we examine the feasibility of the wind multiplier approach for downscaling of maximum wind speed, using the 20 m spatial resolution CORINE land-use dataset and high-resolution digital elevation data. A coarse-spatial-resolution estimate of the 10-year return level of maximum wind speed was obtained from the ERA-Interim reanalyzed data. Using a geospatial re-mapping technique, the data were downscaled to 26 meteorological station locations representing very diverse environments. We find that the downscaled 10-year return levels represent 66 % of the observed variation among the stations examined. In addition, the spatial variation in wind-multiplier-downscaled 10-year return level wind was compared with the WAsP model-simulated wind. The heterogeneous test area was situated in northern Finland; the major features of the spatial variation were found to be similar, but in some locations there were relatively large differences. The results indicate that the wind multiplier method offers a pragmatic and computationally feasible tool for identifying, at high spatial resolution, those locations with the highest forest wind damage risks. It can also be used to provide the necessary wind climate information for wind damage risk model calculations, thus making it possible to estimate the probability of exceeding predicted threshold wind speeds for wind damage and, consequently, the probability (and amount) of wind damage for certain forest stand configurations.
Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán
2012-01-01
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and amenable to be implemented in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences among the AUC values obtained by both procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of 95% confidence interval. In this way, the new proposed method demonstrates to be as useful as WinNonlin® software when it was applicable. Copyright © 2011 John Wiley & Sons, Ltd.
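For readers unfamiliar with the quantity being estimated above, the baseline non-compartmental AUC computation is the linear trapezoidal rule. A generic sketch (not necessarily the exact estimator developed in the paper, nor the WinNonlin® implementation):

```python
def auc_linear_trapezoid(times, concs):
    """AUC from the first to the last sampling time by the linear trapezoidal rule.

    times: sampling times (ascending); concs: matching plasma concentrations.
    Each interval contributes the area of a trapezoid under the concentration curve.
    """
    return sum(
        (concs[i] + concs[i + 1]) * (times[i + 1] - times[i]) / 2.0
        for i in range(len(times) - 1)
    )
```

The paper's contribution concerns the standard error of such an AUC in sparse-sampling designs, not the point estimate itself.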
Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan
2017-07-24
Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.
International Nuclear Information System (INIS)
Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier
2016-01-01
Occupational accidents pose several negative consequences for employees, employers, the environment and people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leaves) events, and classical statistical approaches are ineffective in these cases because the available dataset is generally sparse and contains censored recordings. In this context, we propose a Bayesian population variability method for the estimation of the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is used to estimate the uncertainty over the expected number of accidents and the work time loss. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling better management of the labor force and prevention efforts. One application example is presented in order to validate the proposed approach; this case uses available data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • The results show the proposed model is not too sensitive to the prior estimates.
DEFF Research Database (Denmark)
Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L
2008-01-01
BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefor...
Dessler, Andrew E.; Mauritsen, Thorsten; Stevens, Bjorn
2018-04-01
Our climate is constrained by the balance between solar energy absorbed by the Earth and terrestrial energy radiated to space. This energy balance has been widely used to infer equilibrium climate sensitivity (ECS) from observations of 20th-century warming. Such estimates yield lower values than other methods, and these have been influential in pushing down the consensus ECS range in recent assessments. Here we test the method using a 100-member ensemble of the Max Planck Institute Earth System Model (MPI-ESM1.1) simulations of the period 1850-2005 with known forcing. We calculate ECS in each ensemble member using energy balance, yielding values ranging from 2.1 to 3.9 K. The spread in the ensemble is related to the central assumption in the energy budget framework: that global average surface temperature anomalies are indicative of anomalies in outgoing energy (either of terrestrial origin or reflected solar energy). We find that this assumption is not well supported over the historical temperature record in the model ensemble or more recent satellite observations. We find that framing energy balance in terms of 500 hPa tropical temperature better describes the planet's energy balance.
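The energy-budget inference examined in this record follows a standard relation, ECS = F_2xCO2 · ΔT / (ΔF − ΔN). A minimal sketch, with the doubled-CO2 forcing taken as an assumed constant of about 3.7 W m^-2 (symbols and the helper name are illustrative):

```python
def ecs_energy_budget(d_t: float, d_f: float, d_n: float, f_2xco2: float = 3.7) -> float:
    """Equilibrium climate sensitivity (K) inferred from an observed energy budget.

    d_t: surface warming over the period (K)
    d_f: change in effective radiative forcing (W m^-2)
    d_n: change in top-of-atmosphere energy imbalance (W m^-2)
    f_2xco2: forcing from doubling CO2, ~3.7 W m^-2 (assumed constant)
    """
    return f_2xco2 * d_t / (d_f - d_n)
```

The ensemble spread reported in the abstract arises because ΔT, ΔF, and ΔN realized over 1850-2005 differ across members even under identical forcing.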
Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.
2009-01-01
Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.
Moore, Julia L; Remais, Justin V
2014-03-01
Developmental models that account for the metabolic effect of temperature variability on poikilotherms, such as degree-day models, have been widely used to study organism emergence, range and development, particularly in agricultural and vector-borne disease contexts. Though these models are simple and easy to use, structural and parametric issues can influence their outputs, often substantially. Because the underlying assumptions and limitations of these models have rarely been considered, this paper reviews the structural, parametric, and experimental issues that arise when using degree-day models, including the implications of particular structural or parametric choices, as well as assumptions that underlie commonly used models. Linear and non-linear developmental functions are compared, as are common methods used to incorporate temperature thresholds and calculate daily degree-days. Substantial differences in predicted emergence time arose when using linear versus non-linear developmental functions to model the emergence time of a model organism. The optimal method for calculating degree-days depends upon where key temperature threshold parameters fall relative to the daily minimum and maximum temperatures, as well as the shape of the daily temperature curve. No method is shown to be universally superior, though one commonly used method, the daily average method, consistently provides accurate results. The sensitivity of model projections to these methodological issues highlights the need to make structural and parametric selections based on a careful consideration of the specific biological response of the organism under study, and the specific temperature conditions of the geographic regions of interest. When degree-day model limitations are considered and model assumptions met, the models can be a powerful tool for studying temperature-dependent development.
International Nuclear Information System (INIS)
Wijaya Murti Indriatama; Trikoesoemaningtyas; Syarifah Iis Aisyah; Soeranto Human
2016-01-01
Gamma irradiation techniques have a significant effect on the frequency and spectrum of macro-mutations, but the study of their effect on micro-mutations related to genetic variability in mutated populations is very limited. The aim of this research was to study the effect of gamma irradiation techniques on the genetic variability and heritability of wheat agronomic characters in the M2 generation. This research was conducted from July to November 2014 at the Cibadak experimental station, Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development, Ministry of Agriculture. Three introduced wheat breeding lines (F-44, Kiran-95 & WL-711) were treated by 3 gamma irradiation techniques (acute, fractionated and intermittent). The M1 generation of the combined treatments was planted, and spikes were harvested individually per plant. As the M2 generation, seeds of 75 M1 spikes were planted in the field with the one-row-one-spike method and evaluated for agronomic characters and their genetic components. The gamma irradiation techniques decreased the means but increased the ranges of agronomic trait values in the M2 populations. Fractionated irradiation induced a higher mean and wider range in spike length and number of spikelets per spike than the other irradiation techniques. Fractionated and intermittent irradiation resulted in greater variability of grain weight per plant than acute irradiation. The number of tillers, spike weight, grain weight per spike and grain weight per plant in the M2 populations resulting from the three gamma irradiation techniques showed high estimated heritability and broad-sense genetic variability coefficient values. The three gamma irradiation techniques increased the genetic variability of agronomic traits in the M2 populations, except for plant height. (author)
DEFF Research Database (Denmark)
Landschützer, P.; Gruber, N.; Bakker, D.C.E.
2013-01-01
The Atlantic Ocean is one of the most important sinks for atmospheric carbon dioxide (CO2), but this sink is known to vary substantially in time. Here we use surface ocean CO2 observations to estimate this sink and the temporal variability from 1998 to 2007 in the Atlantic Ocean. We benefit from (i) … poleward of 40° N, but many other parts of the North Atlantic increased more slowly, resulting in a barely changing Atlantic carbon sink north of the equator (–0.007 Pg C yr–1 decade–1). Surface ocean pCO2 was also increasing less than that of the atmosphere over most of the Atlantic south of the equator, leading to a substantial trend toward a stronger CO2 sink for the entire South Atlantic (–0.14 Pg C yr–1 decade–1). The Atlantic carbon sink varies relatively little on inter-annual time-scales (±0.04 Pg C yr–1; 1σ).
Bae, Kyung-Hoon; Lee, Jungjoon; Kim, Eun-Soo
2008-06-01
In this paper, a variable disparity estimation (VDE)-based intermediate view reconstruction (IVR) in dynamic flow allocation (DFA) over an Ethernet passive optical network (EPON)-based access network is proposed. In the proposed system, the stereoscopic images are estimated by a variable block-matching algorithm (VBMA), and they are transmitted to the receiver through DFA over EPON. This scheme improves a priority-based access network by converting it to a flow-based access network with a new access mechanism and scheduling algorithm, and then 16-view images are synthesized by the IVR using VDE. Experimental results indicate that the proposed system improves the peak signal-to-noise ratio (PSNR) by as much as 4.86 dB and reduces the processing time to 3.52 s. Additionally, the network service provider can guarantee upper limits on transmission delays per flow. The modeling and simulation results, including mathematical analyses, from this scheme are also provided.
McMillan, Hilary; Srinivasan, Ms
2015-04-01
Hydrologists recognise the importance of vertical drainage and deep flow paths in runoff generation, even in headwater catchments. Both soil and groundwater stores are highly variable over multiple scales, and the distribution of water has a strong control on flow rates and timing. In this study, we instrumented an upland headwater catchment in New Zealand to measure the temporal and spatial variation in unsaturated and saturated-zone responses. In NZ, upland catchments are the source of much of the water used in lowland agriculture, but the hydrology of such catchments and their role in water partitioning, storage and transport is poorly understood. The study area is the Langs Gully catchment in the North Branch of the Waipara River, Canterbury: this catchment was chosen to be representative of the foothills environment, with lightly managed dryland pasture and native Matagouri shrub vegetation cover. Over a period of 16 months we measured continuous soil moisture at 32 locations and near-surface water table (versus hillslope locations, and convergent versus divergent hillslopes. We found that temporal variability is strongly controlled by the climatic seasonal cycle, for both soil moisture and water table, and for both the mean and extremes of their distributions. Groundwater is a larger water storage component than soil moisture, and the difference increases with catchment wetness. The spatial standard deviation of both soil moisture and groundwater is larger in winter than in summer. It peaks during rainfall events due to partial saturation of the catchment, and also rises in spring as different locations dry out at different rates. The most important controls on spatial variability are aspect and distance from stream. South-facing and near-stream locations have higher water tables and more, larger soil moisture wetting events. Typical hydrological models do not explicitly account for aspect, but our results suggest that it is an important factor in hillslope
Carroll, R. J.
2012-01-24
With the advent of Internet-based 24-hour recall (24HR) instruments, it is now possible to envision their use in cohort studies investigating the relation between nutrition and disease. Understanding that all dietary assessment instruments are subject to measurement errors and correcting for them under the assumption that the 24HR is unbiased for usual intake, here the authors simultaneously address precision, power, and sample size under the following 3 conditions: 1) 1-12 24HRs; 2) a single calibrated food frequency questionnaire (FFQ); and 3) a combination of 24HR and FFQ data. Using data from the Eating at America's Table Study (1997-1998), the authors found that 4-6 administrations of the 24HR is optimal for most nutrients and food groups and that combined use of multiple 24HR and FFQ data sometimes provides data superior to use of either method alone, especially for foods that are not regularly consumed. For all food groups but the most rarely consumed, use of 2-4 recalls alone, with or without additional FFQ data, was superior to use of FFQ data alone. Thus, if self-administered automated 24HRs are to be used in cohort studies, 4-6 administrations of the 24HR should be considered along with administration of an FFQ.
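The benefit of repeating the 24HR can be illustrated with the classical measurement-error model assumed above (recalls unbiased for usual intake): averaging k recalls shrinks within-person variance by 1/k, so the regression-dilution (attenuation) factor of a diet-disease slope improves with k. A sketch under that assumed model:

```python
def attenuation_factor(var_between: float, var_within: float, k: int) -> float:
    """Regression-dilution factor when exposure is the mean of k unbiased recalls.

    var_between: between-person variance of usual intake
    var_within:  within-person (day-to-day plus instrument) variance of one recall
    A diet-disease regression slope is attenuated toward zero by this factor.
    """
    return var_between / (var_between + var_within / k)
```

With equal variance components, one recall attenuates slopes by half, while four recalls recover 80% of the true slope, consistent with the diminishing returns beyond 4-6 administrations noted above.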
Edmunson, J.; Gaskin, J. A.; Danilatos, G.; Doloboff, I. J.; Effinger, M. R.; Harvey, R. P.; Jerman, G. A.; Klein-Schoder, R.; Mackie, W.; Magera, B.;
2016-01-01
The Miniaturized Variable Pressure Scanning Electron Microscope (MVP-SEM) project, funded by the NASA Planetary Instrument Concepts for the Advancement of Solar System Observations (PICASSO) Research Opportunities in Space and Earth Science (ROSES), will build upon previous miniaturized SEM designs for lunar and International Space Station (ISS) applications and recent advancements in variable pressure SEMs to design and build an SEM to complete analyses of samples on the surface of Mars using the atmosphere as an imaging medium. By the end of the PICASSO work, a prototype of the primary proof-of-concept components (i.e., the electron gun, focusing optics and scanning system) will be assembled and preliminary testing in a Mars analog chamber at the Jet Propulsion Laboratory will be completed to partially fulfill Technology Readiness Level 5 requirements for those components. The team plans to have Secondary Electron Imaging (SEI), Backscattered Electron (BSE) detection, and Energy Dispersive Spectroscopy (EDS) capabilities through the MVP-SEM.
SPECIES-SPECIFIC FOREST VARIABLE ESTIMATION USING NON-PARAMETRIC MODELING OF MULTI-SPECTRAL PHOTOGRAMMETRIC POINT CLOUD DATA
Directory of Open Access Journals (Sweden)
J. Bohlin
2012-07-01
The recent development in software for automatic photogrammetric processing of multispectral aerial imagery, and the growing nation-wide availability of Digital Elevation Model (DEM) data, are about to revolutionize data capture for forest management planning in Scandinavia. Using only already available aerial imagery and ALS-assessed DEM data, raster estimates of the forest variables mean tree height, basal area, total stem volume, and species-specific stem volumes were produced and evaluated. The study was conducted at a coniferous hemi-boreal test site in southern Sweden (lat. 58° N, long. 13° E). Digital aerial images from the Zeiss/Intergraph Digital Mapping Camera system were used to produce 3D point-cloud data with spectral information. Metrics were calculated for 696 field plots (10 m radius) from point-cloud data and used in k-MSN to estimate forest variables. For these stands, the tree height ranged from 1.4 to 33.0 m (18.1 m mean), stem volume from 0 to 829 m3 ha-1 (249 m3 ha-1 mean) and basal area from 0 to 62.2 m2 ha-1 (26.1 m2 ha-1 mean), with a mean stand size of 2.8 ha. Estimates made using digital aerial images corresponding to the standard acquisition of the Swedish National Land Survey (Lantmäteriet) showed RMSEs (in percent of the surveyed stand mean) of 7.5% for tree height, 11.4% for basal area, 13.2% for total stem volume, 90.6% for pine stem volume, 26.4% for spruce stem volume, and 72.6% for deciduous stem volume. The results imply that photogrammetric matching of digital aerial images has significant potential for operational use in forestry.
Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.
2017-01-01
A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy – from easily obtainable remotely sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods, such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms, due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult-to-constrain global ET remote-sensing models.
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory was not robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least square error approximation algorithm. Model validation was carried out by comparing the estimated COP data with the reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well accepted weighted mean approach. Comparing with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. The results are superior to those reported in previous relevant studies, and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare center for the elderly. It has the potential to help prevent future falls in the elderly.
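The "well accepted weighted mean approach" used above as a comparator computes the COP as the force-weighted mean of fixed sensor coordinates. A minimal sketch (sensor layout and names are illustrative, not the authors' implementation):

```python
def cop_weighted_mean(forces, positions):
    """COP (x, y) as the force-weighted mean of fixed sensor positions.

    forces: per-sensor FSR readings (e.g., Newtons or calibrated units)
    positions: matching (x, y) coordinates of each sensor in the insole frame
    """
    total = sum(forces)
    if total == 0:
        raise ValueError("no load registered on any sensor")
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y
```

The paper's individual-specific nonlinear model replaces this fixed linear weighting with coefficients fitted per subject by least squares, which is what yields the lower trajectory errors reported.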
International Nuclear Information System (INIS)
Shakespeare, Thomas Philip; Dwyer, Mary; Mukherjee, Rahul; Yeghiaian-Alvandi, Roland; Gebski, Val
2002-01-01
Purpose: Estimating the risks of radiotherapy (RT) toxicity is important for informed consent; however, the consistency in estimates has not been studied. This study aimed to explore the variability and factors affecting risk estimates (REs). Methods and Materials: A survey was mailed to Australian radiation oncologists, who were asked to estimate risks of RT complications given 49 clinical scenarios. The REs were assessed for association with oncologist experience, subspecialization, and private practice. Results: The REs were extremely variable, with a 50-fold median variability. The least variability (sevenfold) was for estimates of late, small intestinal perforation/obstruction after a one-third volume received 50 Gy with concurrent 5-fluorouracil (RE range 5-35%). The variation between the smallest and largest REs in 17 scenarios was ≥100-fold. Years of experience were significantly associated with REs of soft/connective-tissue toxicity (p=0.01) but inversely associated with estimates of neurologic/central nervous system toxicity (p=0.08). Ninety-six percent of respondents believed REs were important to RT practice; only 24% rated evidence to support their estimates as good. Sixty-seven percent believed national/international groups should pursue the issue further. Conclusion: Enormous variability exists in REs for normal tissue complications due to RT that is influenced by the years of experience. Risk estimation is perceived as an important issue without a good evidence base. Additional studies are strongly recommended.
Kim, Young-Min; Park, Jae-Won; Cheong, Hae-Kwan
2012-09-01
Climate change may affect Plasmodium vivax malaria transmission in a wide region including both subtropical and temperate areas. We aimed to estimate the effects of climatic variables on the transmission of P. vivax in temperate regions. We estimated the effects of climatic factors on P. vivax malaria transmission using data on weekly numbers of malaria cases for the years 2001-2009 in the Republic of Korea. Generalized linear Poisson models and distributed lag nonlinear models (DLNM) were adopted to estimate the effects of temperature, relative humidity, temperature fluctuation, duration of sunshine, and rainfall on malaria transmission while adjusting for seasonal variation, between-year variation, and other climatic factors. A 1°C increase in temperature was associated with a 17.7% [95% confidence interval (CI): 16.9, 18.6%] increase in malaria incidence after a 3-week lag, a 10% rise in relative humidity was associated with a 40.7% (95% CI: -44.3, -36.9%) decrease in malaria after a 7-week lag, a 1°C increase in the diurnal temperature range was associated with a 24.1% (95% CI: -26.7, -21.4%) decrease in malaria after a 7-week lag, and a 10-hr increase in sunshine per week was associated with a 5.1% (95% CI: -8.4, -1.7%) decrease in malaria after a 2-week lag. The cumulative relative risk for a 10-mm increase in rainfall (≤ 350 mm) on P. vivax malaria was 3.61 (95% CI: 1.69, 7.72) based on a DLNM with a 10-week maximum lag. Our findings suggest that malaria transmission in temperate areas is highly dependent on climate factors. In addition, lagged estimates of the effect of rainfall on malaria are consistent with the time necessary for mosquito development and P. vivax incubation.
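The percent changes quoted above are the usual transformation of log-linear (Poisson) regression coefficients into rate ratios. A minimal sketch of that conversion follows; the coefficient value is hypothetical, chosen only to match the reported 17.7% figure.

```python
import math

def percent_change(beta, delta=1.0):
    """Percent change in expected case counts for a `delta`-unit increase
    in a covariate whose log-linear (Poisson) coefficient is `beta`:
    100 * (exp(beta * delta) - 1)."""
    return (math.exp(beta * delta) - 1.0) * 100.0

# Hypothetical log rate ratio per 1 deg C at a 3-week lag.
beta_temp = math.log(1.177)
print(round(percent_change(beta_temp), 1))  # 17.7
```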
DEFF Research Database (Denmark)
Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O
1991-01-01
The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate … the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well-differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean … nuclear profile area, than those of anaplastic astrocytomas (n = 13) (2p = 3.1 × 10⁻³ and 2p = 4.8 × 10⁻³, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index …
Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C
2018-04-01
Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures are presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. Using the SmartTarget fusion system, an MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
Herman, Jay R.
2010-12-01
Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(−RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30 year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability for plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
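Under the power law above, the fractional change in weighted irradiance depends only on the ozone ratio and the RAF, since the 200 DU normalization cancels. A small sketch follows; the RAF value of 1.1 is an assumed, commonly quoted erythemal magnitude, not a value taken from this paper.

```python
def irradiance_ratio(omega_new, omega_ref, raf):
    """Ratio of action-spectrum-weighted irradiances under the power law
    U = U0 * (Omega / 200)**(-RAF); the normalization constant cancels."""
    return (omega_new / omega_ref) ** (-raf)

# A 5% ozone decrease (300 -> 285 DU) with an assumed erythemal RAF of 1.1.
print(round(irradiance_ratio(285.0, 300.0, 1.1), 3))  # 1.058
```

The negative exponent encodes the inverse relationship: less ozone means more action-spectrum-weighted UV at the surface.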
Directory of Open Access Journals (Sweden)
Bolívar Erazo
2018-02-01
A dense rain-gauge network within continental Ecuador was used to evaluate the quality of various products of rainfall data over the Pacific slope and coast of Ecuador (EPSC). A cokriging interpolation method is applied to the rain-gauge data, yielding a gridded product at 5-km resolution covering the period 1965–2015. This product is compared with the Global Precipitation Climatology Centre (GPCC) dataset, the Climatic Research Unit–University of East Anglia (CRU) dataset, the Tropical Rainfall Measuring Mission (TRMM/TMPA) 3B43 Version 7 dataset and the ERA-Interim Reanalysis. The analysis reveals that TRMM data show the most realistic features. The relative bias index (Rbias) indicates that TRMM data are closer to the observations, mainly over lowlands (mean Rbias of 7%) but have more limitations in reproducing the rainfall variability over the Andes (mean Rbias of −28%). The average RMSE and Rbias of 68.7 and −2.8% of TRMM are comparable with the GPCC (69.8 and 5.7%) and CRU (102.3 and −2.3%) products. This study also focuses on the rainfall inter-annual variability over the study region, which experiences floods that have caused high economic losses during extreme El Niño events. Finally, our analysis evaluates the ability of TRMM data to reproduce rainfall events during El Niño years over the study area and the large basins of the Esmeraldas and Guayas rivers. The results show that TRMM estimates report reasonable levels of heavy rainfall detection (for the extreme 1998 El Niño event) over the EPSC, and specifically towards the center-south of the EPSC (Guayas basin), but present underestimations for the moderate El Niño of 2002–2003 and the weak 2009–2010 event. Generally, the rainfall seasonal features, quantity and long-term climatology patterns are relatively well estimated by TRMM.
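The Rbias and RMSE scores used in the comparison above can be computed as below. Defining Rbias as the estimated-minus-observed total expressed as a percentage of the observed total is an assumption based on common usage; the rainfall values are hypothetical.

```python
def rbias(estimates, observations):
    """Relative bias (%): estimated minus observed total, over observed total."""
    obs_total = sum(observations)
    return 100.0 * (sum(estimates) - obs_total) / obs_total

def rmse(estimates, observations):
    """Root mean square error of the estimates against observations."""
    n = len(estimates)
    return (sum((e - o) ** 2 for e, o in zip(estimates, observations)) / n) ** 0.5

est = [100.0, 210.0, 95.0]  # hypothetical monthly rainfall estimates (mm)
obs = [110.0, 200.0, 90.0]  # hypothetical gauge observations (mm)
print(round(rbias(est, obs), 2), round(rmse(est, obs), 2))  # 1.25 8.66
```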
Crown, William H
2014-02-01
This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
International Nuclear Information System (INIS)
Weill, Jacky; Fabre, Rene.
1981-01-01
This article summarizes the research and development effort currently being carried out in the following five fields of application: health physics and radioprospection; control of nuclear reactors; plant control (preparation and reprocessing of the fuel, testing of nuclear substances, etc.); research laboratory instrumentation; and detectors. It also situates French industrial activity through an estimate of the French market, domestic production, and trade flows with other countries.
Energy Technology Data Exchange (ETDEWEB)
Guerra Hernandez, J.; Gonzalez-Ferreiro, E.; Sarmento, A.; Silva, J.; Nunes, A.; Correia, A.C.; Fontes, L.; Tomé, M.; Diaz-Varela, D.
2016-07-01
Aim of the study: The study aims to analyse the potential use of low‑cost unmanned aerial vehicle (UAV) imagery for the estimation of Pinus pinea L. variables at the individual tree level (position, tree height and crown diameter). Area of study: This study was conducted under the PINEA project focused on 16 ha of umbrella pine afforestation (Portugal) subjected to different treatments. Material and methods: The workflow involved: a) image acquisition with consumer‑grade cameras on board an UAV; b) orthomosaic and digital surface model (DSM) generation using structure-from-motion (SfM) image reconstruction; and c) automatic individual tree segmentation by using a mixed pixel‑ and region‑based algorithm. Main results: The results of individual tree segmentation (position, height and crown diameter) were validated using field measurements from 3 inventory plots in the study area. All the trees of the plots were correctly detected. The RMSE values for the predicted heights and crown widths were 0.45 m and 0.63 m, respectively. Research highlights: The results demonstrate that tree variables can be automatically extracted from high resolution imagery. We highlight the use of UAV systems as a fast, reliable and cost‑effective technique for small scale applications. (Author)
Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R
2009-12-01
To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software, by comparison with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then by computation using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (Kappa) between the two methods in the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of the FAZ size using the magic wand and magnetic lasso tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent, and was almost as good in the quantification of HE on color and on red-free images. Considering the agreement of this new technique with clinical evaluation for the measurement of variables in fundus images using Photoshop software, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
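The coefficient of agreement (Kappa) reported above is Cohen's kappa for two raters, i.e. observed agreement corrected for chance agreement. A minimal sketch with hypothetical gradings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    grading the same items."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical gradings of four images: clinician vs. software method.
clinical = ["mild", "mild", "severe", "severe"]
software = ["mild", "severe", "severe", "severe"]
print(cohens_kappa(clinical, software))  # 0.5
```

Raw percent agreement (85%, 79%, ...) and kappa (0.69, 0.59, ...) differ precisely because kappa subtracts the agreement expected by chance.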
International Nuclear Information System (INIS)
Guerra Hernandez, J.; Gonzalez-Ferreiro, E.; Sarmento, A.; Silva, J.; Nunes, A.; Correia, A.C.; Fontes, L.; Tomé, M.; Diaz-Varela, D.
2016-01-01
Aim of the study: The study aims to analyse the potential use of low‑cost unmanned aerial vehicle (UAV) imagery for the estimation of Pinus pinea L. variables at the individual tree level (position, tree height and crown diameter). Area of study: This study was conducted under the PINEA project focused on 16 ha of umbrella pine afforestation (Portugal) subjected to different treatments. Material and methods: The workflow involved: a) image acquisition with consumer‑grade cameras on board an UAV; b) orthomosaic and digital surface model (DSM) generation using structure-from-motion (SfM) image reconstruction; and c) automatic individual tree segmentation by using a mixed pixel‑ and region‑based based algorithm. Main results: The results of individual tree segmentation (position, height and crown diameter) were validated using field measurements from 3 inventory plots in the study area. All the trees of the plots were correctly detected. The RMSE values for the predicted heights and crown widths were 0.45 m and 0.63 m, respectively. Research highlights: The results demonstrate that tree variables can be automatically extracted from high resolution imagery. We highlight the use of UAV systems as a fast, reliable and cost‑effective technique for small scale applications. (Author)
Dwinovantyo, Angga; Manik, Henry M.; Prartono, Tri; Susilohadi; Ilahude, Delyuzar
2017-01-01
Measurement of suspended sediment concentration (SSC) is one of the parameters needed to determine the characteristics of sediment transport. However, SSC is still commonly measured with conventional techniques that have limitations, especially in temporal resolution. With advanced technology, the measurement can use hydroacoustic instruments such as an Acoustic Doppler Current Profiler (ADCP), which measures the intensity of backscatter from sediment particles as an echo intensity. The frequency of the ADCP used in this study was 400 kHz. The samples were measured and collected from the Lembeh Strait, North Sulawesi. The highest concentration of suspended sediment was 98.89 mg L-1 and the lowest was 45.20 mg L-1. Time series data showed that the tidal condition affected the SSC. We also corrected the echo intensity data for sound signal losses, such as spherical spreading and sound absorption, to obtain more accurate results. Simple linear regression of the echo intensity measured by the ADCP against directly measured SSC was performed to obtain an estimate of the SSC. SSC estimated from ADCP measurements did not differ significantly from SSC obtained by laboratory analysis, based on a t-test at the 95% confidence level.
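The signal-loss correction named above (spherical spreading and absorption) is conventionally written in sonar-equation form, followed by a linear calibration against sampled SSC. The paper's exact correction is not given here, so treat this as a generic sketch with hypothetical values.

```python
import math

def corrected_echo(echo_db, range_m, alpha_db_per_m):
    """Echo intensity compensated for spherical spreading (20 log10 R)
    and two-way sound absorption (2 * alpha * R), in dB."""
    return echo_db + 20.0 * math.log10(range_m) + 2.0 * alpha_db_per_m * range_m

def linear_fit(x, y):
    """Ordinary least-squares intercept and slope for y ~ a + b*x,
    usable as an SSC-vs-corrected-echo calibration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical: 70 dB measured at 10 m range, absorption 0.1 dB/m.
print(corrected_echo(70.0, 10.0, 0.1))  # 92.0
```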
Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T
2013-07-02
Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, cross-validated R² values were 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
Li, Lianfa; Wu, Anna H.; Cheng, Iona; Chen, Jiu-Chiuan; Wu, Jun
2017-10-01
Monitoring of fine particulate matter with diameter < 2.5 μm (PM2.5) is limited in its historical and spatial coverage, which hampers studies of long-term health outcomes such as cancer. In this study, we aimed to design a flexible approach to reliably estimate historical PM2.5 concentrations by incorporating spatial effects and the measurements of existing co-pollutants, such as particulate matter with diameter < 10 μm (PM10), in an additive non-linear model. The spatiotemporal model was evaluated using leave-one-site-month-out cross validation. Our final daily model had an R² of 0.81, with PM10, meteorological variables, and spatial autocorrelation explaining 55%, 10%, and 10% of the variance in PM2.5 concentrations, respectively. The model had a cross-validation R² of 0.83 for monthly PM2.5 concentrations (N = 8170) and 0.79 for daily PM2.5 concentrations (N = 51,421), with few extreme values in prediction. Further, the incorporation of spatial effects reduced bias in predictions. Our approach achieved a cross-validation R² of 0.61 for the daily model when PM10 was replaced by total suspended particulate. Our model can robustly estimate historical PM2.5 concentrations in California when PM2.5 measurements are not available.
Directory of Open Access Journals (Sweden)
Hukharnsusatrue, A.
2005-11-01
The objective of this research is to compare methods of estimating multiple regression coefficients in the presence of multicollinearity among independent variables. The estimation methods are the Ordinary Least Squares method (OLS), the Restricted Least Squares method (RLS), the Restricted Ridge Regression method (RRR) and the Restricted Liu method (RL), when the restrictions are true and when they are not. The study used the Monte Carlo simulation method, with the experiment repeated 1,000 times under each situation. The results are as follows. CASE 1: The restrictions are true. In all cases, the RRR and RL methods have a smaller Average Mean Square Error (AMSE) than the OLS and RLS methods, respectively. The RRR method provides the smallest AMSE when the level of correlation is high, and also provides the smallest AMSE for all levels of correlation and all sample sizes when the standard deviation is equal to 5. However, the RL method provides the smallest AMSE when the level of correlation is low or medium, except when the standard deviation equals 3 with small sample sizes, where the RRR method provides the smallest AMSE. The AMSE varies, from most to least, with the level of correlation, the standard deviation and the number of independent variables, but inversely with sample size. CASE 2: The restrictions are not true. In all cases, the RRR method provides the smallest AMSE, except when the standard deviation equals 1 and the error of the restrictions equals 5%: there, the OLS method provides the smallest AMSE when the level of correlation is low or medium and the sample size is large, but with small sample sizes the RL method provides the smallest AMSE. In addition, when the error of the restrictions is increased, the OLS method provides the smallest AMSE for all levels of correlation and all sample sizes, except when the level of correlation is high and the sample size is small. Moreover, in the cases where the OLS method provides the smallest AMSE, the RLS method mostly has a smaller AMSE than …
Kronholm, Scott C.; Capel, Paul D.
2016-01-01
Mixing models are a commonly used method for hydrograph separation, but can be hindered by the subjective choice of the end-member tracer concentrations. This work tests a new variant of mixing model that uses high-frequency measures of two tracers and streamflow to separate total streamflow into water from slowflow and fastflow sources. The ratio between the concentrations of the two tracers is used to create a time-variable estimate of the concentration of each tracer in the fastflow end-member. Multiple synthetic data sets, and data from two hydrologically diverse streams, are used to test the performance and limitations of the new model (two-tracer ratio-based mixing model: TRaMM). When applied to the synthetic streams under many different scenarios, the TRaMM produces results that were reasonable approximations of the actual values of fastflow discharge (±0.1% of maximum fastflow) and fastflow tracer concentrations (±9.5% and ±16% of maximum fastflow nitrate concentration and specific conductance, respectively). With real stream data, the TRaMM produces high-frequency estimates of slowflow and fastflow discharge that align with expectations for each stream based on their respective hydrologic settings. The use of two tracers with the TRaMM provides an innovative and objective approach for estimating high-frequency fastflow concentrations and contributions of fastflow water to the stream. This provides useful information for tracking chemical movement to streams and allows for better selection and implementation of water quality management strategies.
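A classic two-component mixing model underlies TRaMM; the single-tracer version separates the hydrograph as sketched below. The end-member concentrations here are hypothetical, and the paper's time-variable, two-tracer-ratio extension is not reproduced.

```python
def fastflow_fraction(c_stream, c_slow, c_fast):
    """Fraction of streamflow from the fastflow end-member, from a
    conservative tracer via two-component mass balance:
    f = (C_stream - C_slow) / (C_fast - C_slow)."""
    if c_fast == c_slow:
        raise ValueError("end-member concentrations must differ")
    frac = (c_stream - c_slow) / (c_fast - c_slow)
    return min(1.0, max(0.0, frac))  # clamp to the physical range [0, 1]

# Hypothetical specific-conductance end-members (uS/cm).
q_total = 12.0  # total streamflow, m3/s
frac = fastflow_fraction(c_stream=250.0, c_slow=400.0, c_fast=100.0)
print(frac * q_total)  # 6.0 m3/s attributed to fastflow
```

The subjectivity the paper addresses lives in choosing `c_slow` and `c_fast`; TRaMM replaces the fixed fastflow end-member with a time-variable estimate derived from the ratio of two tracers.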
Constantin, Nechita; Ionel, Popa; Francisca, Chiriloaei
2017-04-01
Climate conditions are changing continuously, and trees can provide information on the magnitude of current changes compared with the past. Using dendrochronological methods, we analyzed a network of 17 chronologies belonging to the genus Quercus to highlight the role of the macro-climate induced by the major landforms in imprinting a specific pattern of growth response to climate. The transect is located in northern Romania, following a straight line of about 400 km crossing the Carpathian Arch. The aim of this study is to delineate areas where trees respond homogeneously to climatic factors. This is important for building long dendrochronological series, given their limited spatial applicability; in the oak-covered study area, the number of long series available for climate reconstruction is small. The material consists of dendrochronological series sampled according to standards accepted in the scientific literature. The statistical methods comprise principal component analysis (PCA) to highlight spatial segregation based on PC1 scores, and hierarchical cluster analysis (HCA) to group the series with common features on the basis of similarities/dissimilarities. Euclidean distances between the chronologies were calculated, and sampled areas were grouped according to Ward's minimum-variance method. In addition, we performed a redundancy analysis (RDA), in which the ordination axes are linear combinations of the supplied environmental variables. Correlations with climate factors were assessed using bootstrap correlation, and a pointer-year analysis (selection criterion: PC1 scores < -0.5) was also performed. The results were related to the postglacial recolonization routes obtained by analyzing chloroplast DNA.
Directory of Open Access Journals (Sweden)
Qihao Weng
2013-03-01
The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring on average once every five years) and 81 mm (once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
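The SCS-CN runoff equation itself is standard and can be sketched directly (metric form, with the usual initial abstraction Ia = 0.2S). The CN value below is a hypothetical choice, not one derived from the paper's V-I-S method.

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) for rainfall p_mm via the SCS-CN method:
    S = 25400/CN - 254 (mm), Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S),
    with Q = 0 when rainfall does not exceed the initial abstraction."""
    s = 25400.0 / cn - 254.0  # potential maximum retention, mm
    ia = 0.2 * s              # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# The paper's 5-year scenario rainfall (57 mm) with an assumed CN of 85.
print(round(scs_cn_runoff(57.0, 85.0), 1))  # 24.8
```

Higher composite CN (more impervious surface) raises Q for the same storm, which is the mechanism the paper's urban analysis exploits.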
International Nuclear Information System (INIS)
Baur, Albert H.; Lauf, Steffen; Förster, Michael; Kleinschmit, Birgit
2015-01-01
Substantive and concerted action is needed to mitigate climate change. However, international negotiations struggle to adopt ambitious legislation and to anticipate more climate-friendly developments. Thus, stronger actions are needed from other players. Cities, being greenhouse gas emission centers, play a key role in promoting the climate change mitigation movement by becoming hubs for smart and low-carbon lifestyles. In this context, a stronger linkage between greenhouse gas emissions and urban development and policy-making seems promising. Therefore, simple approaches are needed to objectively identify crucial emission drivers for deriving appropriate emission reduction strategies. In analyzing 44 European cities, the authors investigate possible socioeconomic and spatial determinants of urban greenhouse gas emissions. Multiple statistical analyses reveal that the average household size and the edge density of discontinuous dense urban fabric explain up to 86% of the total variance of greenhouse gas emissions of EU cities (when controlled for varying electricity carbon intensities). Finally, based on these findings, a multiple regression model is presented to determine greenhouse gas emissions. It is independently evaluated with ten further EU cities. The reliance on only two indicators shows that the model can be easily applied in addressing important greenhouse gas emission sources of European urbanites, when varying power generations are considered. This knowledge can help cities develop adequate climate change mitigation strategies and promote respective policies on the EU or the regional level. The results can further be used to derive first estimates of urban greenhouse gas emissions, if no other analyses are available. - Highlights: • Two variables determine urban GHG emissions in Europe, assuming equal power generation. • Household size, inner-urban compactness and power generation drive urban GHG emissions. • Climate policies should consider
Energy Technology Data Exchange (ETDEWEB)
Baur, Albert H., E-mail: Albert.H.Baur@campus.tu-berlin.de; Lauf, Steffen; Förster, Michael; Kleinschmit, Birgit
2015-07-01
Substantive and concerted action is needed to mitigate climate change. However, international negotiations struggle to adopt ambitious legislation and to anticipate more climate-friendly developments. Thus, stronger actions are needed from other players. Cities, being greenhouse gas emission centers, play a key role in promoting the climate change mitigation movement by becoming hubs for smart and low-carbon lifestyles. In this context, a stronger linkage between greenhouse gas emissions and urban development and policy-making seems promising. Therefore, simple approaches are needed to objectively identify crucial emission drivers for deriving appropriate emission reduction strategies. In analyzing 44 European cities, the authors investigate possible socioeconomic and spatial determinants of urban greenhouse gas emissions. Multiple statistical analyses reveal that the average household size and the edge density of discontinuous dense urban fabric explain up to 86% of the total variance of greenhouse gas emissions of EU cities (when controlled for varying electricity carbon intensities). Finally, based on these findings, a multiple regression model is presented to determine greenhouse gas emissions. It is independently evaluated with ten further EU cities. The reliance on only two indicators shows that the model can be easily applied in addressing important greenhouse gas emission sources of European urbanites, when varying power generations are considered. This knowledge can help cities develop adequate climate change mitigation strategies and promote respective policies on the EU or the regional level. The results can further be used to derive first estimates of urban greenhouse gas emissions, if no other analyses are available. - Highlights: • Two variables determine urban GHG emissions in Europe, assuming equal power generation. • Household size, inner-urban compactness and power generation drive urban GHG emissions. • Climate policies should consider
Zhang, Zhao; Song, Xiao; Chen, Yi; Wang, Pin; Wei, Xing; Tao, Fulu
2015-05-01
Although many studies have indicated the consistent impact of warming on natural ecosystems (e.g., an early flowering and prolonged growing period), our knowledge of the impacts on agricultural systems is still poor. In this study, spatiotemporal variability of the heading-flowering stages of single rice was detected and compared at three different scales using field-based methods (FBMs) and satellite-based methods (SBMs). The heading-flowering stages from 2000 to 2009, with a spatial resolution of 1 km, were extracted from the SPOT/VGT NDVI time series data using the Savitzky-Golay filtering method in the areas of China dominated by single rice: Northeast China (NE), the middle-lower Yangtze River Valley (YZ), the Sichuan Basin (SC), and the Yunnan-Guizhou Plateau (YG). We found that approximately 52.6% and 76.3% of the heading-flowering stages estimated by the SBM were within ±5 and ±10 days of estimation error (a root mean square error (RMSE) of 8.76 days) when compared with those determined by the FBM. Both the FBM data and the SBM data indicated a similar spatial pattern, with the earliest annual average heading-flowering stages in SC, followed by YG, NE, and YZ, which was inconsistent with the patterns reported in natural ecosystems. Moreover, diverse temporal trends were detected in the four regions due to different climate conditions and agronomic factors such as cultivar shifts. Nevertheless, there were no significant differences (p > 0.05) between the FBM and the SBM in either the regional average value of the phenological stages or the trends, implying the consistency and rationality of the SBM at three scales.
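Savitzky-Golay filtering, used above to smooth the NDVI series before extracting phenology, fits a local polynomial in a sliding window. A minimal fixed-window sketch follows, using the classic 5-point quadratic coefficients (-3, 12, 17, 12, -3)/35 and leaving the endpoints unsmoothed for simplicity; the paper's actual window and order are not stated here.

```python
def savgol5(y):
    """Savitzky-Golay smoothing: 5-point window, quadratic polynomial.
    Interior points are replaced by the convolution with the classic
    coefficients; the two points at each end are left unchanged."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(cj * y[i + j - 2] for j, cj in enumerate(c)) / 35.0
    return out

# A quadratic series is reproduced exactly, as expected for a quadratic fit.
ndvi = [float(i * i) for i in range(7)]  # stand-in for an NDVI series
print(savgol5(ndvi) == ndvi)  # True
```

Unlike a plain moving average, the quadratic fit preserves the shape of peaks, which matters when the heading-flowering date is read off the smoothed NDVI maximum.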
Voicescu, Sonia A; Michaud, David S; Feder, Katya; Marro, Leonora; Than, John; Guay, Mireille; Denning, Allison; Bower, Tara; van den Berg, Frits; Broner, Norm; Lavigne, Eric
2016-03-01
The Community Noise and Health Study conducted by Health Canada included randomly selected participants aged 18-79 yrs (606 males, 632 females, response rate 78.9%), living between 0.25 and 11.22 km from operational wind turbines. Annoyance to wind turbine noise (WTN) and other features, including shadow flicker (SF) was assessed. The current analysis reports on the degree to which estimating high annoyance to wind turbine shadow flicker (HAWTSF) was improved when variables known to be related to WTN exposure were also considered. As SF exposure increased [calculated as maximum minutes per day (SFm)], HAWTSF increased from 3.8% at 0 ≤ SFm wind turbine-related features, concern for physical safety, and noise sensitivity. Reported dizziness was also retained in the final model at p = 0.0581. Study findings add to the growing science base in this area and may be helpful in identifying factors associated with community reactions to SF exposure from wind turbines.
Liu, Yang; Paciorek, Christopher J; Koutrakis, Petros
2009-06-01
Studies of chronic health effects due to exposures to particulate matter with aerodynamic diameters below 2.5 μm (PM(2.5)) require accurate exposure estimates over large areas; satellite aerosol optical depth (AOD) retrievals can be combined with meteorologic information to estimate ground-level PM(2.5) concentrations. We developed a two-stage generalized additive model (GAM) for U.S. Environmental Protection Agency PM(2.5) concentrations in a domain centered in Massachusetts. The AOD model represents conditions when AOD retrieval is successful; the non-AOD model represents conditions when AOD is missing in the domain. The AOD model has a higher predicting power judged by adjusted R(2) (0.79) than does the non-AOD model (0.48). The predicted PM(2.5) concentrations by the AOD model are, on average, 0.8-0.9 microg/m(3) higher than the non-AOD model predictions, with a more smooth spatial distribution, higher concentrations in rural areas, and the highest concentrations in areas other than major urban centers. Although AOD is a highly significant predictor of PM(2.5), meteorologic parameters are major contributors to the better performance of the AOD model. GOES aerosol/smoke product (GASP) AOD is able to summarize a set of weather and land use conditions that stratify PM(2.5) concentrations into two different spatial patterns. Even if land use regression models do not include AOD as a predictor variable, two separate models should be fitted to account for different PM(2.5) spatial patterns related to AOD availability.
Cole, Devon B.; Zhang, Shuang; Planavsky, Noah J.
2017-10-01
The enrichment and depletion of redox sensitive trace metals in marine sediments have been used extensively as paleoredox proxies. The trace metals in shale are comprised of both detrital (transported or particulate) and authigenic (precipitated, redox-driven) constituents, potentially complicating the use of this suite of proxies. Untangling the influence of these components is vital for the interpretation of enrichments, depletions, and isotopic signals of iron (Fe), chromium (Cr), uranium (U), and vanadium (V) observed in the rock record. Traditionally, a single crustal average is used as a cutoff for detrital input, and concentrations above or below this value are interpreted as redox derived authigenic enrichment or depletion, while authigenic isotopic signals are frequently corrected for an assumed detrital contribution. Building from an extensive study of soils across the continental United States - which upon transport will become marine sediments - and their elemental concentrations, we find large deviations from accepted crustal averages in redox-sensitive metals (Fe, Cr, U, V) compared to typical detrital tracers (Al, Ti, Sc, Th) and provide new estimates for detrital contributions to the ocean. The variability in these elemental ratios is present over large areas, comparable to the catchment-size of major rivers around the globe. This heterogeneity in detrital flux highlights the need for a reevaluation of how the detrital contribution is assessed in trace metal studies, and the use of confidence intervals rather than single average values, especially in local studies or in the case of small authigenic enrichments.
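The detrital-correction logic can be illustrated with a standard Al-normalized enrichment factor; the sample values and the candidate detrital baselines below are illustrative placeholders, not the paper's estimates:

```python
def enrichment_factor(metal_ppm, al_wt_pct, ref_metal_ppm, ref_al_wt_pct):
    """Al-normalized enrichment factor: EF = (Me/Al)_sample / (Me/Al)_reference.
    EF > 1 suggests authigenic enrichment, EF < 1 depletion -- but only
    relative to the chosen detrital baseline."""
    return (metal_ppm / al_wt_pct) / (ref_metal_ppm / ref_al_wt_pct)

# Hypothetical shale sample and two plausible detrital baselines.
sample_v, sample_al = 160.0, 7.0            # ppm V, wt% Al
baselines = [(120.0, 8.0), (140.0, 7.5)]    # candidate (V ppm, Al wt%) baselines
efs = [enrichment_factor(sample_v, sample_al, v, al) for v, al in baselines]
# The spread of EF across baselines is the point of the paper: for small
# authigenic enrichments, the choice of detrital reference (and its
# confidence interval) can dominate the inferred redox signal.
```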
John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W
2016-01-01
When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
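The cost model reduces to simple per-instrument arithmetic. The sketch below uses hypothetical volumes, wages, and depreciation figures (not the study's 2013 Canadian-dollar inputs) purely to show the structure of the comparison:

```python
# Annual tray cost = surgical volume x instruments per tray x
# (handling time per instrument x wage + depreciation per use).
# All numbers are illustrative, not the study's inputs.

def annual_tray_cost(cases_per_year, instruments_per_tray,
                     handling_min_per_instrument, wage_per_min,
                     depreciation_per_instrument_use):
    per_case = instruments_per_tray * (
        handling_min_per_instrument * wage_per_min
        + depreciation_per_instrument_use
    )
    return cases_per_year * per_case

redundant = annual_tray_cost(300, 60, 0.5, 0.6, 0.10)  # full tray
reduced = annual_tray_cost(300, 25, 0.5, 0.6, 0.10)    # reduced tray
savings = redundant - reduced
```

Because cost scales linearly with surgical volume, the sensitivity ranges reported in the abstract follow directly from rerunning the same arithmetic with low- and high-volume inputs.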
Problems with radiological surveillance instrumentation
International Nuclear Information System (INIS)
Swinth, K.L.; Tanner, J.E.; Fleming, D.M.
1984-09-01
Many radiological surveillance instruments are in use at DOE facilities throughout the country. These instruments are an essential part of all health physics programs, and poor instrument performance can increase program costs or compromise program effectiveness. Generic data from simple tests on newly purchased instruments shows that many instruments will not meet requirements due to manufacturing defects. In other cases, lack of consideration of instrument use has resulted in poor acceptance of instruments and poor reliability. The performance of instruments is highly variable for electronic and mechanical performance, radiation response, susceptibility to interferences and response to environmental factors. Poor instrument performance in these areas can lead to errors or poor accuracy in measurements
Gallaher, Kevin T.; Mura, Marco; Todd, Wm Andrew; Harris, Tarsha L.; Kenyon, Emily; Harris, Tamara; Johnson, Karen C.; Satterfield, Suzanne; Kritchevsky, Stephen B.; Iannaccone, Alessandro
2007-01-01
The reproducibility of macular pigment optical density (MPOD) estimates in the elderly was assessed in 40 subjects (age: 79.1+/-3.5). Test-retest variability was good (Pearson's r coefficient: 0.734), with an average coefficient of variation (CV) of 18.4% and an intraclass correlation coefficient
International Nuclear Information System (INIS)
Gogolak, C.V.
1986-11-01
The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored ⁸⁵Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities.
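A minimal numerical sketch of this estimation problem, assuming a lognormal concentration distribution with a point mass at zero and left-censoring at a known detection limit, can be written with SciPy (the simulated data and starting values are illustrative):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Simulated monitoring data: a fraction of true zeros plus a lognormal
# component, left-censored at a detection limit DL.
p0_true, mu_true, sigma_true = 0.3, 0.0, 1.0
n = 2000
is_zero = rng.random(n) < p0_true
conc = np.where(is_zero, 0.0, rng.lognormal(mu_true, sigma_true, n))
DL = 0.5
detected = conc[conc >= DL]        # observed values above the limit
n_censored = n - detected.size     # zeros and sub-DL values are indistinguishable

def neg_log_lik(theta):
    p0, mu, sigma = theta
    # Censored observation: true zero OR lognormal value below DL.
    p_cens = p0 + (1 - p0) * stats.norm.cdf((np.log(DL) - mu) / sigma)
    ll = n_censored * np.log(p_cens)
    # Detected observation: lognormal density times (1 - p0).
    ll += detected.size * np.log(1 - p0)
    ll += np.sum(stats.norm.logpdf((np.log(detected) - mu) / sigma)
                 - np.log(sigma * detected))
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.2, 0.1, 0.8],
                        bounds=[(1e-6, 1 - 1e-6), (-5, 5), (1e-3, 5)])
p0_hat, mu_hat, sigma_hat = res.x
```

As the abstract notes, the detected values identify the truncated lognormal parameters, and the zero-proportion estimate then follows from the censored count.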
Energy Technology Data Exchange (ETDEWEB)
Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Dario Constantinescu
2016-12-01
Drought stress is a major abiotic stress threatening plant and crop productivity. In the case of fleshy fruits, understanding the mechanisms governing water and carbon accumulation and identifying the genes, QTLs and phenotypes that enable trade-offs between fruit growth and quality under water deficit (WD) conditions is a crucial challenge for breeders and growers. In the present work, 117 recombinant inbred lines of a population of Solanum lycopersicum were phenotyped under control and WD conditions. Plant water status, fruit growth and composition were measured, and the data were used to calibrate a process-based model describing water and carbon fluxes in a growing fruit as a function of plant and environment. Eight genotype-dependent model parameters were estimated using a multiobjective evolutionary algorithm to minimize the prediction errors of fruit dry and fresh mass throughout fruit development. WD increased the fruit dry matter content (up to 85%) and decreased its fresh weight (up to 60%), with large-fruited genotypes being the most sensitive. The mean normalized root mean squared errors of the predictions ranged between 16 and 18% in the population. Variability in the genotypic model parameters allowed us to explore diverse genetic strategies in response to WD. An interesting group of genotypes could be discriminated, in which (i) a low loss of fresh mass under WD was associated with high active uptake of sugars and a low maximum cell-wall extensibility, and (ii) a high dry matter content in the control treatment (C) was associated with a slow decrease of mass flow. Using 501 SNP markers genotyped across the genome, a QTL analysis of the model parameters detected three main QTLs related to xylem and phloem conductivities, on chromosomes 2, 4 and 8. The model was then applied to design ideotypes with high dry matter
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
Linear latent variable models: the lava-package
DEFF Research Database (Denmark)
Holst, Klaus Kähler; Budtz-Jørgensen, Esben
2013-01-01
An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition, an extensive simulation…
Directory of Open Access Journals (Sweden)
Juan Guerra Hernandez
2016-07-01
Research highlights: The results demonstrate that tree variables can be automatically extracted from high-resolution imagery. We highlight the use of UAV systems as a fast, reliable and cost-effective technique for small-scale applications. Keywords: Unmanned aerial systems (UAS); forest inventory; tree crown variables; 3D image modelling; canopy height model (CHM); object-based image analysis (OBIA); structure-from-motion (SfM).
Directory of Open Access Journals (Sweden)
Zhi Yang
2016-10-01
Rice growth monitoring is very important, as rice is one of the staple crops of the world. Rice variables, as quantitative indicators of rice growth, are critical for farming management and yield estimation, and synthetic aperture radar (SAR) has great advantages for monitoring rice variables due to its all-weather observation capability. In this study, eight temporal RADARSAT-2 full-polarimetric SAR images were acquired during the rice growth cycle, and a modified water cloud model (MWCM) was proposed, which accounts for the heterogeneity of the rice canopy in the horizontal direction and its phenological changes, and which, for the first time, also considers the double-bounce scattering between the rice canopy and the underlying surface. Then, three scattering components from an improved polarimetric decomposition were coupled with the MWCM, instead of the backscattering coefficients. Using a genetic algorithm, eight rice variables were estimated, such as the leaf area index (LAI), rice height (h), and the fresh and dry biomass of ears (Fe and De). The accuracy validation showed the MWCM was suitable for the estimation of rice variables during the whole growth season. The validation results showed that the MWCM could predict the temporal behaviors of the rice variables well during the growth cycle (R2 > 0.8). Compared with the original water cloud model (WCM), the relative errors of rice variables with the MWCM were much smaller, especially in the vegetation phase (approximately 15% smaller). Finally, because the empirical coefficients of the MWCM were determined for general cases, the model should in principle be widely applicable, although more applications of the MWCM are needed in future work.
International Nuclear Information System (INIS)
Roseane Pagliaro Avegliano; Vera Akiko Maihara
2014-01-01
Total Diet Studies (TDS) have been carried out to estimate dietary intakes of essential and toxic elements for a large-scale population over a specific period of time. In this study, the TDS was based on the evaluation of food representing a Market Basket (MB), which reflected the dietary habits of the Sao Paulo State population, corresponding to 72% of the average food consumption for the state of Sao Paulo. In the present Total Diet Study, magnesium and manganese concentrations were determined in 30 of the most consumed food groups of an MB of Sao Paulo State, Brazil. Instrumental Neutron Activation Analysis (INAA) has been used successfully on a regular basis in several areas of nutrition and foodstuffs. Element concentrations were determined by INAA in freeze-dried samples and ranged (in mg kg⁻¹): Mg, 41.4 (fats) to 5287 (coffee); Mn, 0.12 (prime-grade beef) to 32.9 (coffee). The average daily Mg and Mn intake was calculated by multiplying the concentration of each element in each table-ready food group by the respective weight (g day⁻¹) of the food group in the MB and summing the products over all food groups. The resulting daily dietary intakes in this study were Mg 174.8 and Mn 1.34 mg day⁻¹. These values were lower than the adequate intake (AI) proposed by the Food and Nutrition Board of the Institute of Medicine (USA National Academy) for adults. The low levels of Mg and Mn intakes presented in this TDS are probably due to the fact that the MB of this study represented only 72% of the weight of the most consumed household foods of Sao Paulo State. (author)
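The intake calculation described is a weighted sum over food groups. The sketch below uses the two Mg concentrations quoted in the abstract plus hypothetical consumption weights (the actual MB weights are not reproduced here):

```python
# Daily intake (mg/day) = sum over food groups of concentration in the
# table-ready food (mg/kg) x daily consumption of that group (g/day) / 1000.
# Consumption weights and the "rice" entry are hypothetical; only the coffee
# and fats Mg concentrations come from the abstract.
mg_conc_mg_per_kg = {"coffee": 5287.0, "fats": 41.4, "rice": 250.0}
consumption_g_per_day = {"coffee": 15.0, "fats": 20.0, "rice": 180.0}

mg_intake_mg_per_day = sum(
    mg_conc_mg_per_kg[food] * consumption_g_per_day[food] / 1000.0  # g -> kg
    for food in mg_conc_mg_per_kg
)
```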
Cameron, J F; Silverleaf, D J
1971-01-01
International Series of Monographs in Nuclear Energy, Volume 107: Radioisotope Instruments, Part 1 focuses on the design and applications of instruments based on the radiation released by radioactive substances. The book first offers information on the physical basis of radioisotope instruments; technical and economic advantages of radioisotope instruments; and radiation hazard. The manuscript then discusses commercial radioisotope instruments, including radiation sources and detectors, computing and control units, and measuring heads. The text describes the applications of radioisotop
Savitha, D; Sejil, T V; Rao, Shwetha; Roshan, C J; Roshan, C J
2013-01-01
The purpose of the study was to investigate the effect of vocal and instrumental music on various physiological parameters during submaximal exercise. Each subject underwent three sessions of an exercise protocol: without music, with vocal music, and with an instrumental version of the same piece of music. The protocol consisted of 10 min of treadmill exercise at 70% HR(max) and 20 min of recovery. Minute-by-minute heart rate, breath-by-breath respiratory parameters, rate of energy expenditure and perceived exertion levels were measured. Music, irrespective of the presence or absence of lyrics, enabled the subjects to exercise at a significantly lower heart rate and oxygen consumption, and reduced the metabolic cost and perceived exertion levels of exercise. Music having a relaxant effect could probably have increased parasympathetic activation, leading to these effects.
Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo
2011-08-01
Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 s HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the variability in the C. botulinum initial load and decimal thermal time was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms, reducing the risk to 10⁻⁹ spores/container with 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min), which carries an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing.
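The Monte Carlo idea can be sketched as follows: sample the decimal reduction time D from a distribution reflecting experimental variability, and compute the probability that the process delivers fewer than the target number of decimal reductions. All distribution parameters and times below are illustrative, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical variability in the decimal reduction time D (seconds) at the
# process temperature: lognormal with median 2.4 s.
D = rng.lognormal(mean=np.log(2.4), sigma=0.3, size=n)

t_hold = 12.0                    # holding time (s), illustrative
log_reductions = t_hold / D      # decimal reductions achieved per draw

# Probability the process fails to deliver at least 5 decimal reductions.
p_fail = np.mean(log_reductions < 5.0)
```

Using the full distribution rather than the average D is exactly what separates the probabilistic risk estimate from the (overconfident) mean-value calculation criticized in the abstract.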
International Nuclear Information System (INIS)
Lovius, L.; Norman, S.; Kjellbert, N.
1990-02-01
An assessment has been made of the impact of spatial variability on the performance of a KBS-3 type repository. The uncertainties in geohydrologically related performance measures have been investigated using conductivity data from one of the Swedish study sites. The analysis was carried out with the PROPER code and the FSCF10 submodel. (authors)
Moeckel, Claudia; Gasic, Bojan; MacLeod, Matthew; Scheringer, Martin; Jones, Kevin C; Hungerbühler, Konrad
2010-06-01
Diel (24-h) concentration variations of polybrominated diphenyl ethers (PBDEs) in air were measured in the center of Zurich, Switzerland, and on Uetliberg, a hill about 5 km from the city center. Air samples were collected simultaneously at both sites over 4 h time periods for 3 consecutive days during a stable high pressure system in August 2007. Higher PBDE concentrations in the city compared to the Uetliberg site indicate that Zurich is a likely source of PBDEs to the atmosphere. A multimedia mass balance model was used to (i) explain the diel cycling pattern of PBDE concentrations observed at both sites in terms of dominant processes and (ii) estimate emission rates of PBDEs from the city to the atmosphere. We estimate that Zurich emits 0.4, 6.2, 1.6, and 0.4 kg year(-1) of the PBDE congeners 28, 47, 99, and 100, respectively. On a per-capita basis, these estimates are within the range or somewhat above those obtained in other studies using approaches based on emission factors (EF) and PBDE production, usage, and disposal data, or concentration measurements. The present approach complements emission estimates based on the EF approach and can also be applied to source areas where EFs and PBDE material flows are poorly characterized or unknown, such as electronic waste processing plants.
James E. Smith; Linda S. Heath
2015-01-01
Our approach is based on a collection of models that convert or augment the USDA Forest Inventory and Analysis program survey data to estimate all forest carbon component stocks, including live and standing dead tree aboveground and belowground biomass, forest floor (litter), down deadwood, and soil organic carbon, for each inventory plot. The data, which include...
Uijlenhoet, R.; Porrà, J.M.; Sempere Torres, D.; Creutin, J.D.
2006-01-01
A stochastic model of the microstructure of rainfall is used to derive explicit expressions for the magnitude of the sampling fluctuations in rainfall properties estimated from raindrop size measurements in stationary rainfall. The model is a marked point process, in which the points represent the
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2015-04-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and a meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, and the radar estimated only 30% of the precipitation observed by rain gauges. Such an underestimation of heavy rainfall, albeit generally less severe than in this extreme case, is typical for operational weather radar in the Netherlands. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the
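The reflectivity-to-rain-rate step being corrected here rests on inverting a Z-R power law. A minimal sketch, using the classic Marshall-Palmer coefficients as defaults (the paper instead ties the coefficients to the normalized DSD):

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b for rain rate R (mm/h).
    Defaults are the classic Marshall-Palmer coefficients; dBZ is first
    converted to linear reflectivity Z (mm^6 m^-3) via Z = 10**(dBZ/10)."""
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
    return (z_linear / a) ** (1.0 / b)
```

Because the retrieval is this sensitive to a and b, replacing fixed coefficients with DSD-dependent ones (per event or per precipitation type) is the natural correction strategy for the second error group.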
Young, Mariel; Johannesdottir, Fjola; Poole, Ken; Shaw, Colin; Stock, J T
2018-02-01
Femoral head diameter is commonly used to estimate body mass from the skeleton. The three most frequently employed methods, designed by Ruff, Grine, and McHenry, were developed using different populations to address different research questions. They were not specifically designed for application to female remains, and their accuracy for this purpose has rarely been assessed or compared in living populations. This study analyzes the accuracy of these methods using a sample of modern British women through the use of pelvic CT scans (n = 97) and corresponding information about the individuals' known height and weight. Results showed that all methods provided reasonably accurate body mass estimates (average percent prediction errors under 20%) for the normal weight and overweight subsamples, but were inaccurate for the obese and underweight subsamples (average percent prediction errors over 20%). When women of all body mass categories were combined, the methods provided reasonable estimates (average percent prediction errors between 16 and 18%). The results demonstrate that different methods provide more accurate results within specific body mass index (BMI) ranges. The McHenry Equation provided the most accurate estimation for women of small body size, while the original Ruff Equation is most likely to be accurate if the individual was obese or severely obese. The refined Ruff Equation was the most accurate predictor of body mass on average for the entire sample, indicating that it should be utilized when there is no knowledge of the individual's body size or if the individual is assumed to be of a normal body size. The study also revealed a correlation between pubis length and body mass, and an equation for body mass estimation using pubis length was accurate in a dummy sample, suggesting that pubis length can also be used to acquire reliable body mass estimates. This has implications for how we interpret body mass in fossil hominins and has particular relevance
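The accuracy metric used above, the average percent prediction error, is straightforward to compute; the estimates and known masses below are hypothetical, and none of the published regression coefficients are reproduced:

```python
import numpy as np

def percent_prediction_error(estimated_kg, known_kg):
    """Absolute percent prediction error used to score body-mass estimators:
    |estimate - known| / known * 100."""
    estimated_kg = np.asarray(estimated_kg, dtype=float)
    known_kg = np.asarray(known_kg, dtype=float)
    return np.abs(estimated_kg - known_kg) / known_kg * 100.0

# Hypothetical femoral-head-based estimates vs. known body mass (kg).
known = np.array([55.0, 70.0, 95.0])
estimated = np.array([60.0, 68.0, 78.0])
mean_ppe = percent_prediction_error(estimated, known).mean()
```

Note how the heaviest (hypothetical) individual dominates the mean error, mirroring the paper's finding that the methods degrade in the obese and underweight BMI ranges.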
Luciani , Annie
2007-01-01
International audience; The expression instrumental interaction as been introduced by Claude Cadoz to identify a human-object interaction during which a human manipulates a physical object - an instrument - in order to perform a manual task. Classical examples of instrumental interaction are all the professional manual tasks: playing violin, cutting fabrics by hand, moulding a paste, etc.... Instrumental interaction differs from other types of interaction (called symbolic or iconic interactio...
Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.
2017-12-01
Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (±3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet, none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling have existed so far. Here we present the first spatially and depth-explicit RPE model (PrimeSCale) that allows estimates of the RPE at large scales. We combine available spatial data (SOC, C/N, GPP, ALT and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and RPE to individual soil depth increments. The model permits taking into account reasonable limiting factors on additional SOC losses by the RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5-95% CI: 1.5-23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by the RPE are further reduced to 6.5 Pg C (5-95% CI: 1.0-16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high RPE risk in Siberian lowland areas and in Alaska north of the Brooks Range. The small overall impact of
DEFF Research Database (Denmark)
Overgaard, Anders; Kallesøe, Carsten Skovmose; Bendtsen, Jan Dimon
2017-01-01
…access to data, there is a desire to create a data-driven model for control. Owing to the large amount of available data, an input-selection method called "Partial Mutual Information" (PMI) is applied. This article introduces a method for including flow-variable delays in PMI. Data from an office building in Bjerringbro are used for the analysis. It is shown that both "Mutual Information" and a "Generalized Regression Neural Network" improve when flow-variable delays are used instead of constant delays.
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, the predicted performance of the battery, especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, SOC estimates are often insufficiently precise, which degrades the running performance of the HEV. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, a general lower-order battery equivalent circuit model (GLM), which includes a column accumulation model, an open-circuit voltage model and an SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation of SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained with the VSEKF method for the running performance of the HEV. The error rate of SOC estimation with the VSEKF method falls in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion cell and the lithium-ion battery pack obtained with the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion pack of an HEV under practical driving conditions.
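The core idea, coulomb counting corrected by a voltage-based update with an adaptive weight, can be sketched as a one-state Kalman filter. This is a simplified stand-in for the VSEKF: the linear OCV curve, noise levels, and cell parameters are all illustrative:

```python
import numpy as np

capacity_as = 3600.0 * 2.5          # 2.5 Ah cell, in ampere-seconds
dt = 1.0                            # time step (s)

def ocv(soc):                       # illustrative linear OCV(SOC) curve
    return 3.0 + 1.2 * soc

rng = np.random.default_rng(7)
true_soc = 0.9
soc_est, p = 0.5, 0.25              # deliberately wrong initial estimate
q, r = 1e-7, 1e-3                   # process / measurement noise variances

for _ in range(600):
    current = 1.0                   # constant 1 A discharge
    true_soc -= current * dt / capacity_as
    # Predict: coulomb counting (the Ah-integration part).
    soc_est -= current * dt / capacity_as
    p += q
    # Correct: compare predicted and measured terminal voltage via OCV.
    v_meas = ocv(true_soc) + rng.normal(0.0, 0.01)
    h = 1.2                         # dOCV/dSOC for the linear curve
    k = p * h / (h * h * p + r)     # Kalman gain (the adaptive weight)
    soc_est += k * (v_meas - ocv(soc_est))
    p *= (1.0 - k * h)
```

The gain k plays the role of the adaptive weighting coefficient: when the voltage measurement is informative it pulls the estimate away from pure Ah integration, which otherwise accumulates drift from the wrong initial SOC.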
Calabrese, Evan; Badea, Alexandra; Watson, Charles; Johnson, G Allan
2013-05-01
There has been growing interest in the role of postnatal brain development in the etiology of several neurologic diseases. The rat has long been recognized as a powerful model system for studying neuropathology and the safety of pharmacologic treatments. However, the complex spatiotemporal changes that occur during rat neurodevelopment remain to be elucidated. This work establishes the first magnetic resonance histology (MRH) atlas of the developing rat brain, with an emphasis on quantitation. The atlas comprises five specimens at each of nine time points, imaged with eight distinct MR contrasts and segmented into 26 developmentally defined brain regions. The atlas was used to establish a timeline of morphometric changes and variability throughout neurodevelopment and represents a quantitative database of rat neurodevelopment for characterizing rat models of human neurologic disease. Published by Elsevier Inc.
Georgakarakos, Efstratios; Xenakis, Antonios; Georgiadis, George S
2018-02-01
We conducted a computational study to assess the hemodynamic impact of variant main body-to-iliac limb length (L1/L2) ratios on certain hemodynamic parameters acting on an endograft (EG) in either the normal bifurcated (Bif) or the cross-limb (Cx) configuration. A custom bifurcated 3D model was computationally created and meshed using the commercially available ANSYS ICEM (Ansys Inc., Canonsburg, PA, USA) software. The total length of the EG was kept constant, while the L1/L2 ratio ranged from 0.3 to 1.5 in the Bif and Cx reconstructed EG models. The compliance of the graft was modeled using a fluid-structure interaction method. Important hemodynamic parameters such as the pressure drop along the EG, wall shear stress (WSS) and helicity were calculated. The greatest pressure decrease across the EG occurred at the peak systolic phase. With increasing L1/L2, the pressure drop increased for the Cx configuration and decreased for the Bif. The greatest helicity (4.1 m/s^2) was seen at peak systole in the Cx model with a ratio of 1.5, whereas the greatest value for the Bif (2 m/s^2) occurred at peak systole with the shortest L1/L2 ratio (0.3). Similarly, the maximum WSS value was highest (2.74 Pa) at peak systole for the 1.5 L1/L2 ratio of the Cx configuration, while the maximum WSS equaled 2 Pa for all length ratios of the Bif modification (with the WSS for L1/L2 = 0.3 being marginally higher). There was greater discrepancy in the WSS values across L1/L2 ratios for the Cx configuration than for the Bif. Different L1/L2 ratios are thus shown to affect the pressure distribution along the entire EG, while the length ratio predisposing to the highest helicity or WSS values also depends on the iliac limb pattern of the EG. Since current custom-made EG solutions can reproduce variability in main body-to-iliac limb length ratios, further computational as well as clinical research is warranted to delineate and predict the hemodynamic and clinical effect of variable
Energy Technology Data Exchange (ETDEWEB)
Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn
2012-12-11
Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
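The nudging technique the abstract describes, Newtonian relaxation of the model state toward a common reference meteorology so that paired simulations differ only in the forcing of interest, can be sketched with a toy scalar model. The tendency term, time constants, and all numbers are illustrative assumptions, not the GCM configuration.

```python
import numpy as np

def run(x0, forcing, x_ref, tau, dt=0.1, n=200):
    """Integrate a toy scalar 'climate' state with Newtonian relaxation
    (nudging) toward a reference state x_ref; tau -> infinity recovers
    the free-running model."""
    x, traj = x0, []
    for _ in range(n):
        tendency = -0.1 * x + forcing   # stand-in for model physics
        nudging = (x_ref - x) / tau     # relaxation toward reference meteorology
        x += dt * (tendency + nudging)
        traj.append(x)
    return np.array(traj)

# Two runs differing only in initial state (a stand-in for internal variability):
free_a = run(1.0, 0.5, 0.0, tau=1e9)
free_b = run(2.0, 0.5, 0.0, tau=1e9)
nudged_a = run(1.0, 0.5, 0.0, tau=1.0)
nudged_b = run(2.0, 0.5, 0.0, tau=1.0)
```

The nudged pair collapses onto nearly identical trajectories, which is why differencing nudged pre-industrial and present-day runs isolates the forced signal with far less averaging.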
Instrument uncertainty predictions
International Nuclear Information System (INIS)
Coutts, D.A.
1991-07-01
The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation, and reflects a combination of instrument errors, modeling limitations, and deficiencies in understanding of the phenomena involved. This report provides several methodologies for estimating an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pre-test and post-test uncertainty
Directory of Open Access Journals (Sweden)
Julia Neelmeijer
2014-09-01
We use 124 scenes of TerraSAR-X data that were acquired in 2009 and 2010 to analyse the spatial and temporal variability in surface kinematics of the debris-covered Inylchek Glacier, located in the Tien Shan mountain range in Central Asia. By applying the feature tracking method to the intensity information of the radar data and combining the results from the ascending and descending orbits, we derive the surface velocity field of the glaciated area. Analysing the seasonal variations over the upper part of the Southern Inylchek branch, we find a temperature-related increase in velocity from 25 cm/d up to 50 cm/d between spring and summer, with the peak occurring in June. Another prominent velocity peak is observable one month later in the lower part of the Southern Inylchek branch. This area shows generally little motion, with values of approximately 5-10 cm/d over the year, but yields surface kinematics of up to 25 cm/d during the peak period. Comparisons of the dates of annual glacial lake outburst floods (GLOFs) of the proglacial Lake Merzbacher suggest that this lower part is directly influenced by the drainage, leading to the observed mini-surge, which has over twice the normal displacement rate. With regard to the GLOF and the related response of Inylchek Glacier, we conclude that X-band radar systems such as TerraSAR-X have a high potential for detecting and characterising small-scale glacial surface kinematic variations and should be considered for future inter-annual glacial monitoring tasks.
Federal Laboratory Consortium — Provides instrumentation support for flight tests of prototype weapons systems using a vast array of airborne sensors, transducers, signal conditioning and encoding...
Baur, Albert H; Lauf, Steffen; Förster, Michael; Kleinschmit, Birgit
2015-07-01
Substantive and concerted action is needed to mitigate climate change. However, international negotiations struggle to adopt ambitious legislation and to anticipate more climate-friendly developments. Thus, stronger actions are needed from other players. Cities, being greenhouse gas emission centers, play a key role in promoting the climate change mitigation movement by becoming hubs for smart and low-carbon lifestyles. In this context, a stronger linkage between greenhouse gas emissions and urban development and policy-making seems promising. Therefore, simple approaches are needed to objectively identify crucial emission drivers for deriving appropriate emission reduction strategies. In analyzing 44 European cities, the authors investigate possible socioeconomic and spatial determinants of urban greenhouse gas emissions. Multiple statistical analyses reveal that the average household size and the edge density of discontinuous dense urban fabric explain up to 86% of the total variance of greenhouse gas emissions of EU cities (when controlled for varying electricity carbon intensities). Finally, based on these findings, a multiple regression model is presented to determine greenhouse gas emissions. It is independently evaluated with ten further EU cities. The reliance on only two indicators shows that the model can be easily applied in addressing important greenhouse gas emission sources of European urbanites, when varying power generations are considered. This knowledge can help cities develop adequate climate change mitigation strategies and promote respective policies on the EU or the regional level. The results can further be used to derive first estimates of urban greenhouse gas emissions, if no other analyses are available. Copyright © 2015 Elsevier B.V. All rights reserved.
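The two-indicator regression model the study proposes can be sketched as an ordinary least-squares fit. The variable names mirror the paper's two determinants, but the synthetic data and any coefficients fit here are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def fit_ghg_model(household_size, edge_density, emissions):
    """OLS fit of city GHG emissions on average household size and the
    edge density of discontinuous dense urban fabric (the two predictors
    the study retained)."""
    X = np.column_stack([np.ones_like(household_size), household_size, edge_density])
    coef, *_ = np.linalg.lstsq(X, emissions, rcond=None)
    return coef  # [intercept, b_household, b_edge]

# Synthetic check: data generated from a known linear model is recovered exactly.
hh = np.array([1.8, 2.0, 2.2, 2.4, 2.6, 2.1])       # persons per household
ed = np.array([30.0, 45.0, 25.0, 60.0, 40.0, 35.0]) # edge density (arbitrary units)
ghg = 12.0 - 3.0 * hh + 0.05 * ed                   # hypothetical emissions
coef = fit_ghg_model(hh, ed, ghg)
```

Fitting on a calibration set of cities and predicting held-out cities, as the paper does with ten further EU cities, is then a single matrix product with `coef`.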
Directory of Open Access Journals (Sweden)
Walter Muñoz Cruz
2006-12-01
Operation variables of a stirred-tank bioreactor were studied in order to culture cell suspensions of Azadirachta indica A. Juss. Carboxymethylcellulose (CMC, 0.7% w/v) was used to estimate the oxygen transfer coefficient, kLa, between 120-400 rpm and 0.05-0.6 vvm, obtaining values of 0.5-8.0 h^-1. The kLa for suspension cultures of A. indica in Erlenmeyer flasks was 0.6-1.2 h^-1. Based on these results, the operating conditions of the bioreactor were defined and the growth of A. indica cells was evaluated at 200 rpm and 0.2 vvm of air, reaching 9.2 g dry cells/l. Cell growth was not limited by the oxygen supply. The sizes of cell aggregates cultivated in magnetically stirred baffled Erlenmeyer flasks and in the bioreactor were similar, but smaller than those obtained in Erlenmeyer flasks with orbital agitation. The present study establishes parameters for the operation of bioreactors with A. indica and confirms that media with CMC can be used to estimate operating variables in bioreactors.
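The abstract does not state how kLa was measured; one standard laboratory approach for such values is the dynamic gassing-out method, sketched below under that assumption. The saturation concentration, time grid, and kLa value are all illustrative.

```python
import numpy as np

def kla_dynamic(t, c, c_sat):
    """Estimate kLa (h^-1) from a re-oxygenation curve via the dynamic
    method: dC/dt = kLa * (C* - C), so ln((C* - C)/(C* - C0)) = -kLa * t,
    and the regression slope gives -kLa."""
    y = np.log((c_sat - c) / (c_sat - c[0]))
    return -np.polyfit(t, y, 1)[0]

# Synthetic curve with kLa = 4 h^-1 (within the paper's 0.5-8.0 h^-1 range):
t = np.linspace(0.0, 0.5, 50)                 # h
c = 7.5 - (7.5 - 1.0) * np.exp(-4.0 * t)      # dissolved O2, mg/l, C* = 7.5
```

On a noise-free exponential curve the regression recovers the assumed kLa exactly; with real probe data one would restrict the fit to the linear region of the log plot.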
Estimating Marginal Returns to Education. NBER Working Paper No. 16474
Carneiro, Pedro; Heckman, James J.; Vytlacil, Edward J.
2010-01-01
This paper estimates the marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and…
Directory of Open Access Journals (Sweden)
Tixier-Boichard Michèle
2003-03-01
Abstract In order to investigate the possibility of using the dwarf gene for egg production, two dwarf brown-egg laying lines were selected for 16 generations on average clutch length; one line (L1) was normally feathered and the other (L2) was homozygous for the naked neck gene NA. A control line from the same base population, dwarf and segregating for the NA gene, was maintained during the selection experiment under random mating. The average clutch length was normalized using a Box-Cox transformation. Genetic variability and selection response were estimated either with mixed-model methodology, or with the classical methods of calculating genetic gain as the deviation from the control line and realized heritability as the ratio of the selection response to the cumulative selection differentials. Heritability of average clutch length was estimated to be 0.42 ± 0.02 with a multiple-trait animal model, whereas the estimates of realized heritability were lower, being 0.28 and 0.22 in lines L1 and L2, respectively. REML estimates of heritability were found to decline over generations of selection, suggesting a departure from the infinitesimal model, either because a limited number of genes was involved or because their frequencies were changed. The yearly genetic gains in average clutch length, after normalization, were estimated to be 0.37 ± 0.02 and 0.33 ± 0.04 with the classical methods, and 0.46 ± 0.02 and 0.43 ± 0.01 with animal-model methodology, for lines L1 and L2 respectively, which represented about 30% of the genetic standard deviation on the transformed scale. Selection response appeared to be faster in line L2, homozygous for the NA gene, but the final cumulative selection response for clutch length did not differ between the L1 and L2 lines at generation 16.
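The classical realized-heritability calculation referred to above is a one-line ratio. The sketch below uses hypothetical per-generation numbers, not the experiment's data; only the formula itself is taken from the abstract.

```python
import numpy as np

def realized_heritability(responses, sel_diffs):
    """Realized h^2 as cumulative selection response divided by the
    cumulative selection differential (an alternative is regressing
    cumulative response on cumulative differential through the origin)."""
    return np.sum(responses) / np.sum(sel_diffs)

# 16 hypothetical generations in which each generation responds with h^2 = 0.28,
# matching the realized estimate reported for line L1:
sel_diffs = np.full(16, 1.5)    # selection differential per generation (trait units)
responses = 0.28 * sel_diffs    # per-generation response
```

When per-generation response is noisy, the cumulative ratio damps the noise, which is one reason realized estimates can drift below single-generation REML estimates.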
Makowski, Jessica K.; Chambers, Don P.; Bonin, Jennifer A.
2015-06-01
Previous studies have suggested that ocean bottom pressure (OBP) from the Gravity Recovery and Climate Experiment (GRACE) can be used to measure the depth-averaged, or barotropic, transport variability of the Antarctic Circumpolar Current (ACC). Here, we use GRACE OBP observations to calculate transport variability in a region of the southern Indian Ocean encompassing the major fronts of the ACC. We use a statistical analysis of a simulated GRACE-like data set to determine the uncertainty of the estimated transport for the 2003.0-2013.0 time period. We find that when the transport is averaged over 60° of longitude, the uncertainty (one standard error) is close to 1 Sv (1 Sv = 10^6 m^3 s^-1) for low-pass filtered transport, which is significantly smaller than the signal and lower than previous studies have found. The interannual variability is correlated with the Southern Annular Mode (SAM) (0.61), but more highly correlated with circumpolar zonally averaged winds between 45°S and 65°S (0.88). GRACE transport reflects significant changes in transport between 2007 and 2009 that are observed in the zonal wind variations but not in the SAM index. We also find a statistically significant trend in transport (-1.0 ± 0.4 Sv yr^-1, 90% confidence) that is correlated with a local deceleration in zonal winds related to an asymmetry in the SAM on multidecadal periods.
International Nuclear Information System (INIS)
Molina, A.; Campo, A. D. del
2011-01-01
LAI is a key factor in light and rainfall interception processes in forest stands and, for this reason, is called to play an important role in global-change adaptive silviculture. It is therefore necessary to develop practical and operative methodologies to measure this parameter, as well as simple relationships with other silvicultural variables. This work studied 1) the feasibility of the LAI-2000 sensor for estimating stand LAI when readings are taken under direct sunlight conditions; and 2) the ability of LAI to explain the partitioning of rainfall into throughfall (T) in an Aleppo pine stand after different thinning intensities, as well as its relationships to basal area (G), cover (FCC), and tree density (D). Results showed that the angular correction scheme applied to LAI-2000 direct-sunlight readings stabilized them for different solar angles, allowing a better operational use of the LAI-2000 in Mediterranean areas, where uniform overcast conditions are difficult to meet and predict. Forest cover showed the highest predictive ability for LAI (R^2 = 0.98; S = 0.28), followed by G (R^2 = 0.96; S = 0.43) and D (R^2 = 0.50; S = 0.28). On the hydrological side, T increased with thinning intensity, with G the most explanatory variable (R^2 = 0.81; S = 3.07) and LAI the one showing the poorest relation to it (R^2 = 0.69; S = 3.95). These results open the way for forest hydrologic modeling taking LAI as an input variable, either estimated from the LAI-2000 or deduced from inventory data. (Author) 36 refs.
Neutron multiplication measurement instrument
International Nuclear Information System (INIS)
Nixon, K.V.; Dowdy, E.J.; France, S.W.; Millegan, D.R.; Robba, A.A.
1983-01-01
The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily and an available error estimate enables the user to judge the significance of results
Directory of Open Access Journals (Sweden)
Juha Hyyppä
2010-01-01
In this study we compared the accuracy of low-pulse airborne laser scanning (ALS) data, multi-temporal high-resolution noninterferometric TerraSAR-X radar data and a combined feature set derived from these data in the estimation of forest variables at plot level. The TerraSAR-X data set consisted of seven dual-polarized (HH/HV or VH/VV) Stripmap mode images from all seasons of the year. We were especially interested in distinguishing between the tree species. The dependent variables estimated included mean volume, basal area, mean height, mean diameter and tree species-specific mean volumes. Selection of the best possible feature set was based on a genetic algorithm (GA). The nonparametric k-nearest neighbour (k-NN) algorithm was applied to the estimation. The research material consisted of 124 circular plots measured at tree level and located in the vicinity of Espoo, Finland. There are large variations in the elevation and forest structure in the study area, making it demanding for image interpretation. The best feature set contained 12 features, nine of them originating from the ALS data and three from the TerraSAR-X data. The relative RMSEs for the best performing feature set were 34.7% (mean volume), 28.1% (basal area), 14.3% (mean height), 21.4% (mean diameter), 99.9% (mean volume of Scots pine), 61.6% (mean volume of Norway spruce) and 91.6% (mean volume of deciduous tree species). The combined feature set outperformed an ALS-based feature set marginally; in fact, the latter was better in the case of species-specific volumes. Features from TerraSAR-X alone performed poorly. However, due to favorable temporal resolution, satellite-borne radar imaging is a promising data source for updating large-area forest inventories based on low-pulse ALS.
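The k-NN estimation step can be sketched as follows. The GA-based feature selection described in the study is assumed to have happened upstream; Euclidean distance, an unweighted mean, and all plot values here are simplifying assumptions rather than the study's configuration.

```python
import numpy as np

def knn_estimate(features_train, y_train, features_query, k=5):
    """Nonparametric k-NN estimate of a plot-level forest attribute
    (e.g. mean volume) from remote-sensing features: average the target
    values of the k training plots nearest in feature space."""
    d = np.linalg.norm(features_train - features_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Tiny illustration: 6 training plots, 3 (selected) features each, volume in m^3/ha.
X = np.array([[1.0, 0.20, 5.0], [1.1, 0.25, 5.2], [3.0, 0.90, 9.0],
              [3.2, 1.00, 9.5], [2.0, 0.50, 7.0], [2.1, 0.55, 7.2]])
vol = np.array([120.0, 130.0, 310.0, 330.0, 210.0, 220.0])
est = knn_estimate(X, vol, np.array([1.05, 0.22, 5.1]), k=2)
```

In practice the features would be standardized (or distance-weighted) before nearest-neighbour search, since ALS and radar features live on very different scales.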
Nonparametric instrumental regression with non-convex constraints
International Nuclear Information System (INIS)
Grasmair, M; Scherzer, O; Vanhems, A
2013-01-01
This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition. (paper)
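The regularized-IV idea can be illustrated in its simplest linear, finite-dimensional analogue: a Tikhonov (ridge) penalty added to the instrumented least-squares criterion. The paper's setting is nonparametric with non-convex shape constraints; this sketch keeps only the regularization mechanics, and the data-generating process below is an assumption for illustration.

```python
import numpy as np

def tikhonov_iv(X, Z, y, alpha):
    """Minimize ||P_Z (y - X b)||^2 + alpha ||b||^2, with P_Z the
    projection onto the column space of the instruments Z, computed via
    first-stage fitted values rather than forming P_Z explicitly."""
    first_stage = np.linalg.lstsq(Z, X, rcond=None)[0]
    X_hat = Z @ first_stage                       # P_Z X
    A = X_hat.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(A, X_hat.T @ y)

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)      # instrument
u = rng.standard_normal(n)      # unobserved confounder
x = z + u                       # endogenous regressor
y = 2.0 * x + u                 # true causal effect: 2
b_iv = tikhonov_iv(x[:, None], z[:, None], y, alpha=1e-6)
b_ols = np.linalg.lstsq(x[:, None], y, rcond=None)[0]
```

Here alpha is tiny because the toy problem is well-posed; in the paper's ill-posed nonparametric setting the Tikhonov penalty does real stabilizing work, and the shape constraints enter as a constrained minimization.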
Directory of Open Access Journals (Sweden)
P. Forkman
2012-11-01
Measurements of mesospheric carbon monoxide, CO, provide important information about the dynamics in the mesosphere region since CO has a long lifetime at these altitudes. Ground-based measurements of mesospheric CO made at the Onsala Space Observatory, OSO (57° N, 12° E), are presented. The dataset covers the period 2002–2008 and is hence uniquely long for ground-based observations. The simple and stable 115 GHz frequency-switched radiometer, calibration method, retrieval procedure and error characterization are described. A comparison between our measurements and co-located CO measurements from the satellite sensors ACE-FTS on Scisat (v2.2), MLS on Aura (v3-3), MIPAS on Envisat (V3O_CO_12 + 13 and V4O_CO_200) and SMR on Odin (v225 and v021) is carried out. Our instrument, OSO, and the four satellite instruments show the same general variation of the vertical distribution of mesospheric CO in both the annual cycle and in shorter time period events, with high CO mixing ratios during winter and very low amounts during summer in the observed 55–100 km altitude range. During 2004–2008 the agreement of the OSO instrument and the satellite sensors ACE-FTS, MLS and MIPAS (200) is good in the altitude range 55–70 km. Above 70 km, OSO shows up to 25% higher CO column values compared to both ACE and MLS. For the time period 2002–2004, CO from MIPAS (12 + 13) is up to 50% lower than OSO between 55 and 70 km. Mesospheric CO from the two versions of SMR deviates up to ±65% when compared to OSO, but the analysis is based on only a few co-locations.
International Nuclear Information System (INIS)
Ubbes, W.F.; Yow, J.L. Jr.
1988-01-01
Instrumentation is developed for the Civilian Radioactive Waste Management Program to meet several different (and sometimes conflicting) objectives. This paper addresses instrumentation development for data needs that are related either directly or indirectly to a repository site, but does not touch on instrumentation for work with waste forms or other materials. Consequently, this implies a relatively large scale for the measurements, and an in situ setting for instrument performance. In this context, instruments are needed for site characterization to define phenomena, develop models, and obtain parameter values, and for later design and performance confirmation testing in the constructed repository. The former set of applications is more immediate, and is driven by the needs of program design and performance assessment activities. A host of general technical and nontechnical issues have arisen to challenge instrumentation development. Instruments can be classed into geomechanical, geohydrologic, or other specialty categories, but these issues cut across artificial classifications. These issues are outlined. Despite this imposing list of issues, several case histories are cited to evaluate progress in the area
Matsuda, Yuri; Kishimoto, Miori; Kushida, Kazuya; Yamada, Kazutaka; Shimizu, Miki; Itoh, Hiroshi
2017-09-01
OBJECTIVE To investigate effects of changes in analytic variables and contrast medium osmolality on glomerular filtration rate estimated by CT (CT-GFR) in dogs. ANIMALS 4 healthy anesthetized Beagles. PROCEDURES GFR was estimated by inulin clearance, and dogs underwent CT-GFR with iodinated contrast medium (iohexol or iodixanol) in a crossover-design study. Dynamic renal CT scanning was performed. Patlak plot analysis was used to calculate GFR with the renal cortex or whole kidney selected as the region of interest. The renal cortex was analyzed just prior to time of the second cortical attenuation peak. The whole kidney was analyzed 60, 80, 100, and 120 seconds after the appearance of contrast medium. Automated GFR calculations were performed with preinstalled perfusion software including 2 noise reduction levels (medium and strong). The CT-GFRs were compared with GFR estimated by inulin clearance. RESULTS There was no significant difference in CT-GFR with iohexol versus iodixanol in any analyses. The CT-GFR at the renal cortex, CT-GFR for the whole kidney 60 seconds after appearance of contrast medium, and CT-GFR calculated by perfusion software with medium noise reduction did not differ significantly from GFR estimated by inulin clearance. The CT-GFR was underestimated at ≥ 80 seconds after contrast medium appearance (whole kidney) and when strong noise reduction was used with perfusion CT software. CONCLUSIONS AND CLINICAL RELEVANCE Selection of the renal cortex as region of interest or use of the 60-second time point for whole-kidney evaluation yielded the best CT-GFR results. The perfusion software used produced good results with appropriate noise reduction. IMPACT FOR HUMAN MEDICINE The finding that excessive noise reduction caused underestimation of CT-GFR suggests that this factor should also be considered in CT-GFR examination of human patients.
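The Patlak plot analysis used for the CT-GFR calculations is a linear regression on transformed time-attenuation curves, and can be sketched as follows. The synthetic input function, the rectangle-rule integral, and the uptake constant are illustrative assumptions, not the study's data or software.

```python
import numpy as np

def patlak_slope(aif, tissue, dt):
    """Patlak plot: regress tissue(t)/AIF(t) against
    cumulative-integral(AIF)/AIF(t); the slope estimates the
    unidirectional uptake constant, which scales to GFR once
    multiplied by the relevant tissue volume."""
    integral = np.cumsum(aif) * dt      # rectangle-rule integral of the input
    x = integral / aif
    y = tissue / aif
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic dynamic-CT curves built from a known uptake constant K = 0.05 /s:
t = np.arange(1, 121)                   # s, one frame per second
aif = np.exp(-t / 40.0) + 0.2           # arterial input function (arbitrary units)
K, v0 = 0.05, 0.3                       # uptake constant and blood-volume fraction
tissue = K * np.cumsum(aif) * 1.0 + v0 * aif
```

On these noise-free curves the regression recovers the assumed K exactly; with real renal curves, frame timing and the choice of region of interest (cortex versus whole kidney) drive the differences the study reports.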
Bergmann-Wolf, Inga; Dobslaw, Henryk
2016-04-01
Estimating global barystatic sea-level variations from monthly mean gravity fields delivered by the Gravity Recovery and Climate Experiment (GRACE) satellite mission requires additional information about geocenter motion. These variations are not available directly due to the mission implementation in the CM-frame and are represented by the degree-1 terms of the spherical harmonics expansion. Global degree-1 estimates can be determined with the method of Swenson et al. (2008) from ocean mass variability, the geometry of the global land-sea distribution, and GRACE data of higher degrees and orders. Consequently, a recursive relation between the derivation of ocean mass variations from GRACE data and the introduction of geocenter motion into GRACE data exists. In this contribution, we will present a recent improvement to the processing strategy described in Bergmann-Wolf et al. (2014) by introducing a non-homogeneous distribution of global ocean mass variations in the geocenter motion determination strategy, which is due to the effects of loading and self-attraction induced by mass redistributions at the surface. A comparison of different GRACE-based oceanographic products (barystatic signal for both the global oceans and individual basins; barotropic transport variations of major ocean currents) with degree-1 terms estimated with a homogeneous and non-homogeneous ocean mass representation will be discussed, and differences in noise levels in most recent GRACE solutions from GFZ (RL05a), CSR, and JPL (both RL05) and their consequences for the application of this method will be discussed. Swenson, S., D. Chambers and J. Wahr (2008), Estimating geocenter variations from a combination of GRACE and ocean model output, J. Geophys. Res., 113, B08410 Bergmann-Wolf, I., L. Zhang and H. Dobslaw (2014), Global Eustatic Sea-Level Variations for the Approximation of Geocenter Motion from GRACE, J. Geod. Sci., 4, 37-48
DEFF Research Database (Denmark)
Holdensen, Lars; Hauggaard-Nielsen, Henrik; Jensen, Erik Steen
2007-01-01
abundance in spring barley and N2-fixing pea was measured within the 0.15-4 m scale at flowering and at maturity. The short-range spatial variability of soil δ15N natural abundance and symbiotic nitrogen fixation were high at both growth stages. Along a 4-m row, the δ15N natural abundance in barley......-abundance are that estimates of symbiotic N2-fixation can be obtained from the natural abundance method if at least half a square meter of crop and reference plants is sampled for the isotopic analysis. In fields with small amounts of representative reference crops (weeds) it might be necessary to sow in reference crop...
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung Jae; Seo, Seong Gyu
1995-03-15
This textbook deals with instrumental analysis and consists of nine chapters. It covers an introduction to analytical chemistry, the process of analysis and the types and forms of analysis; electrochemistry, including basic theory, potentiometry and conductometry; electromagnetic radiation and optical components, with an introduction and applications; ultraviolet and visible spectrophotometry; atomic absorption spectrophotometry, including flame emission spectrometry and plasma emission spectrometry; and other instrumental techniques such as infrared spectrophotometry, X-ray spectrophotometry, mass spectrometry, chromatography and radiochemistry.
International Nuclear Information System (INIS)
Bixby, W.W.
1979-01-01
A description of instrumentation used in the Loss-of-Fluid Test (LOFT) large break Loss-of-Coolant Experiments is presented. Emphasis is placed on hydraulic and thermal measurements in the primary system piping and components, reactor vessel, and pressure suppression system. In addition, instrumentation which is being considered for measurement of phenomena during future small break testing is discussed. (orig.)
Gottschalk, Fadri; Nowack, Bernd
2013-01-01
This article presents a method of probabilistically computing species sensitivity distributions (SSD) that is well-suited to cope with distinct data scarcity and variability. First, a probability distribution that reflects the uncertainty and variability of sensitivity is modeled for each species considered. These single species sensitivity distributions are then combined to create an SSD for a particular ecosystem. A probabilistic estimation of the risk is carried out by combining the probability of critical environmental concentrations with the probability of organisms being impacted negatively by these concentrations. To evaluate the performance of the method, we developed SSD and risk calculations for the aquatic environment exposed to triclosan. The case studies showed that the probabilistic results reflect the empirical information well, and the method provides a valuable alternative or supplement to more traditional methods for calculating SSDs based on averaging raw data and/or on using theoretical distributional forms. A comparison and evaluation with single SSD values (5th-percentile [HC5]) revealed the robustness of the proposed method. Copyright © 2012 SETAC.
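The pooling-and-percentile logic of the probabilistic SSD can be sketched as follows; the species EC50 values, the lognormal uncertainty width, and the draw count are illustrative assumptions, not data from the triclosan case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-species sensitivity: each species' toxicity value is itself
# uncertain, modeled as a lognormal around an assumed EC50 (ug/L).
species_ec50 = [50.0, 120.0, 300.0, 80.0, 500.0, 30.0]
n_draws = 10000

# Pool draws from all single-species sensitivity distributions into one SSD sample.
ssd = np.concatenate([
    rng.lognormal(mean=np.log(ec), sigma=0.3, size=n_draws)
    for ec in species_ec50
])

# HC5: the concentration hazardous to 5% of species, read off the pooled SSD.
hc5 = np.percentile(ssd, 5)
print(round(hc5, 1))
```

Risk characterisation would then compare this HC5 distribution against the distribution of predicted environmental concentrations.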
Mears, Jessica; Abubakar, Ibrahim; Cohen, Theodore; McHugh, Timothy D; Sonnenberg, Pam
2015-01-21
To systematically review the evidence for the impact of study design and setting on the interpretation of tuberculosis (TB) transmission using clustering derived from Mycobacterial Interspersed Repetitive Units-Variable Number Tandem Repeats (MIRU-VNTR) strain typing. MEDLINE, EMBASE, CINHAL, Web of Science and Scopus were searched for articles published before 21st October 2014. Studies in humans that reported the proportion of clustering of TB isolates by MIRU-VNTR were included in the analysis. Univariable meta-regression analyses were conducted to assess the influence of study design and setting on the proportion of clustering. The search identified 27 eligible articles reporting clustering between 0% and 63%. The number of MIRU-VNTR loci typed, requiring consent to type patient isolates (as a proxy for sampling fraction), the TB incidence and the maximum cluster size explained 14%, 14%, 27% and 48% of between-study variation, respectively, and had a significant association with the proportion of clustering. Although MIRU-VNTR typing is being adopted worldwide there is a paucity of data on how study design and setting may influence estimates of clustering. We have highlighted study design variables for consideration in the design and interpretation of future studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Estimating the effects of wages on obesity.
Kim, DaeHwan; Leigh, John Paul
2010-05-01
To estimate the effects of wages on obesity and body mass. Data on household heads, aged 20 to 65 years, with full-time jobs, were drawn from the Panel Study of Income Dynamics for 2003 to 2007. The Panel Study of Income Dynamics is a nationally representative sample. Instrumental variables (IV) for wages were created using knowledge of computer software and state legal minimum wages. Least squares (linear regression) with corrected standard errors was used to estimate the equations. Statistical tests revealed both instruments were strong, and tests for over-identifying restrictions were favorable. Wages were found to be statistically significant predictors; the results suggest low wages increase obesity prevalence and body mass.
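A minimal sketch of the IV logic used in such a study, with simulated data standing in for the PSID variables (the instrument, coefficients, and noise structure below are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: confounder u affects both wage and BMI, while the
# instrument z (e.g. a minimum-wage shifter) moves wage but not BMI directly.
u = rng.normal(size=n)
z = rng.normal(size=n)
wage = 1.0 * z + u + rng.normal(size=n)
true_effect = -0.5                      # assumed: higher wage -> lower BMI
bmi = true_effect * wage + u + rng.normal(size=n)

X = np.column_stack([np.ones(n), wage])
Z = np.column_stack([np.ones(n), z])

# Naive OLS: biased because the confounder u is omitted.
b_ols = np.linalg.lstsq(X, bmi, rcond=None)[0]

# Two-stage least squares: first stage projects wage on the instrument,
# second stage regresses BMI on the fitted wage.
wage_hat = Z @ np.linalg.lstsq(Z, wage, rcond=None)[0]
X2 = np.column_stack([np.ones(n), wage_hat])
b_iv = np.linalg.lstsq(X2, bmi, rcond=None)[0]

print(round(b_ols[1], 2), round(b_iv[1], 2))
```

With this data-generating process the OLS slope is pulled toward zero by the confounder, while the 2SLS slope recovers the assumed effect of -0.5.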
Stafoggia, Massimo; Schwartz, Joel; Badaloni, Chiara; Bellander, Tom; Alessandrini, Ester; Cattani, Giorgio; De' Donato, Francesca; Gaeta, Alessandra; Leone, Gianluca; Lyapustin, Alexei; Sorek-Hamer, Meytar; de Hoogh, Kees; Di, Qian; Forastiere, Francesco; Kloog, Itai
2017-02-01
Health effects of air pollution, especially particulate matter (PM), have been widely investigated. However, most studies rely on few monitors located in urban areas for short-term assessments, or on land-use/dispersion modelling for long-term evaluations, again mostly in cities. Recently, the availability of finely resolved satellite data has provided an opportunity to estimate daily concentrations of air pollutants over wide spatio-temporal domains. Italy lacks a robust and validated high-resolution spatio-temporally resolved model of particulate matter. The complex topography and the air mixture from both natural and anthropogenic sources are great challenges that are difficult to address. We combined finely resolved data on Aerosol Optical Depth (AOD) from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, ground-level PM10 measurements, land-use variables and meteorological parameters into a four-stage mixed model framework to derive estimates of daily PM10 concentrations on a 1-km2 grid over Italy for the years 2006-2012. We checked the performance of our models by applying 10-fold cross-validation (CV) for each year. Our models displayed good fit, with mean CV-R2 = 0.65 and little bias (average slope of predicted vs observed PM10 = 0.99). Out-of-sample predictions were more accurate in Northern Italy (Po valley) and large conurbations (e.g. Rome), for background monitoring stations, and in the winter season. The resulting concentration maps showed the highest average PM10 levels in specific areas (Po river valley, main industrial and metropolitan areas), with decreasing trends over time. Our daily predictions of PM10 concentrations across the whole of Italy will allow, for the first time, estimation of long-term and short-term effects of air pollution nationwide, even in areas lacking monitoring data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Wenjuan Li
2015-11-01
Full Text Available The leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR) are essential climate variables in surface process models. FCOVER is also important for separating vegetation and soil in energy balance processes. Currently, several LAI, FAPAR and FCOVER satellite products are derived at moderate to coarse spatial resolution. The launch of Sentinel-2 in 2015 will provide data at decametric resolution with a high revisit frequency, allowing canopy functioning to be quantified at local to regional scales. The aim of this study is thus to evaluate the performance of a neural-network-based algorithm for deriving LAI, FAPAR and FCOVER products at decametric spatial resolution and high temporal sampling. The algorithm is generic, i.e., it is applied without any knowledge of the landcover. A time series of high spatial resolution SPOT4_HRVIR (16 scenes) and Landsat 8 (18 scenes) images acquired in 2013 over the southwestern France site were used to generate the LAI, FAPAR and FCOVER products. For each sensor and each biophysical variable, a neural network was first trained over PROSPECT+SAIL radiative transfer model simulations of top-of-canopy reflectance data for green, red, near-infrared and short-wave infrared bands. Our results show good spatial and temporal consistency between the variables derived from both sensors: almost half the pixels show an absolute difference between SPOT and LANDSAT estimates of less than 0.5 units for LAI, and 0.05 units for FAPAR and FCOVER. Finally, measurements with downward-looking digital hemispherical cameras were collected over the main land cover types to validate the accuracy of the products. Results show that the derived products are strongly correlated with the field measurements (R2 > 0.79), corresponding to RMSE = 0.49 for LAI, RMSE = 0.10 (RMSE = 0.12) for black-sky (white-sky) FAPAR and RMSE = 0.15 for FCOVER. It is concluded that the proposed generic algorithm provides a good
Pal, S.; De Wekker, S.; Emmitt, G. D.
2013-12-01
We present first results on the spatio-temporal variability of atmospheric boundary layer depths obtained with a suite of ground-based and airborne instruments deployed during the first field phase of The Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program (http://www3.nd.edu/~dynamics/materhorn/index.php) at Dugway Proving Ground (DPG, Utah, USA) in Fall 2012. We mainly use high-resolution data collected during selected intensive observation periods by Doppler lidars, a ceilometer, and in-situ measurements from an unmanned aerial vehicle for measurements of atmospheric boundary layer (ABL) depths. In particular, a Navy Twin Otter aircraft flew 6 missions of about 5 hours each during the daytime, collecting remotely sensed (Doppler lidar, TODWL) wind data in addition to in-situ turbulence measurements, which allowed a detailed investigation of the spatial heterogeneity of convective boundary layer turbulence features over a steep isolated mountain with horizontal and vertical scales of about 10 km and 1 km, respectively. Additionally, we use data collected by (1) radiosonde systems at two sites of the Granite Mountain area in DPG (Playa and Sagebrush), (2) sonic anemometers (CSAT-3D) for high-resolution turbulence flux measurements near the ground, (3) pyranometers for incoming solar radiation, and (4) standard meteorological measurements (PTU) obtained near the surface. In this contribution, we discuss and address (1) composites obtained with lidar, ceilometer, micro-meteorological measurements, and radiosonde observations to determine the quasi-continuous regime of ABL depths, growth rates, maximum convective boundary layer (CBL) depths, etc., and (2) the temporal variability of ABL depths during the entire diurnal cycle and the spatial heterogeneity of daytime ABL depths triggered by the underlying orography in the experimental area, to investigate the most plausible mechanisms (e.g. combined effect of diurnal cycle and orographic trigger
Directory of Open Access Journals (Sweden)
Gabriel Valerio
2007-07-01
Throughout the history of humankind, since our first ancestors, tools have represented a means to reach objectives which might otherwise have seemed impossible. In the so-called New Economy, where tangible assets appear to be losing their role as the core element in producing value compared with knowledge, tools have remained at man's side in his daily work. In this article, the author's objective is to describe, in a simple manner, the importance of managing the organization's set of tools or instruments (Instrumental Capital). The characteristic conditions of this New Economy, the way Knowledge Management deals with these new conditions, and the sub-processes that support the management of Instrumental Capital are described.
International Nuclear Information System (INIS)
Anon.
1983-01-01
At this year's particle physics conference at Brighton, a parallel session was given over to instrumentation and detector development. While this work is vital to the health of research and its continued progress, its share of prime international conference time is limited. Instrumentation can be innovative three times — first when a new idea is outlined, secondly when it is shown to be feasible, and finally when it becomes productive in a real experiment, amassing useful data rather than operational experience. Hyams' examples showed that it can take a long time for a new idea to filter through these successive stages, if it ever makes it at all
Directory of Open Access Journals (Sweden)
Qureshi Navid
2017-01-01
Every neutron scattering experiment requires the choice of a suitable neutron diffractometer (or spectrometer in the case of inelastic scattering) with its optimal configuration in order to accomplish the experimental tasks in the most successful way. Most generally, the compromise between the incident neutron flux and the instrumental resolution has to be considered, which depends on the optical devices positioned in the neutron beam path. In this chapter the basic instrumental principles of neutron diffraction will be explained. Examples of different types of experiments and their respective expected results will be shown. Furthermore, the production and use of polarized neutrons will be discussed.
International Nuclear Information System (INIS)
Song, Ung Sup; Kim, Hee Moon; Park, Dae Gyu; Paik, Seung Je; Lee, Hong Gi; Choo, Yong Sun; Hong Kwon Pyo
2004-01-01
Many experimental inspections have been performed to obtain the burnup of fuel. Chemical analysis was popular for this purpose because of its high reliability, but the high radioactivity of the fuel was a severe problem during the destructive procedure. Subsequently, many researchers have studied the calculation of burnup using gamma detectors as a non-destructive method, and methodologies for gamma-scanning tests have been developed alongside improvements in detector accuracy. Generally, Cs-137 and Cs-134 are the standard isotopes used to estimate the burnup of long-term-cooled spent fuel, because their atomic ratio varies linearly with burnup
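The linearity mentioned above amounts to a simple calibration line fitted to known samples; the ratio-burnup pairs below are invented for illustration, not measured spent-fuel data.

```python
import numpy as np

# Hypothetical calibration: the Cs-134/Cs-137 ratio is taken to be linear in
# burnup (illustrative values only).
ratio = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
burnup_MWd_kgU = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

# Fit a degree-1 polynomial: coefficients come back highest degree first.
slope, intercept = np.polyfit(ratio, burnup_MWd_kgU, 1)

# Estimate burnup non-destructively for a newly measured ratio.
measured = 0.07
estimate = slope * measured + intercept
print(round(estimate, 1))
```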
International Nuclear Information System (INIS)
Alderliesten, Tanja; Betgen, Anja; Elkhuizen, Paula H.M.; Vliet-Vroegindeweij, Corine van; Remeijer, Peter
2013-01-01
Purpose: To investigate the heart position variability in deep-inspiration breath-hold (DIBH) radiation therapy (RT) for breast cancer when 3D surface imaging would be used for monitoring the BH depth during treatment delivery. For this purpose, surface setup data were compared with heart setup data. Materials and methods: Twenty patients treated with DIBH-RT after breast-conserving surgery were included. Retrospectively, heart registrations were performed for cone-beam computed tomography (CBCT) to planning CT. Further, breast-surface registrations were performed for a surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. Furthermore, geometric uncertainties of the heart (systematic [Σ] and random [σ]) were estimated relative to the surface registration. Based on these uncertainties, planning organ at risk volume (PRV) margins for the heart were calculated: 1.3Σ − 0.5σ. Results: Moderate correlation between surface and heart setup errors was found: R2 = 0.64, 0.37, and 0.53 in the left–right (LR), cranio-caudal (CC), and anterior–posterior (AP) directions, respectively. When surface imaging would be used for monitoring, the geometric uncertainties of the heart (cm) are [Σ = 0.14, σ = 0.14]; [Σ = 0.66, σ = 0.38]; [Σ = 0.27, σ = 0.19] in LR; CC; AP. This results in PRV margins of 0.11; 0.67; 0.25 cm in LR; CC; AP. Conclusion: When DIBH-RT after breast-conserving surgery is guided by the breast-surface position, PRV margins should be used to take into account the heart-position variability relative to the breast surface.
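Plugging the reported heart uncertainties into the stated recipe (margin = 1.3Σ − 0.5σ, per axis) reproduces the published PRV margins to within rounding:

```python
# Systematic (Sigma) and random (sigma) heart uncertainties in cm, per axis,
# as reported in the abstract.
Sigma = {"LR": 0.14, "CC": 0.66, "AP": 0.27}
sigma = {"LR": 0.14, "CC": 0.38, "AP": 0.19}

# PRV margin recipe from the abstract: 1.3*Sigma - 0.5*sigma.
prv = {ax: 1.3 * Sigma[ax] - 0.5 * sigma[ax] for ax in Sigma}
for ax, m in prv.items():
    print(f"{ax}: {m:.2f} cm")
```

The small residual differences from the published 0.11; 0.67; 0.25 cm are consistent with the inputs themselves being rounded to two decimals.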
Xu, J.-W.; Martin, R. V.; van Donkelaar, A.; Kim, J.; Choi, M.; Zhang, Q.; Geng, G.; Liu, Y.; Ma, Z.; Huang, L.; Wang, Y.; Chen, H.; Che, H.; Lin, P.; Lin, N.
2015-11-01
We determine and interpret fine particulate matter (PM2.5) concentrations in eastern China for January to December 2013 at a horizontal resolution of 6 km from aerosol optical depth (AOD) retrieved from the Korean geostationary ocean color imager (GOCI) satellite instrument. We implement a set of filters to minimize cloud contamination in GOCI AOD. Evaluation of filtered GOCI AOD with AOD from the Aerosol Robotic Network (AERONET) indicates significant agreement with mean fractional bias (MFB) in Beijing of 6.7 % and northern Taiwan of -1.2 %. We use a global chemical transport model (GEOS-Chem) to relate the total column AOD to the near-surface PM2.5. The simulated PM2.5 / AOD ratio exhibits high consistency with ground-based measurements in Taiwan (MFB = -0.52 %) and Beijing (MFB = -8.0 %). We evaluate the satellite-derived PM2.5 versus the ground-level PM2.5 in 2013 measured by the China Environmental Monitoring Center. Significant agreement is found between GOCI-derived PM2.5 and in situ observations in both annual averages (r2 = 0.66, N = 494) and monthly averages (relative RMSE = 18.3 %), indicating GOCI provides valuable data for air quality studies in Northeast Asia. The GEOS-Chem simulated chemical composition of GOCI-derived PM2.5 reveals that secondary inorganics (SO42-, NO3-, NH4+) and organic matter are the most significant components. Biofuel emissions in northern China for heating increase the concentration of organic matter in winter. The population-weighted GOCI-derived PM2.5 over eastern China for 2013 is 53.8 μg m-3, with 400 million residents in regions that exceed the Interim Target-1 of the World Health Organization.
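The core scaling step, relating column AOD to near-surface PM2.5 through a modelled ratio, can be sketched like this (all numbers are made-up placeholders, not actual GOCI or GEOS-Chem values):

```python
import numpy as np

# Illustrative inputs for three grid cells.
goci_aod = np.array([0.45, 0.80, 0.30])      # filtered satellite AOD
model_pm25 = np.array([40.0, 75.0, 22.0])    # modelled surface PM2.5, ug/m3
model_aod = np.array([0.50, 0.90, 0.25])     # modelled column AOD

# Local PM2.5/AOD ratio from the chemical transport model, then applied to
# the observed AOD to obtain satellite-derived PM2.5.
eta = model_pm25 / model_aod
pm25_est = goci_aod * eta
print(np.round(pm25_est, 1))
```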
Directory of Open Access Journals (Sweden)
C. Déandreis
2012-06-01
Full Text Available This paper describes the impact on the sulfate aerosol radiative effects of coupling the radiative code of a global circulation model with a chemistry-aerosol module. With this coupling, temporal variations of sulfate aerosol concentrations influence the estimate of aerosol radiative impacts. Effects of this coupling have been assessed on net fluxes, radiative forcing and temperature for the direct and first indirect effects of sulfate.
The direct effect respond almost linearly to rapid changes in concentrations whereas the first indirect effect shows a strong non-linearity. In particular, sulfate temporal variability causes a modification of the short wave net fluxes at the top of the atmosphere of +0.24 and +0.22 W m^{−2} for the present and preindustrial periods, respectively. This change is small compared to the value of the net flux at the top of the atmosphere (about 240 W m^{−2}. The effect is more important in regions with low-level clouds and intermediate sulfate aerosol concentrations (from 0.1 to 0.8 μg (SO_{4} m^{−3} in our model.
The computation of the aerosol direct radiative forcing is quite straightforward and the temporal variability has little effect on its mean value. In contrast, quantifying the first indirect radiative forcing requires tackling technical issues first. We show that the preindustrial sulfate concentrations have to be calculated with the same meteorological trajectory used for computing the present ones. If this condition is not satisfied, it introduces an error on the estimation of the first indirect radiative forcing. Solutions are proposed to assess radiative forcing properly. In the reference method, the coupling between chemistry and climate results in a global average increase of 8% in the first indirect radiative forcing. This change reaches 50% in the most sensitive regions. However, the reference method is not suited to run long climate
Dankelman, J.; Horeman, T.
2009-01-01
The present invention relates to a surgical instrument for minimally invasive surgery, comprising a handle, a shaft and an actuating part, characterised by a gastight cover surrounding the shaft, wherein the cover is provided with a coupler that has a feed-through opening with a lockable seal,
Brantley, L. Reed, Sr.; Demanche, Edna L.; Klemm, E. Barbara; Kyselka, Will; Phillips, Edwin A.; Pottenger, Francis M.; Yamamoto, Karen N.; Young, Donald B.
This booklet presents some activities to measure various weather phenomena. Directions for constructing a weather station are included. Instruments including rain gauges, thermometers, wind vanes, wind speed devices, humidity devices, barometers, atmospheric observations, a dustfall jar, sticky-tape can, detection of gases in the air, and pH of…
Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling
Raykov, Tenko; Marcoulides, George A.
2012-01-01
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
Directory of Open Access Journals (Sweden)
Luca eCaricchi
2016-04-01
Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages, assuming that the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall-rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
Chapman, A.; Murdin, P.
2000-11-01
Although the division of the zodiac into 360° probably derives from Egypt or Assyria around 2000 BC, there is no surviving evidence of Mesopotamian cultures embodying this division into a mathematical instrument. Almost certainly, however, it was from Babylonia that the Greek geometers learned of the 360° circle, and by c. 80 BC they had incorporated it into that remarkably elaborate device gener...
International Nuclear Information System (INIS)
Anon.
1976-01-01
Areas being investigated for instrumentation improvement during low-level pollution monitoring include laser opto-acoustic spectroscopy, x-ray fluorescence spectroscopy, optical fluorescence spectroscopy, liquid crystal gas detectors, advanced forms of atomic absorption spectroscopy, electro-analytical chemistry, and mass spectroscopy. Emphasis is also directed toward development of physical methods, as opposed to conventional chemical analysis techniques for monitoring these trace amounts of pollution related to energy development and utilization
Asymptotics of diagonal elements of projection matrices under many instruments/regressors
Czech Academy of Sciences Publication Activity Database
Anatolyev, Stanislav; Yaskov, P.
2017-01-01
Vol. 33, No. 3 (2017), pp. 717-738. ISSN 0266-4666. Institutional support: Progres-Q24. Keywords: instrumental variable estimation; inference; models. Subject RIV: AH - Economics. OECD field: Applied Economics, Econometrics. Impact factor: 1.011, year: 2016
International Nuclear Information System (INIS)
Mack, D.A.
1976-09-01
It is essential to any research activity that accurate and efficient measurements be made for the experimental parameters under consideration for each individual experiment or test. Satisfactory measurements in turn depend upon having the necessary instruments and the capability of ensuring that they are performing within their intended specifications. This latter requirement can only be achieved by providing an adequate maintenance facility, staffed with personnel competent to understand the problems associated with instrument adjustment and repair. The Instrument Repair Shop at the Lawrence Berkeley Laboratory is designed to achieve this end. The organization, staffing and operation of this system is discussed. Maintenance policy should be based on studies of (1) preventive vs. catastrophic maintenance, (2) records indicating when equipment should be replaced rather than repaired and (3) priorities established to indicate the order in which equipment should be repaired. Upon establishing a workable maintenance policy, the staff should be instructed so that they may provide appropriate scheduled preventive maintenance, calibration and corrective procedures, and emergency repairs. The education, training and experience of the maintenance staff is discussed along with the organization for an efficient operation. The layout of the various repair shops is described in the light of laboratory space and financial constraints
Industrial instrumentation principles and design
Padmanabhan, Tattamangalam R
2000-01-01
Pneumatic, hydraulic and allied instrumentation schemes have given way to electronic schemes in recent years thanks to the rapid strides in electronics and allied areas. Principles, design and applications of such state-of-the-art instrumentation schemes form the subject matter of this book. Through representative examples, the basic building blocks of instrumentation schemes are identified and each of these building blocks discussed in terms of its design and interface characteristics. The common generic schemes synthesized with such building blocks are dealt with subsequently. This forms the scope of Part I. The focus in Part II is on application. Displacement and allied instrumentation, force and allied instrumentation and process instrumentation in terms of temperature, flow, pressure level and other common process variables are dealt with separately and exhaustively. Despite the diversity in the sensor principles and characteristics and the variety in the applications and their environments, it is possib...
Fuzzy associative memories for instrument fault detection
International Nuclear Information System (INIS)
Heger, A.S.
1996-01-01
A fuzzy logic instrument fault detection scheme is developed for systems having two or three redundant sensors. In the fuzzy logic approach the deviation between each signal pairing is computed and classified into three fuzzy sets. A rule base is created allowing the human perception of the situation to be represented mathematically. Fuzzy associative memories are then applied. Finally, a defuzzification scheme is used to find the centroid location, and hence the signal status. Real-time analyses are carried out to evaluate the instantaneous signal status as well as the long-term results for the sensor set. Instantaneous signal validation results are used to compute a best estimate for the measured state variable. The long-term sensor validation method uses a frequency fuzzy variable to determine the signal condition over a specific period. To corroborate the methodology synthetic data representing various anomalies are analyzed with both the fuzzy logic technique and the parity space approach. (Author)
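A toy version of the deviation-classification step might look as follows; the membership breakpoints and the normalizing span are assumed values for illustration, not those of the paper.

```python
# Classify the deviation between a pair of redundant sensors into three fuzzy
# sets (SMALL, MEDIUM, LARGE) using triangular membership functions.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_deviation(s1, s2, span=10.0):
    d = abs(s1 - s2) / span          # deviation normalized by the sensor span
    return {
        "SMALL": tri(d, -0.2, 0.0, 0.2),
        "MEDIUM": tri(d, 0.1, 0.25, 0.4),
        "LARGE": tri(d, 0.3, 1.0, 1.7),
    }

# Close readings map mostly into SMALL, i.e. the signal pair agrees.
m = classify_deviation(5.0, 5.4)
print(max(m, key=m.get))
```

In the full scheme these memberships would feed a rule base and a centroid defuzzification step to produce a signal-status estimate.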
Skrtic, Stanko; Cabrera, Claudia; Olsson, Marita; Schnecke, Volker; Lind, Marcus
2017-03-01
We evaluated the association between glycaemic control and the risk of heart failure (HF) in a contemporary cohort of persons followed after diagnosis of type 2 diabetes (T2D). Persons with T2D diagnosed between 1998 and 2012 were retrieved from the Clinical Practice Research Datalink in the UK and followed from diagnosis until the event of HF, mortality, dropout from the database for any other reason, or the end of the study on 1 July 2015. The association between each of three different haemoglobin A1c (HbA1c) metrics and HF was estimated using adjusted proportional hazards models. In the overall cohort (n=94 332), the increased risk for HF per 1% (10 mmol/mol) increase in HbA1c was 1.15 (95% CI 1.13 to 1.18) for updated mean HbA1c, and 1.06 (1.04 to 1.07) and 1.06 (1.04 to 1.08) for baseline HbA1c and updated latest HbA1c, respectively. When categorised, the hazard ratio (HR) for the updated mean HbA1c in relation to HF became higher than for baseline and updated latest HbA1c above HbA1c levels of 9%, but did not differ at lower HbA1c levels. The updated latest variable showed an increased risk for HbA1c <6% (42 mmol/mol) of 1.16 (1.07 to 1.25), relative to the 6-7% category, while the HRs for updated mean and baseline HbA1c showed no such J-shaped pattern. Hyperglycaemia is still a risk factor for HF in persons with T2D, of similar magnitude as in earlier cohorts. Such a relationship exists for current glycaemic levels, at diagnosis and at the overall level, but the pattern differs among these variables. Published by the BMJ Publishing Group Limited.
Directory of Open Access Journals (Sweden)
Jonathan E. Leightner
2012-01-01
The omitted variables problem is one of regression analysis's most serious problems. The standard approach to the omitted variables problem is to find instruments, or proxies, for the omitted variables, but this approach makes strong assumptions that are rarely met in practice. This paper introduces best projection reiterative truncated projected least squares (BP-RTPLS), the third generation of a technique that solves the omitted variables problem without using proxies or instruments. This paper presents a theoretical argument that BP-RTPLS produces unbiased reduced-form estimates when there are omitted variables. This paper also provides simulation evidence showing that OLS produces between 250% and 2450% more errors than BP-RTPLS when there are omitted variables and when measurement and round-off error is 1 percent or less. In an example, the government spending multiplier is estimated using annual data for the USA between 1929 and 2010.
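The severity of the omitted-variables problem is easy to demonstrate by simulation. This sketch illustrates the bias OLS suffers when a relevant variable is left out, not the BP-RTPLS estimator itself; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# q drives both x and y; omitting it from the regression biases the slope on x.
q = rng.normal(size=n)
x = 0.8 * q + rng.normal(size=n)
y = 2.0 * x + 3.0 * q + rng.normal(size=n)   # true slope on x is 2.0

X_full = np.column_stack([np.ones(n), x, q])  # correctly specified model
X_omit = np.column_stack([np.ones(n), x])     # q omitted

b_full = np.linalg.lstsq(X_full, y, rcond=None)[0]
b_omit = np.linalg.lstsq(X_omit, y, rcond=None)[0]

# The full model recovers a slope near 2.0; the omitted-variable model
# inflates it because x proxies for the missing q.
print(round(b_full[1], 2), round(b_omit[1], 2))
```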
Directory of Open Access Journals (Sweden)
Luis C. J. Moreira
2010-12-01
Full Text Available Given the importance of knowing evapotranspiration (ET) for the rational use of irrigation water in the current context of water scarcity, regional-scale ET estimation algorithms have been developed using remote sensing tools. This study aimed to apply the SEBAL algorithm (Surface Energy Balance Algorithms for Land) to three Landsat 5 satellite images from the second semester of 2006. The images cover irrigated areas, dense native forest and Caatinga vegetation in three regions of the state of Ceará, Brazil (Baixo Acaraú, Chapada do Apodi and Chapada do Araripe). The algorithm calculates hourly evapotranspiration from the latent heat flux, estimated as the residual of the surface energy balance. The ET values obtained in the three regions exceeded 0.60 mm h-1 in irrigated areas and areas of dense native vegetation. Areas of less dense native vegetation showed hourly ET rates of 0.35 to 0.60 mm h-1, and values close to zero were found in degraded areas. Analysis of the hourly evapotranspiration means by Tukey's test at 5% probability revealed significant local as well as regional variability across the state of Ceará.
The OCO-3 Mission: Science Objectives and Instrument Performance
Eldering, A.; Basilio, R. R.; Bennett, M. W.
2017-12-01
The Orbiting Carbon Observatory 3 (OCO-3) will continue global measurements of CO2 and solar-induced chlorophyll fluorescence (SIF) using the flight spare instrument from OCO-2. The instrument is currently being tested, and will be packaged for installation on the International Space Station (ISS) (launch readiness in early 2018). This talk will focus on the science objectives, updated simulations of the science data products, and the outcome of recent instrument performance tests. The low-inclination ISS orbit lets OCO-3 sample the tropics and sub-tropics across the full range of daylight hours with dense observations at northern and southern mid-latitudes (±52°). The combination of these dense CO2 and SIF measurements provides continuity of data for global flux estimates as well as a unique opportunity to address key deficiencies in our understanding of the global carbon cycle. The instrument utilizes an agile, 2-axis pointing mechanism (PMA), providing the capability to look towards the bright reflection from the ocean and validation targets. The PMA also allows for a snapshot mapping mode to collect dense datasets over 100 km by 100 km areas. Measurements over urban centers could aid in making estimates of fossil fuel CO2 emissions. Similarly, the snapshot mapping mode can be used to sample regions of interest for the terrestrial carbon cycle. In addition, there is potential to utilize data from the ISS instruments ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) and GEDI (Global Ecosystem Dynamics Investigation), which measure other key variables of the control of carbon uptake by plants, to complement OCO-3 data in science analysis. In 2017, the OCO-2 instrument was transformed into the ISS-ready OCO-3 payload. The transformed instrument was thoroughly tested and characterized. Key characteristics, such as the instrument line shape (ILS), spectral resolution, and radiometric performance will be described. Analysis of direct sun measurements taken during testing
Unit Root Testing in Heteroscedastic Panels Using the Cauchy Estimator
Demetrescu, Matei; Hanck, Christoph
The Cauchy estimator of an autoregressive root uses the sign of the first lag as instrumental variable. The resulting IV t-type statistic follows a standard normal limiting distribution under the unit root null, even under unconditional heteroscedasticity, if the series to be tested has no
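A sketch of the estimator and its t-statistic under the null (my simulation; the statistic's form is reconstructed from the description above, so treat it as illustrative):

```python
import numpy as np

def cauchy_unit_root_t(y):
    """IV t-type statistic for H0: rho = 1 in y_t = rho*y_{t-1} + e_t,
    instrumenting y_{t-1} with z_t = sign(y_{t-1}) (the Cauchy estimator).
    The statistic is asymptotically standard normal under the null."""
    y = np.asarray(y, float)
    y_lag, y_cur = y[:-1], y[1:]
    z = np.sign(y_lag)
    denom = z @ y_lag                      # equals sum(|y_lag|)
    rho_iv = (z @ y_cur) / denom
    resid = y_cur - rho_iv * y_lag
    sigma = np.sqrt(resid @ resid / len(resid))
    return (rho_iv - 1.0) * denom / (sigma * np.sqrt(len(z)))

rng = np.random.default_rng(1)
t_stat = cauchy_unit_root_t(np.cumsum(rng.normal(size=5000)))  # random walk: H0 true
```

Because the limiting distribution is standard normal, the statistic can be compared directly against normal critical values.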
International Nuclear Information System (INIS)
Fritschen, L.J.; Gay, L.W.
1979-01-01
This book is designed to be used as a text for advanced students and a guide or manual for researchers in the field. The purpose is to present the basic theory of environmental variables and transducers, to report experiences with methodology and use, and to provide certain essential tables. Attention is given to measurements of temperature, soil heat flux, radiation, humidity and moisture, wind speed and direction, and pressure. Data acquisition concepts are summarized
Balidoy Baloloy, Alvin; Conferido Blanco, Ariel; Gumbao Candido, Christian; Labadisos Argamosa, Reginal Jay; Lovern Caboboy Dumalag, John Bart; Carandang Dimapilis, Lee, , Lady; Camero Paringit, Enrico
2018-04-01
Aboveground biomass (AGB) estimation is essential in determining the environmental and economic values of mangrove forests. Biomass prediction models can be developed through integration of remote sensing, field data and statistical models. This study aims to assess and compare the biomass predictor potential of multispectral bands, vegetation indices and biophysical variables that can be derived from three optical satellite systems: Sentinel-2 with 10 m, 20 m and 60 m resolution; RapidEye with 5 m resolution; and PlanetScope with 3 m ground resolution. Field data for biomass were collected from a Rhizophoraceae-dominated mangrove forest in Masinloc, Zambales, Philippines, where 30 test plots (1.2 ha) and 5 validation plots (0.2 ha) were established. Prior to the generation of indices, images from the three satellite systems were pre-processed using atmospheric correction tools in SNAP (Sentinel-2), ENVI (RapidEye) and Python (PlanetScope). The major predictor bands tested are Blue, Green and Red, which are present in all three systems, and the Red-edge band from Sentinel-2 and RapidEye. The tested vegetation index predictors are the Normalized Difference Vegetation Index (NDVI), Soil-adjusted Vegetation Index (SAVI), Green NDVI (GNDVI), Simple Ratio (SR), and Red-edge Simple Ratio (SRre). The study generated prediction models through conventional linear regression and multivariate regression. Higher coefficient of determination (r2) values were obtained using multispectral band predictors for Sentinel-2 (r2 = 0.89) and PlanetScope (r2 = 0.80), and vegetation indices for RapidEye (r2 = 0.92). Multivariate Adaptive Regression Spline (MARS) models performed better than the linear regression models, with r2 ranging from 0.62 to 0.92. Based on the r2 and root-mean-square errors (RMSEs), the best biomass prediction model per satellite was chosen and maps were generated. The accuracy of predicted biomass maps was high for both Sentinel-2 (r2 = 0
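Two of the index predictors are simple band arithmetic; a sketch with illustrative reflectances (the formulas are the standard NDVI/SAVI definitions; the values are not from the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted Vegetation Index with soil-brightness factor L."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Illustrative surface reflectances for two pixels (not values from the study).
nir = np.array([0.45, 0.60])
red = np.array([0.10, 0.08])
v_ndvi = ndvi(nir, red)
v_savi = savi(nir, red)
```

Either index can then enter a regression against plot-level biomass exactly as the multispectral bands do.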
Beam Instrumentation and Diagnostics
Strehl, Peter
2006-01-01
This treatise covers all aspects of the design and the daily operations of a beam diagnostic system for a large particle accelerator. A very interdisciplinary field, it involves contributions from physicists, electrical and mechanical engineers and computer experts alike so as to satisfy the ever-increasing demands for beam parameter variability for a vast range of operation modi and particles. The author draws upon 40 years of research and work, most of them spent as the head of the beam diagnostics group at GSI. He has illustrated the more theoretical aspects with many real-life examples that will provide beam instrumentation designers with ideas and tools for their work.
International Nuclear Information System (INIS)
1984-06-01
RFS or Regles Fondamentales de Surete (Basic Safety Rules) applicable to certain types of nuclear facilities lay down requirements with which compliance, for the type of facilities and within the scope of application covered by the RFS, is considered to be equivalent to compliance with technical French regulatory practice. The object of the RFS is to take advantage of standardization in the field of safety, while allowing for technical progress in that field. They are designed to enable the operating utility and contractors to know the rules pertaining to various subjects which are considered to be acceptable by the Service Central de Surete des Installations Nucleaires, or the SCSIN (Central Department for the Safety of Nuclear Facilities). These RFS should make safety analysis easier and lead to better understanding between experts and individuals concerned with the problems of nuclear safety. The SCSIN reserves the right to modify, when considered necessary, any RFS and specify, if need be, the terms under which a modification is deemed retroactive. The aim of this RFS is to define the type, location and operating conditions for the seismic instrumentation needed to determine promptly the seismic response of nuclear power plant features important to safety, to permit comparison of such response with that used as the design basis.
Meteorological instrumentation
International Nuclear Information System (INIS)
1982-06-01
RFS or ''Regles Fondamentales de Surete'' (Basic Safety Rules) applicable to certain types of nuclear facilities lay down requirements with which compliance, for the type of facilities and within the scope of application covered by the RFS, is considered to be equivalent to compliance with technical French regulatory practice. The object of the RFS is to take advantage of standardization in the field of safety, while allowing for technical progress in that field. They are designed to enable the operating utility and contractors to know the rules pertaining to various subjects which are considered to be acceptable by the ''Service Central de Surete des Installations Nucleaires'' or the SCSIN (Central Department for the Safety of Nuclear Facilities). These RFS should make safety analysis easier and lead to better understanding between experts and individuals concerned with the problems of nuclear safety. The SCSIN reserves the right to modify, when considered necessary, any RFS and specify, if need be, the terms under which a modification is deemed retroactive. The purpose of this RFS is to specify the meteorological instrumentation required at the site of each nuclear power plant equipped with at least one pressurized water reactor.
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Christensen, Ulla; Lynch, John
2005-01-01
It is often of interest to assess how much of the effect of an exposure on a response is mediated through an intermediate variable. However, systematic approaches are lacking, other than assessment of a surrogate marker for the endpoint of a clinical trial. We review a measure of "proportion … of several intermediate variables. Binary or categorical variables can be included directly through threshold models. We call this measure the mediation proportion, that is, the part of an exposure effect on outcome explained by a third, intermediate variable. Two examples illustrate the approach. The first … example is a randomized clinical trial of the effects of interferon-alpha on visual acuity in patients with age-related macular degeneration. In this example, the exposure, mediator and response are all binary. The second example is a common problem in social epidemiology: to find the proportion
International Nuclear Information System (INIS)
Kronenberg, S.; McLaughlin, W.L.; Seibentritt, C.R. Jr.
1986-01-01
An instrument is described for measuring radiation, particularly nuclear radiation, comprising: a radiation-sensitive structure pivoted toward one end and including a pair of elongated solid members contiguously joined together along their length dimensions and having a common planar interface therebetween. One of the pair of members is comprised of radiochromic material whose index of refraction changes due to anomalous dispersion as a result of being exposed to nuclear radiation. The pair of members further has mutually different indices of refraction, with the member having the larger index of refraction further being transparent for the passage of light and of energy therethrough; means located toward the other end of the structure for varying the angle of longitudinal elevation of the pair of members; means for generating and projecting a beam of light into one end of the member having the larger index of refraction. The beam of light is projected toward the planar interface where it is reflected out of the other end of the same member as a first output beam; means projecting a portion of the beam of light into one end of the member having the larger index of refraction where it traverses therethrough without reflection and out of the other end of the same member as a second output beam; and means adjacent the structure for receiving the first and second output beams, whereby a calibrated change in the angle of elevation of the structure between positions of equal intensity of the first and second output beams prior to and following exposure provides a measure of the radiation sensed, due to a change of refraction of the radiochromic material.
Directory of Open Access Journals (Sweden)
Sudipta Bhattacharya
2018-06-01
Full Text Available Recurrent adverse events, once they occur, often continue for some duration of time in clinical trials, and the number of events along with their durations is clinically considered a measure of the severity of the disease under study. While methods are available for analyzing recurrent events or durations, or for analyzing both side by side, no effort has been made so far to combine them into a single measure. However, such a single-valued combined measure may help clinicians assess the overall effect of a recurrent incident, comprising both events and durations. A non-parametric approach is adopted here to develop an estimator for the combined rate of both the recurrence of events and event continuation, that is, the duration per event. The proposed estimator produces a single numerical value, the interpretation and meaningfulness of which are discussed through the analysis of a real-life clinical dataset. The algebraic expression of the variance is derived, asymptotic normality of the estimator is noted, and a demonstration is provided of how the estimator can be used in the setup of testing a statistical hypothesis. Further possible development of the estimator is also noted, to adjust for the dependence of event occurrences on the history of the process generating recurrent events through covariates, and for the case of dependent censoring. Keywords: Recurrent events, Duration per event, Intensity, Nelson-Aalen estimator
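The two ingredients the combined measure brings together can be sketched on toy data (the paper's actual estimator and its variance expression are not reproduced here):

```python
import numpy as np

# Toy per-subject data: follow-up time, number of recurrent events, and the
# total time spent in events. Illustrative numbers, not the paper's dataset.
followup = np.array([10.0, 8.0, 12.0])     # person-time at risk
n_events = np.array([3, 1, 4])             # recurrent event counts
event_time = np.array([1.5, 0.4, 2.6])     # total duration of events

event_rate = n_events.sum() / followup.sum()            # events per unit time
duration_per_event = event_time.sum() / n_events.sum()  # mean event duration
```

A combined measure, as the abstract describes, would summarise both quantities in a single value rather than reporting them side by side.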
Energy Technology Data Exchange (ETDEWEB)
Verhoef, J.P.; Leendertse, G.P. [ECN Wind, Petten (Netherlands)
2001-04-01
This document presents the results of a literature survey on Identification, Specification and Estimation (ISE) techniques for variables within the SiteParIden project. In addition to an overview of the general techniques, an overview is given of EU-funded wind energy projects where some of these techniques have been applied. The main problem in applications such as power performance assessment and site calibration is to establish an appropriate model for predicting the dependent variable of interest with the aid of measured independent (explanatory) variables. In these applications, detailed knowledge of what the relevant variables are and how precisely they should appear in the model is typically missing. Therefore, the identification (of variables) and the specification (of the model relation) are important steps in the model building phase. For the determination of the parameters in the model, a reliable estimation technique is required. In EU-funded wind energy projects, linear regression is the most commonly applied estimation tool. Linear regression may fail to find reliable parameter estimates when the model variables are strongly correlated, either due to the experimental set-up or because of their particular appearance in the model. This situation of multicollinearity sometimes results in unrealistic parameter values, e.g. with the wrong algebraic sign. It is concluded that different approaches, such as multi-binning, can provide a better way of identifying the relevant variables. However, further research in these applications is needed, and it is recommended that alternative methods (neural networks, singular value decomposition, etc.) also be tested for their usefulness in a succeeding project. Increased interest in complex terrain, as feasible locations for wind farms, has also emphasised the need for adequate models. A common standard procedure to prescribe the statistical
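The wrong-sign pathology under multicollinearity is easy to reproduce; a small simulation (mine, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear regressor
y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.normal(size=n)

X = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Individual coefficients are unstable (possibly wrong-signed) under
# multicollinearity, but their sum remains well identified.
beta_sum = float(beta.sum())
```

The sum of the coefficients stays near the true value of 2.0 even when the individual estimates wander, which is why binning or dimension-reduction approaches can be more informative than raw regression coefficients here.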
Neutron-multiplication measurement instrument
Energy Technology Data Exchange (ETDEWEB)
Nixon, K.V.; Dowdy, E.J.; France, S.W.; Millegan, D.R.; Robba, A.A.
1982-01-01
The Advanced Nuclear Technology Group of the Los Alamos National Laboratory is now using intelligent data-acquisition and analysis instrumentation for determining the multiplication of nuclear material. Earlier instrumentation, such as the large NIM-crate systems, depended on house power and required additional computation to determine multiplication or to estimate error. The portable, battery-powered multiplication measurement unit, with advanced computational power, acquires data, calculates multiplication, and completes error analysis automatically. Thus, the multiplication is determined easily and an available error estimate enables the user to judge the significance of results.
DEFF Research Database (Denmark)
Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke
2012-01-01
BACKGROUND: Telomeres, the protective caps of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent … steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. RESULTS: … to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. CONCLUSION: Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control …
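For context, relative telomere length from qPCR is commonly expressed as a T/S ratio via the 2^-ΔΔCt approach; a generic sketch (textbook formula, not the four assays evaluated in the paper):

```python
def ts_ratio(ct_telo_sample: float, ct_scg_sample: float,
             ct_telo_ref: float, ct_scg_ref: float) -> float:
    """T/S ratio of a sample relative to a reference DNA, normalising the
    telomere Ct by a single-copy gene (scg) Ct: 2 ** -(ddCt)."""
    d_sample = ct_telo_sample - ct_scg_sample
    d_ref = ct_telo_ref - ct_scg_ref
    return 2.0 ** -(d_sample - d_ref)

# Hypothetical Ct values: the sample amplifies telomeric product one cycle
# earlier than the reference, implying roughly twice the telomeric content.
ts = ts_ratio(ct_telo_sample=12.0, ct_scg_sample=20.0,
              ct_telo_ref=13.0, ct_scg_ref=20.0)
```

The quality-control issues the paper raises (amplification efficiency, quantification method) all enter through how these Ct values are derived.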
Hayashi, Yoshihiro; Otoguro, Saori; Miura, Takahiro; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2014-01-01
A multivariate statistical technique was applied to clarify the causal correlation between variables in the manufacturing process and the residual stress distribution of tablets. Theophylline tablets were prepared according to a Box-Behnken design using the wet granulation method. Water amount (X1), kneading time (X2), lubricant-mixing time (X3), and compression force (X4) were selected as design variables. The Drucker-Prager cap (DPC) model was selected as the method for modeling the mechanical behavior of pharmaceutical powders. Simulation parameters, such as Young's modulus, Poisson's ratio, internal friction angle, plastic deformation parameters, and initial density of the powder, were measured. Multiple regression analysis demonstrated that the simulation parameters were significantly affected by the process variables. The constructed DPC models were fed into an analysis using the finite element method (FEM), and the mechanical behavior of pharmaceutical powders during the tableting process was analyzed using the FEM. The results of this analysis revealed that the residual stress distribution of tablets increased with increasing X4. Moreover, an interaction between X2 and X3 also had an effect on the shear and x-axial residual stress of tablets. Bayesian network analysis revealed causal relationships between the process variables, simulation parameters, residual stress distribution, and pharmaceutical responses of tablets. These results demonstrated the potential of the FEM as a tool to help improve our understanding of the residual stress of tablets and to optimize process variables, which not only affect tablet characteristics but also pose risks of causing tableting problems.
Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun
2018-03-13
Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms, based on data from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold standard measures (Cosmed K4b2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics were also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) were conducted to compare the mean rank and absolute agreement of outcome metrics estimated by each of the devices in comparison with the designated gold standard measurements. The sensor type, sensor location
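A minimal sketch of the omnibus comparison step (hypothetical numbers, not the study's data; the Games-Howell post-hoc tests are omitted):

```python
from scipy.stats import kruskal

# Hypothetical step-count errors (device minus manual tally) per participant,
# grouped as in the study: healthy, incomplete SCI, stroke.
healthy = [5, -3, 2, 4, 0, 1]
isci = [30, 25, 40, 28, 35, 33]
stroke = [20, 18, 22, 25, 27, 30]

# Kruskal-Wallis tests whether the three groups share the same distribution
# of errors, using ranks rather than raw values.
stat, p = kruskal(healthy, isci, stroke)
```

A small p-value here would indicate, as the study found, that proprietary-algorithm accuracy differs systematically across clinical groups.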
Directory of Open Access Journals (Sweden)
Vlasta Bari
2014-09-01
Full Text Available Entropy-based complexity of cardiovascular variability at short time scales is largely dependent on noise and/or the action of neural circuits operating at high frequencies. This study proposes a technique for canceling fast variations from cardiovascular variability, thus limiting the effect of these overwhelming influences on entropy-based complexity. The low-pass filtering approach is based on the computation of the fastest intrinsic mode function via empirical mode decomposition (EMD) and its subtraction from the original variability. Sample entropy was exploited to estimate complexity. The procedure was applied to heart period (HP) and QT (interval from Q-wave onset to T-wave end) variability derived from 24-hour Holter recordings in 14 non-mutation carriers (NMCs) and 34 mutation carriers (MCs), subdivided into 11 asymptomatic MCs (AMCs) and 23 symptomatic MCs (SMCs). All individuals belonged to the same family developing long QT syndrome type 1 (LQT1) via the KCNQ1-A341V mutation. We found that complexity indexes computed over EMD-filtered QT variability differentiated AMCs from NMCs and detected the effect of beta-blocker therapy, while complexity indexes calculated over EMD-filtered HP variability separated AMCs from SMCs. The EMD-based filtering method enhanced features of cardiovascular control that otherwise would have remained hidden by the dominant presence of noise and/or fast physiological variations, thus improving classification in LQT1.
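Sample entropy, the complexity index used above, can be sketched as follows (a common textbook formulation, not the authors' exact implementation; tolerance r is taken as a fraction of the series SD):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates and A pairs of length-(m+1) templates matching
    within tolerance r (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, float)
    tol = r * x.std()
    n = len(x)

    def match_count(mm):
        # Templates of length mm starting at indices 0 .. n-m-1.
        t = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
se_noise = sample_entropy(rng.normal(size=300))                     # irregular
se_sine = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 300)))   # regular
```

A regular series yields many more length-(m+1) matches relative to length-m matches than noise does, hence lower entropy: exactly the property the EMD pre-filtering is meant to protect from high-frequency contamination.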
Park, Tae-Ryong; Brooks, John M; Chrischilles, Elizabeth A; Bergus, George
2008-01-01
We contrast methods for assessing the health effects of a treatment rate change when treatment benefits are heterogeneous across patients, using antibiotic prescribing for children with otitis media (OM) in Iowa Medicaid as the empirical example. Instrumental variable (IV) and linear probability model (LPM) approaches are used to estimate the effect of antibiotic treatment on cure probabilities for children with OM in Iowa Medicaid. Local-area physician supply per capita is the instrument in the IV models. Estimates are contrasted in terms of their ability to support inferences for patients whose treatment choices may be affected by a change in population treatment rates. The instrument was positively related to the probability of being prescribed an antibiotic. LPM estimates showed a positive effect of antibiotics on OM patient cure probability, while IV estimates showed no relationship between antibiotics and patient cure probability. LPM estimation yields the average effect of the treatment on patients who were treated; IV estimation yields the average effect for patients whose treatment choices were affected by the instrument. As antibiotic treatment effects are heterogeneous across OM patients, our estimates from these approaches are aligned with clinical evidence and theory. The average estimate for treated patients (higher severity) from the LPM model is greater than the estimate for patients whose treatment choices are affected by the instrument (lower severity) from the IV models. Based on our IV estimates, it appears that lowering antibiotic use in OM patients in Iowa Medicaid did not result in lost cures.
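A toy simulation (not the Iowa Medicaid data) of why naive treated-vs-untreated comparisons and IV estimates diverge when an unobserved severity variable confounds treatment choice:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

z = rng.integers(0, 2, n)              # instrument (affects treatment only)
sev = rng.normal(size=n)               # unobserved severity (confounder)
treat = (z + sev + rng.normal(size=n) > 0).astype(float)
y = 0.2 * treat - 0.5 * sev + rng.normal(size=n)   # true treatment effect = 0.2

def naive_estimate(y, x):
    """Difference in mean outcome between treated and untreated
    (confounded here, because sicker patients are treated more)."""
    return y[x == 1].mean() - y[x == 0].mean()

def iv_estimate(y, x, z):
    """Wald/IV estimate: cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

naive = naive_estimate(y, treat)
iv = iv_estimate(y, treat, z)
```

Because treated patients are sicker, the naive contrast is badly biased (here it even flips sign), while the IV estimate recovers the true effect for patients whose choices respond to the instrument.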
E. Cristiano; M.-C. ten Veldhuis; N. van de Giesen
2017-01-01
In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological res...
DEFF Research Database (Denmark)
Sørensen, Flemming Brandt; Ottosen, P D
1991-01-01
The volume-weighted, mean nuclear volume (nuclear vv) may be estimated without any assumptions regarding nuclear shape using modern stereological techniques. As a part of an investigation concerning the prospects of nuclear vv for classification and malignancy grading of cutaneous melanocytic tum...
Casper, Patricia A.; Kantowitz, Barry H.
1988-01-01
Multiple approaches are necessary for understanding and measuring workload. In particular, physiological systems identifiable by employing cardiac measures are related to cognitive systems. One issue of debate in measuring cardiac output is the grain of analysis used in recording and summarizing data. Various experiments are reviewed, the majority of which were directed at supporting or contradicting Lacey's intake-rejection hypothesis. Two of the experiments observed heart rate in operational environments and found virtually no changes associated with mental load. The major problems facing researchers using heart rate variability, or sinus arrhythmia, as a dependent measure have been associated with valid and sensitive scoring and with preventing contamination of observed results by influences unrelated to cognition. Spectral analysis of heart rate variability offers two useful procedures: analysis from the time domain and analysis from the frequency domain. Most recently, data have been collected in a divided attention experiment, the performance measures and cardiac measures of which are detailed.
B Brahmantiyo; L.H Prasetyo; A.R Setioko; R.H Mulyono
2003-01-01
A study on morphological body conformation of Alabio, Bali, Khaki Campbell, Mojosari and Pegagan ducks was carried out to determine the genetic distance and discriminant variables. This research was held in Research Institute for Animal Production, Ciawi, Bogor using 65 Alabio ducks, 40 Bali ducks, 36 Khaki Campbell ducks, 60 Mojosari ducks and 30 Pegagan ducks. Seven different body parts were measured, they were the length of femur, tibia, tarsometatarsus, the circumference of tarsometatarsu...
DEFF Research Database (Denmark)
Phuong, H N; Martin, O; de Boer, I J M
2015-01-01
, body reserve usage, and growth for different genotypes of cow. Moreover, it can be used to separate genetic variability in performance between individual cows from environmental noise. The model enables simulation of the effects of a genetic selection strategy on lifetime efficiency of individual cows …, which has a main advantage of including the rearing costs, and thus, can be used to explore the impact of future selection on animal performance and efficiency.
Ohsawa, Takeo
2015-01-01
The purpose of this monograph is to present the current status of a rapidly developing part of several complex variables, motivated by the applicability of effective results to algebraic geometry and differential geometry. Highlighted are the new precise results on the L² extension of holomorphic functions. In Chapter 1, the classical questions of several complex variables motivating the development of this field are reviewed after necessary preparations from the basic notions of those variables and of complex manifolds such as holomorphic functions, pseudoconvexity, differential forms, and cohomology. In Chapter 2, the L² method of solving the d-bar equation is presented emphasizing its differential geometric aspect. In Chapter 3, a refinement of the Oka–Cartan theory is given by this method. The L² extension theorem with an optimal constant is included, obtained recently by Z. Błocki and by Q.-A. Guan and X.-Y. Zhou separately. In Chapter 4, various results on the Bergman kernel are presented, includi...
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2014-11-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and a meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, and only 30% of the precipitation observed by rain gauges was estimated by the radar. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in the Netherlands. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the
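The reflectivity-rain rate conversion at the heart of the second error group can be sketched with a generic power law (the Marshall-Palmer coefficients a = 200, b = 1.6 below are an illustrative default, not the DSD-derived parameters of the paper):

```python
# Rain rate from radar reflectivity via a power-law Z-R relationship,
# Z = a * R**b, with Z in linear units (mm^6 m^-3) and R in mm/h.

def rain_rate(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Convert reflectivity in dBZ to rain rate in mm/h."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

r40 = rain_rate(40.0)   # roughly 11-12 mm/h under Marshall-Palmer
```

Because a and b depend on the drop size distribution, fixing them for all precipitation types is exactly the error source the paper addresses by tying them to disdrometer-observed DSDs.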
DEFF Research Database (Denmark)
Temming, Axel; Andersen, Niels Gerner
1994-01-01
S=stomach content, T=time after ingestion, R and B=constants. This model allows for various curve shapes, including linear (B=0) and exponential (B=1), and the curve shape (B) was estimated from the data. Meal size was included in the model by modifying the constant R=R′ × M^D, with M=meal size...... in weight and D=constant. When meal size was included in the model, the resulting B values were strongly dependent on the food type and the estimated D values were negatively correlated with B: capelin, B=1.37, D= -1.16; herring, B=0.84, D= -0.57; and prawn, B=0.35, D= -0.14. When meal size was excluded...
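The evacuation model described above, dS/dT = -R·S^B with a meal-size-dependent rate R = R′·M^D, can be sketched numerically. This is an illustrative sketch, not the authors' code; the function name `evacuate` and all parameter values are hypothetical:

```python
import math

def evacuate(s0, r_prime, b, d, dt=0.01, t_end=10.0):
    """Forward-Euler integration of dS/dT = -R * S**B,
    with R = r_prime * s0**d (meal-size effect). Returns (t, S) pairs."""
    r = r_prime * s0 ** d
    s, t, out = s0, 0.0, []
    while t <= t_end and s > 1e-9:
        out.append((t, s))
        s = max(s - r * s ** b * dt, 0.0)
        t += dt
    return out

# B = 1 gives exponential evacuation; B = 0 gives a linear decline.
curve_exp = evacuate(s0=1.0, r_prime=0.5, b=1.0, d=0.0)
curve_lin = evacuate(s0=1.0, r_prime=0.5, b=0.0, d=0.0)
```

With B = 1 the numerical curve tracks S(t) = S0·exp(-R·t); with B = 0 it declines at a constant rate, matching the two limiting shapes named in the abstract.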
Directory of Open Access Journals (Sweden)
Szi-Wen Chen
In this paper, a reweighted ℓ1-minimization based Compressed Sensing (CS) algorithm incorporating the Integral Pulse Frequency Modulation (IPFM) model for spectral estimation of HRV is introduced. Known as a novel sensing/sampling paradigm, the theory of CS asserts that certain signals considered sparse or compressible can be reconstructed from substantially fewer measurements than those required by traditional methods. Our study aims to employ a novel reweighted ℓ1-minimization CS method for deriving the spectrum of the modulating signal of the IPFM model from incomplete RR measurements for HRV assessment. To evaluate the performance of HRV spectral estimation, a quantitative measure, referred to as the Percent Error Power (PEP), which measures the percentage of difference between the true spectrum and the spectrum derived from the incomplete RR dataset, was used. We studied the performance of spectral reconstruction from incomplete simulated and real HRV signals by experimentally truncating a number of RR data in the top portion, in the bottom portion, and in a random order from the original RR column vector. As a result, for up to 20% data truncation/loss the proposed reweighted ℓ1-minimization CS method produced, on average, 2.34%, 2.27%, and 4.55% PEP in the top, bottom, and random data-truncation cases, respectively, on Autoregressive (AR) model derived simulated HRV signals. Similarly, for up to 20% data loss the proposed method produced 5.15%, 4.33%, and 0.39% PEP in the top, bottom, and random data-truncation cases, respectively, on a real HRV database drawn from PhysioNet. Moreover, results generated by a number of intensive numerical experiments all indicated that the reweighted ℓ1-minimization CS method always achieved the most accurate and high-fidelity HRV spectral estimates in every aspect, compared with the ℓ1-minimization based method and Lomb's method used for estimating the spectrum of HRV from
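The PEP measure could be formalized in several ways; one plausible reading (error power as a percentage of the reference spectral power) is sketched below. The function name and the exact normalization are assumptions, not taken from the paper:

```python
def percent_error_power(p_true, p_est):
    """Percent Error Power: energy of the spectral error expressed as a
    percentage of the energy of the reference spectrum (one plausible
    formalization of the PEP measure described above)."""
    num = sum((a - b) ** 2 for a, b in zip(p_true, p_est))
    den = sum(a ** 2 for a in p_true)
    return 100.0 * num / den

# identical spectra give 0% error
assert percent_error_power([1, 2, 3], [1, 2, 3]) == 0.0
```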
Directory of Open Access Journals (Sweden)
R. Zhuravlev
2011-10-01
In this work we propose an approach to solving a source estimation problem based on representing carbon dioxide surface emissions as a linear combination of a finite number of pre-computed empirical orthogonal functions (EOFs). We used the National Institute for Environmental Studies (NIES) transport model for computing response functions and a Kalman filter for estimating carbon dioxide emissions. Our approach produces results similar to those of other models participating in the TransCom3 experiment.
Using the EOFs we can estimate surface fluxes at higher spatial resolution, while keeping the dimensionality of the problem comparable with that of the regions approach. This also allows us to avoid potentially artificial sharp gradients in the fluxes between pre-defined regions. Given the same error structure, the EOF results generally match observations more closely than those of the traditional method.
Additionally, the proposed approach does not require the additional effort of defining independent, self-contained emission regions.
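As a toy illustration of the EOF idea underlying this approach — expressing a field through a few orthogonal patterns — the leading EOF of a 2-D anomaly cloud can be computed in closed form. This is a minimal pure-Python sketch; the actual method derives EOFs from high-dimensional flux fields and fits their coefficients with a Kalman filter:

```python
import math

def leading_eof(samples):
    """Leading EOF (first principal direction) of 2-D anomaly samples,
    via the closed-form eigenvector of the 2x2 covariance matrix.
    A toy stand-in for the EOF decomposition described above."""
    n = len(samples)
    mx = sum(p[0] for p in samples) / n
    my = sum(p[1] for p in samples) / n
    a = sum((p[0] - mx) ** 2 for p in samples) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in samples) / n
    c = sum((p[1] - my) ** 2 for p in samples) / n
    # largest eigenvalue of [[a, b], [b, c]]
    lam = 0.5 * (a + c) + math.sqrt(0.25 * (a - c) ** 2 + b * b)
    # eigenvector from the second row: b*x + (c - lam)*y = 0
    v = (lam - c, b) if abs(b) > 1e-12 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)
```

Samples scattered along the diagonal yield the unit vector (1/√2, 1/√2), i.e. the single pattern that explains most of the variance.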
Directory of Open Access Journals (Sweden)
B Brahmantiyo
2003-03-01
A study on the morphological body conformation of Alabio, Bali, Khaki Campbell, Mojosari and Pegagan ducks was carried out to determine their genetic distance and the most discriminant variables. This research was held at the Research Institute for Animal Production, Ciawi, Bogor, using 65 Alabio, 40 Bali, 36 Khaki Campbell, 60 Mojosari and 30 Pegagan ducks. Seven different body parts were measured: the lengths of the femur, tibia and tarsometatarsus, the circumference of the tarsometatarsus, and the lengths of the third digit, wing and maxilla. General Linear Models and simple discriminant analysis were used (SAS package program). Male and female Pegagan ducks were morphologically larger than Alabio, Bali, Khaki Campbell and Mojosari ducks. Khaki Campbell ducks were mixed with Bali ducks (47.22%), and Pegagan ducks from an isolated location in South Sumatera were slightly mixed with Alabio and Bali. Mahalanobis genetic distances showed that Bali and Khaki Campbell ducks, as well as Alabio and Mojosari ducks, were similar, with genetic distances of 1.420 and 1.548, respectively. Canonical analysis showed that the most discriminant variables were the lengths of the femur, tibia and third digit.
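The Mahalanobis distance used above to quantify genetic distance between breed means has a simple closed form in two dimensions. A minimal sketch — the function name and the covariance values are illustrative, not the study's data:

```python
import math

def mahalanobis2(u, v, cov):
    """Mahalanobis distance between two 2-D group means, given a pooled
    2x2 covariance matrix cov = [[a, b], [b, c]] (illustrative sketch)."""
    (a, b), (_, c) = cov
    det = a * c - b * b
    inv = ((c / det, -b / det), (-b / det, a / det))  # closed-form 2x2 inverse
    d = (u[0] - v[0], u[1] - v[1])
    q = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
         + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return math.sqrt(q)

# with an identity covariance this reduces to the Euclidean distance
assert mahalanobis2((0, 0), (3, 4), [[1, 0], [0, 1]]) == 5.0
```

Unlike the Euclidean distance, the covariance matrix downweights differences along directions where the traits vary a lot anyway, which is why it is preferred for breed comparisons.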
Evaluating musical instruments
International Nuclear Information System (INIS)
Campbell, D. Murray
2014-01-01
Scientific measurements of sound generation and radiation by musical instruments are surprisingly hard to correlate with the subtle and complex judgments of instrumental quality made by expert musicians.
David Huron; Neesha Anderson; Daniel Shanahan
2014-01-01
Forty-four Western-enculturated musicians completed two studies. The first group was asked to judge the relative sadness of forty-four familiar Western instruments. An independent group was asked to assess a number of acoustical properties for those same instruments. Using the estimated acoustical properties as predictor variables in a multiple regression analysis, a significant correlation was found between those properties known to contribute to sad prosody in speech and the judged sadness ...
A SHARIA RETURN AS AN ALTERNATIVE INSTRUMENT FOR MONETARY POLICY
Directory of Open Access Journals (Sweden)
Ashief Hamam
2011-09-01
Rapid development in the Islamic financial industry has not been supported by sharia monetary policy instruments. This study looks at the possibility of sharia returns as such an instrument. Using both an error correction model and a vector error correction model to estimate data from 2002(1) to 2010(12), this paper finds that the sharia return has the same effect as the interest rate on the demand for money. The shock effect of the sharia return on broad money supply, Gross Domestic Product, and the Consumer Price Index is greater than that of the interest rate. In addition, these three variables become stable more quickly following a shock to the sharia return. Keywords: sharia return, Islamic financial system, vector error correction model. JEL classification numbers: E52, G15
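The error-correction idea behind the models used above can be shown with a stripped-down, single-equation sketch: the adjustment coefficient measures how quickly a variable returns to its long-run relation after a shock. All names and simulated values here are hypothetical, and the long-run coefficient beta is assumed known rather than estimated:

```python
import random

def ecm_alpha(y, x, beta=1.0):
    """OLS estimate of alpha in  dy_t = alpha * (y_{t-1} - beta * x_{t-1}) + e_t,
    a stripped-down sketch of the error correction models described above."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ect = [y[t - 1] - beta * x[t - 1] for t in range(1, len(y))]  # lagged equilibrium error
    sxx = sum(e * e for e in ect)
    sxy = sum(e * d for e, d in zip(ect, dy))
    return sxy / sxx

random.seed(1)
x, y = [0.0], [0.0]
for _ in range(2000):
    # y is pulled 30% of the way back toward x each period (true alpha = -0.3)
    y.append(y[-1] - 0.3 * (y[-1] - x[-1]) + random.gauss(0, 0.1))
    x.append(x[-1] + random.gauss(0, 1))  # random-walk driver
alpha_hat = ecm_alpha(y, x)  # should land close to -0.3
```

A more negative alpha means faster return to equilibrium — the property the abstract summarizes as variables "becoming stable more quickly" after a sharia-return shock.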
Directory of Open Access Journals (Sweden)
Xueyan Hou
2016-10-01
Based on a widely used satellite precipitation product (TRMM Multi-satellite Precipitation Analysis 3B43), we analyzed the spatiotemporal variability of precipitation over the Pacific Ocean for 1998–2014 at seasonal and interannual timescales, separately, using the conventional empirical orthogonal function (EOF), and investigated the seasonal patterns associated with El Niño–Southern Oscillation (ENSO) cycles using season-reliant empirical orthogonal function (SEOF) analysis. Lagged correlation analysis was also applied to derive the lead/lag correlations of the first two SEOF modes for precipitation with the Pacific Decadal Oscillation (PDO) and two types of El Niño, i.e., central Pacific (CP) El Niño and eastern Pacific (EP) El Niño. We found that: (1) the first two seasonal EOF modes for precipitation represent the annual cycle of precipitation variations for the Pacific Ocean and the first interannual EOF mode shows the spatiotemporal variability associated with ENSO; (2) the first SEOF mode for precipitation is simultaneously associated with the development of El Niño and most likely coincides with CP El Niño. The second SEOF mode lagged behind ENSO by one year and is associated with post-El Niño years. PDO modulates precipitation variability significantly only when ENSO occurs, by strengthening and prolonging the impacts of ENSO; (3) seasonally evolving patterns of the first two SEOF modes represent the consecutive precipitation patterns associated with the entire development of EP El Niño and the following recovery year. The most significant variation occurs over the tropical Pacific, especially in the Intertropical Convergence Zone (ITCZ) and South Pacific Convergence Zone (SPCZ); (4) dry conditions in the western basin of the warm pool and wet conditions along the ITCZ and SPCZ bands during the mature phase of El Niño are associated with warm sea surface temperatures in the central tropical Pacific, and a subtropical anticyclone dominating
Energy Technology Data Exchange (ETDEWEB)
Lafont, S.; Dedieu, G. [CESBIO (CNRS/CNES/UPS), Toulouse (France)]; Kergoat, L. [LET (CNRS/UPS), Toulouse (France)]; Chevillard, A. [CEA Saclay, Gif-sur-Yvette (France). Laboratoire des Sciences du Climat et de l'Environnement]; Karstens, U. [MPI-MET, Hamburg (Germany)]; Kolle, O. [Max-Planck Inst. for Biogeochemistry, Jena (Germany)]
2002-11-01
The Eurosiberian Carbonflux project was designed to address the feasibility of inferring the regional carbon balance over Europe and Siberia from a hierarchy of models and atmospheric CO{sub 2} measurements over the continent. Such atmospheric CO{sub 2} concentrations result from the combination of convective boundary layer dynamics, synoptic events, large-scale transport of CO{sub 2}, and regional surface fluxes, and depend on the variability of these processes in time and space. In this paper we investigate the spatial and temporal variability of the land surface CO{sub 2} fluxes derived from the TURC model. This productivity model is driven by satellite NDVI and forced by ECMWF or REMO meteorology. We first present an analysis of recent CO{sub 2} flux measurements over temperate and boreal forests, which are used to update the TURC model. A strong linear relationship has been found between maximum hourly CO{sub 2} fluxes and the mean annual air temperature, showing that boreal biomes have a lower photosynthetic capacity than temperate ones. Then, model input consistency and simulated CO{sub 2} flux accuracy are evaluated against local measurements from two sites in Russia. Finally, the spatial and temporal patterns of the daily CO{sub 2} fluxes over Eurasia are analysed. We show that, during the growing season (spring and summer), the daily CO{sub 2} fluxes display characteristic spatial patterns of positive and negative fluxes at the synoptic scale. These patterns are found to correspond to cloudy areas (areas with low incoming radiation) and to follow the motion of cloud cover areas over the whole domain. As a consequence, we argue that co-variations of surface CO{sub 2} fluxes and atmospheric transport at the synoptic scale may impact CO{sub 2} concentrations over continents and need to be investigated.
Histogram Estimators of Bivariate Densities
National Research Council Canada - National Science Library
Husemann, Joyce A
1986-01-01
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
Swami, Viren; Furnham, Adrian; Zilkha, Susan
2009-11-01
In the present study, 151 British and 151 French participants estimated their own, their parents' and their partner's overall intelligence and 13 'multiple intelligences.' In accordance with previous studies, men rated themselves as higher on almost all measures of intelligence, but there were few cross-national differences. There were also important sex differences in ratings of parental and partner intelligence. Participants generally believed they were more intelligent than their parents but not their partners. Regressions indicated that participants believed verbal, logical-mathematical, and spatial intelligence to be the main predictors of intelligence. Regressions also showed that participants' Big Five personality scores (in particular, Extraversion and Openness), but not values or beliefs about intelligence and intelligence tests, were good predictors of intelligence. Results were discussed in terms of the influence of gender-role stereotypes.
Directory of Open Access Journals (Sweden)
Héraud-Bousquet Vanina
2012-10-01
Background: Nearly all HIV infections in children worldwide are acquired through mother-to-child transmission (MTCT) during pregnancy, labour, delivery or breastfeeding. The objective of our study was to estimate the number and rate of new HIV diagnoses in children less than 13 years of age in mainland France from 2003 to 2006. Methods: We performed a capture-recapture analysis based on three sources of information: the mandatory HIV case reporting (DOVIH), the French Perinatal Cohort (ANRS-EPF) and a laboratory-based surveillance of HIV (LaboVIH). The missing values of a variable of heterogeneous catchability were estimated through multiple imputation. Log-linear modelling provided estimates of the number of new HIV infections in children, taking into account dependencies between sources and variables of heterogeneous catchability. Results: The three sources observed 216 new HIV diagnoses after record linkage. The number of new HIV diagnoses in children was estimated at 387 (95% CI [271–503]) for 2003–2006, among whom 60% were born abroad. The estimated rate of new HIV diagnoses in children in mainland France was 9.1 per million in 2006 and was 38 times higher in children born abroad than in those born in France. The estimated completeness of the three sources combined was 55.8% (95% CI [42.9–79.7]) and varied according to the source; the completeness of DOVIH (28.4%) and ANRS-EPF (26.1%) was lower than that of LaboVIH (33.3%). Conclusion: Our study provided, for the first time, an estimated annual rate of new HIV diagnoses in children under 13 years old in mainland France. More systematic HIV screening of pregnant women, repeated during pregnancy for women likely to engage in risky behaviour, is needed to optimise the prevention of MTCT. HIV screening of children who migrate to France from countries with high HIV prevalence could be recommended to facilitate early diagnosis and treatment.
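For intuition about capture-recapture, the classic two-source (Chapman) estimator is sketched below; the study itself uses a three-source log-linear model with multiple imputation, which this simple formula does not reproduce. The counts are invented:

```python
def chapman_estimate(n1, n2, m):
    """Two-source capture-recapture (Chapman) estimate of the total
    population size: n1 and n2 cases seen by each source, m seen by both.
    The study above uses a three-source log-linear model; this is the
    simpler two-source analogue shown only for intuition."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# 120 and 150 reported cases with 60 in common: about 298.5 cases in total,
# close to the Lincoln-Petersen value 120 * 150 / 60 = 300
estimate = chapman_estimate(120, 150, 60)
```

The intuition: the overlap between the two sources reveals how incomplete each source is, so the total can be extrapolated beyond what either source saw. Log-linear models generalize this to three or more sources with dependencies.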
Mason, E.
In this instrument review chapter the calibration plans of ESO IR instruments are presented and briefly reviewed, focusing in particular on the case of ISAAC, which was the first IR instrument at the VLT and whose calibration plan served as a prototype for subsequent instruments.
Health physics instrument manual
International Nuclear Information System (INIS)
Gupton, E.D.
1978-08-01
The purpose of this manual is to provide apprentice health physics surveyors, and other operating groups not directly concerned with radiation detection instruments, with a working knowledge of the radiation detection and measuring instruments in use at the Laboratory. The characteristics and applications of the instruments are given. Portable instruments, stationary instruments, personnel monitoring instruments, sample counters, and miscellaneous instruments are described. Information sheets on calibration sources, procedures, and devices are also included. Gamma sources, beta sources, alpha sources, neutron sources, special sources, a gamma calibration device for badge dosimeters, and a calibration device for ionization chambers are described.
Astronomical Instruments in India
Sarma, Sreeramula Rajeswara
The earliest astronomical instruments used in India were the gnomon and the water clock. In the early seventh century, Brahmagupta described ten types of instruments, which were adopted by all subsequent writers with minor modifications. Contact with Islamic astronomy in the second millennium AD led to a radical change. Sanskrit texts began to lay emphasis on the importance of observational instruments. Exclusive texts on instruments were composed. Islamic instruments like the astrolabe were adopted and some new types of instruments were developed. Production and use of these traditional instruments continued, along with the cultivation of traditional astronomy, up to the end of the nineteenth century.
Directory of Open Access Journals (Sweden)
Rumana Aslam
2017-07-01
In the present investigation, healthy and certified seeds of Capsicum annuum were treated with five concentrations of caffeine, i.e. 0.10%, 0.25%, 0.50%, 0.75% and 1.0%. Germination percentage, plant survival and pollen fertility decreased with increasing caffeine concentration. Similarly, root length and shoot length decreased as the concentration increased in the M1 generation, in which different mutants were isolated. In the M2 generation, various flower mutants with changes in the number of sepals and petals and in anther size and colour (trimerous, tetramerous, pentamerous with fused petals, hexamerous, etc.) segregated. Heptamerous flowers and anther changes were not observed at the lowest concentration (0.1%). All these mutants showed significant changes in morphological characters and good breeding values at lower and intermediate concentrations. Mutagenic effectiveness and efficiency were assessed on the basis of M2 flower mutant frequency and generally decreased with increasing mutagen concentration. Cytological aberrations in the mutants showed a decreasing trend at the final meiotic stages. The mutants were further analysed by the RAPD method, and the appearance of polymorphic DNA bands distinguished the flower mutants genotypically. Among 93 bands, 44 were polymorphic, indicating the considerable genetic variation produced by caffeine. These results suggest that the above caffeine concentrations are suitable for inducing genetic variability in Capsicum genotypes.
International Nuclear Information System (INIS)
Schmit, T.M.; Luo, J.; Conrad, J.M.
2011-01-01
U.S. ethanol policies have contributed to changes in the levels and the volatilities of revenues and costs facing ethanol firms. The implications of these policies for optimal investment behavior are investigated through an extension of the real options framework that allows for the consideration of volatility in both revenue and cost components, as well as the correlation between them. The effects of policies affecting plant revenues dominate the effects of those affecting production costs. In the absence of these policies, much of the recent expansionary periods would not have existed and market conditions in the late 1990s would have led to some plant closures. We also show that, regardless of plant size, U.S. ethanol policy has narrowed the distance between the optimal entry and exit curves, implying a narrower range of inactivity and indicating a more volatile evolution for the industry than would have existed otherwise. - Highlights: ► An extended real options framework with two stochastic variables is developed. ► Ethanol expansion largely induced by the revenue-enhancing effects of policy. ► Removing effects of policy changes optimal entry/exit environment considerably. ► To expand US ethanol industry, size of policy contributions needs to grow. ► US ethanol policy has fostered more volatile industry development.
International Nuclear Information System (INIS)
Buscheck, T.A.; Nitao, J.J.
1988-01-01
A central issue to be addressed within the Nevada Nuclear Waste Storage Investigations (NNWSI) is the role which fractures will play as the variably saturated, fractured rock mass surrounding the waste package responds to heating, cooling, and episodic infiltration events. Understanding the role of fractures during such events will, in part, depend on our ability to make geophysical measurements of perturbations in the moisture distribution in the vicinity of fractures. In this study we first examine the details of the perturbation in the moisture distribution in and around a fracture subjected to an episodic infiltration event, and then integrate that behavior over the scale at which moisture measurements are likely to be made during the Engineered Barrier Design Test of the NNWSI project. To model this system we use the TOUGH hydrothermal code and fracture and matrix properties considered relevant to the welded ash flow tuff found in the Topopah Spring member at Yucca Mountain as well as in the Grouse Canyon member within G-Tunnel at the Nevada Test Site. Our calculations provide insight into the anticipated spatial and temporal resolution obtainable through the use of the geophysical techniques being considered. These calculations should prove useful both in planning the implementation of these methods as well as in the interpretation of their results. 41 refs., 28 figs
Granato, Gregory E.
2006-01-01
The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
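The two defining computations of the Kendall-Theil robust line — the slope as the median of all possible pairwise slopes, and the intercept chosen so the line runs through the medians of the data — can be sketched directly. This is a minimal re-implementation of the idea, not the KTRLine program itself:

```python
import statistics

def kendall_theil_line(x, y):
    """Kendall-Theil (Theil-Sen) robust line: slope = median of all
    pairwise slopes; intercept chosen so the line passes through the
    medians of the input data, as described above."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    slope = statistics.median(slopes)
    intercept = statistics.median(y) - slope * statistics.median(x)
    return slope, intercept

# one gross outlier (y = 100) barely moves the fit: slope stays 2, intercept 0
slope, intercept = kendall_theil_line([1, 2, 3, 4, 5], [2, 4, 6, 8, 100])
```

Because one wild point can corrupt at most n-1 of the n(n-1)/2 pairwise slopes, the median slope is resistant to outliers — the property that motivated choosing this method for hydrologic data.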
Troubleshooting in nuclear instruments
International Nuclear Information System (INIS)
1987-06-01
This report on troubleshooting of nuclear instruments is the product of several scientists and engineers, who are closely associated with nuclear instrumentation and with the IAEA activities in the field. The text covers the following topics: Preamplifiers, amplifiers, scalers, timers, ratemeters, multichannel analyzers, dedicated instruments, tools, instruments, accessories, components, skills, interfaces, power supplies, preventive maintenance, troubleshooting in systems, radiation detectors. The troubleshooting and repair of instruments is illustrated by some real examples
Engelhardt, Benjamin; Kschischo, Maik; Fröhlich, Holger
2017-06-01
Ordinary differential equations (ODEs) are a popular approach to quantitatively model molecular networks based on biological knowledge. However, such knowledge is typically limited. Wrongly modelled biological mechanisms, as well as relevant external influence factors that are not included in the model, are likely to manifest in major discrepancies between model predictions and experimental data. Finding the exact reasons for such observed discrepancies can be quite challenging in practice. In order to address this issue, we suggest a Bayesian approach to estimate hidden influences in ODE-based models. The method can distinguish between exogenous and endogenous hidden influences. Thus, we can detect wrongly specified as well as missed molecular interactions in the model. We demonstrate the performance of our Bayesian dynamic elastic-net with several ordinary differential equation models from the literature, such as human JAK-STAT signalling, information processing at the erythropoietin receptor, isomerization of liquid α-pinene, G protein cycling in yeast and UV-B triggered signalling in plants. Moreover, we investigate a set of commonly known network motifs and a gene-regulatory network. Altogether our method supports the modeller in an algorithmic manner to identify possible sources of errors in ODE-based models on the basis of experimental data. © 2017 The Author(s).
Is it feasible to estimate radiosonde biases from interlaced measurements?
Kremser, Stefanie; Tradowsky, Jordis S.; Rust, Henning W.; Bodeker, Greg E.
2018-05-01
Upper-air measurements of essential climate variables (ECVs), such as temperature, are crucial for climate monitoring and climate change detection. Because of the internal variability of the climate system, many decades of measurements are typically required to robustly detect any trend in the climate data record. It is imperative for the records to be temporally homogeneous over many decades to confidently estimate any trend. Historically, records of upper-air measurements were primarily made for short-term weather forecasts and as such are seldom suitable for studying long-term climate change as they lack the required continuity and homogeneity. Recognizing this, the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) has been established to provide reference-quality measurements of climate variables, such as temperature, pressure, and humidity, together with well-characterized and traceable estimates of the measurement uncertainty. To ensure that GRUAN data products are suitable to detect climate change, a scientifically robust instrument replacement strategy must always be adopted whenever there is a change in instrumentation. By fully characterizing any systematic differences between the old and new measurement system a temporally homogeneous data series can be created. One strategy is to operate both the old and new instruments in tandem for some overlap period to characterize any inter-instrument biases. However, this strategy can be prohibitively expensive at measurement sites operated by national weather services or research institutes. An alternative strategy that has been proposed is to alternate between the old and new instruments, so-called interlacing, and then statistically derive the systematic biases between the two instruments. Here we investigate the feasibility of such an approach specifically for radiosondes, i.e. flying the old and new instruments on alternating days. Synthetic data sets are used to explore the
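A naive version of the interlacing idea can be simulated: if atmospheric variability averages out over enough alternating flights, the difference of the two instruments' sample means recovers the inter-instrument bias. This toy sketch ignores the statistical refinements investigated in the study; all numbers (biases, noise levels, flight counts) are invented:

```python
import random

def interlaced_bias(old_obs, new_obs):
    """Naive estimate of the inter-instrument bias from interlaced
    (alternating-day) measurements: the difference of the two sample
    means. The study above explores far more careful statistics; this
    sketch only shows why interlacing can work at all when day-to-day
    atmospheric variability averages out."""
    return sum(new_obs) / len(new_obs) - sum(old_obs) / len(old_obs)

random.seed(0)
truth = [random.gauss(220, 5) for _ in range(2000)]            # "true" daily temperature (K)
old = [t + 0.5 + random.gauss(0, 0.2) for t in truth[0::2]]    # old sonde: +0.5 K bias (assumed)
new = [t - 0.3 + random.gauss(0, 0.2) for t in truth[1::2]]    # new sonde: -0.3 K bias (assumed)
est = interlaced_bias(old, new)  # expected near -0.8 K, blurred by atmospheric variability
```

The residual scatter of `est` around the true -0.8 K shrinks only as the square root of the number of flights, which is exactly why the feasibility of the interlacing strategy is a non-trivial question.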
Directory of Open Access Journals (Sweden)
Stanisław Zajączkowski
It has long been suggested that reactive oxygen species (ROS) play a role in oxygen sensing via peripheral chemoreceptors, which would imply their involvement in chemoreflex activation and autonomic regulation of heart rate. We hypothesize that antioxidants affect neurogenic cardiovascular regulation through activation of the chemoreflex, which results in increased control of the sympathetic mechanism regulating heart rhythm. The activity of xanthine oxidase (XO), which is among the major endogenous sources of ROS in the rat, has been shown to increase during hypoxia and promote oxidative stress. However, the mechanism by which XO inhibition affects neurogenic regulation of heart rhythm is still unclear. The study aimed to evaluate the effects of allopurinol-driven inhibition of XO on autonomic heart regulation in rats exposed to hypoxia followed by hyperoxia, using heart rate variability (HRV) analysis. Sixteen conscious male Wistar rats (350 g), control-untreated (N = 8) and pretreated with the XO inhibitor allopurinol (5 mg/kg, followed by 50 mg/kg, administered intraperitoneally; N = 8), were exposed to controlled hypobaric hypoxia (1 h) in order to activate the chemoreflex. The treatment was followed by 1 h of hyperoxia (chemoreflex suppression). Time series of 1024 RR intervals were extracted from 4 kHz ECG recordings for heart rate variability (HRV) analysis in order to calculate the following time-domain parameters: mean RR interval (RRi), SDNN (standard deviation of all normal NN intervals) and rMSSD (square root of the mean of the squared differences between adjacent NN intervals), and frequency-domain parameters (FFT method): TSP (total spectral power) as well as low- and high-frequency band powers (LF and HF). At the end of the experiment we used rat plasma to evaluate the enzymatic activity of XO and markers of oxidative stress: protein carbonyl group and 8-isoprostane concentrations. The enzymatic activities of superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) were measured in erythrocyte
Ziółkowski, Wiesław; Badtke, Piotr; Zajączkowski, Miłosz A.; Flis, Damian J.; Figarski, Adam; Smolińska-Bylańska, Maria; Wierzba, Tomasz H.
2018-01-01
Background: It has long been suggested that reactive oxygen species (ROS) play a role in oxygen sensing via peripheral chemoreceptors, which would imply their involvement in chemoreflex activation and autonomic regulation of heart rate. We hypothesize that antioxidants affect neurogenic cardiovascular regulation through activation of the chemoreflex, which results in increased control of the sympathetic mechanism regulating heart rhythm. The activity of xanthine oxidase (XO), which is among the major endogenous sources of ROS in the rat, has been shown to increase during hypoxia and promote oxidative stress. However, the mechanism by which XO inhibition affects neurogenic regulation of heart rhythm is still unclear. Aim: The study aimed to evaluate the effects of allopurinol-driven inhibition of XO on autonomic heart regulation in rats exposed to hypoxia followed by hyperoxia, using heart rate variability (HRV) analysis. Material and methods: Sixteen conscious male Wistar rats (350 g), control-untreated (N = 8) and pretreated with the XO inhibitor allopurinol (5 mg/kg, followed by 50 mg/kg, administered intraperitoneally; N = 8), were exposed to controlled hypobaric hypoxia (1 h) in order to activate the chemoreflex. The treatment was followed by 1 h of hyperoxia (chemoreflex suppression). Time series of 1024 RR intervals were extracted from 4 kHz ECG recordings for heart rate variability (HRV) analysis in order to calculate the following time-domain parameters: mean RR interval (RRi), SDNN (standard deviation of all normal NN intervals) and rMSSD (square root of the mean of the squared differences between adjacent NN intervals), and frequency-domain parameters (FFT method): TSP (total spectral power) as well as low- and high-frequency band powers (LF and HF). At the end of the experiment we used rat plasma to evaluate the enzymatic activity of XO and markers of oxidative stress: protein carbonyl group and 8-isoprostane concentrations. The enzymatic activities of superoxide dismutase (SOD), catalase (CAT) and glutathione
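The time-domain HRV parameters named above (SDNN, rMSSD) have standard definitions that are easy to state in code. A minimal sketch; the toy RR series is invented:

```python
import math

def sdnn(rr):
    """SDNN: sample standard deviation of all NN (RR) intervals, in ms."""
    m = sum(rr) / len(rr)
    return math.sqrt(sum((x - m) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """rMSSD: square root of the mean of the squared differences
    between adjacent RR intervals, in ms."""
    diffs = [rr[i + 1] - rr[i] for i in range(len(rr) - 1)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr_ms = [810, 790, 820, 800, 795, 805]   # toy RR interval series (ms)
print(f"SDNN={sdnn(rr_ms):.1f} ms, rMSSD={rmssd(rr_ms):.1f} ms")
```

SDNN captures overall variability, while rMSSD weights beat-to-beat changes and is therefore the conventional time-domain proxy for parasympathetic (vagal) activity.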
Performing the Super Instrument
DEFF Research Database (Denmark)
Kallionpaa, Maria
2016-01-01
The genre of contemporary classical music has seen significant innovation and research related to new super, hyper, and hybrid instruments, which opens up a vast palette of expressive potential. An increasing number of composers, performers, instrument designers, engineers, and computer programmers have become interested in different ways of “supersizing” acoustic instruments in order to open up previously-unheard instrumental sounds. Super instruments vary a great deal but each has a transformative effect on the identity and performance practice of the performing musician. Furthermore, composers can empower performers by producing super instrument works that allow the concert instrument to become an ensemble controlled by a single player. The existing instrumental skills of the performer can be multiplied and the qualities of regular acoustic instruments extended or modified. Such a situation...
Energy Technology Data Exchange (ETDEWEB)
McKenna-Lawlor, S.M.P. (Saint Patrick's Coll., Maynooth (Ireland)); Afonin, V.V.; Gringauz, K.I. (AN SSSR, Moscow (USSR). Space Research Inst.) (and others)
Twin telescope particle detector systems SLED-1 and SLED-2, with the capability of monitoring electron and ion fluxes within an energy range spanning approximately 30 keV to a few megaelectron volts, were individually launched on the two spacecraft (Phobos-2 and Phobos-1, respectively) of the Soviet Phobos Mission to Mars and its moons in July 1988. A short description of the SLED instrument and a preliminary account of representative solar-related particle enhancements recorded by SLED-1 and SLED-2 during the Cruise Phase, and by SLED-1 in the near Martian environment (within the interval 25 July 1988-26 March 1989) are presented. These observations were made while the interplanetary medium was in the course of changing over from solar minimum- to solar maximum-dominated conditions and examples are presented of events associated with each of these phenomenological states. (author).
Portable radiation instrumentation traceability of standards and measurements
International Nuclear Information System (INIS)
Wiserman, A.; Walke, M.
1995-01-01
Portable radiation measuring instruments are used to estimate and control doses for workers. Calibration of these instruments must be sufficiently accurate to ensure that administrative and legal dose limits are not likely to be exceeded due to measurement uncertainties. An instrument calibration and management program is established which permits measurements made with an instrument to be traced to a national standard. This paper describes the establishment and maintenance of calibration standards for gamma survey instruments and an instrument management program which achieves traceability of measurement for uniquely identified field instruments. (author)
Virtual Instrumentation in Biomedical Equipment
Directory of Open Access Journals (Sweden)
Tiago Faustino Andrade
2013-01-01
Nowadays, the assessment of body composition by estimating the percentage of body fat has a great impact in many fields such as nutrition, health, sports and chronic disease. The main purpose of this work is the development of a virtual instrument that permits more effective assessment of body fat, automatic data processing, recording of results and storage in a database, with high potential to support new studies (http://lipotool.com).
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV) [...] application despite the large sample. Unit root tests based on the IV estimator have better finite sample properties in this context.
NASA Instrument Cost/Schedule Model
Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George
2011-01-01
NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system level cost estimation tool; a subsystem level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.
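The Joint Confidence Level idea mentioned above can be illustrated with a small Monte Carlo sketch: draw correlated cost and schedule samples and compute the probability that both commitments are met simultaneously. The lognormal parameters, correlation, and thresholds below are invented for illustration and are not NICM's actual distributions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Correlated cost ($M) and schedule (months) via a Gaussian copula
rho = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
cost = np.exp(np.log(100) + 0.3 * z[:, 0])   # median $100M, ~30% log-sd (illustrative)
sched = np.exp(np.log(36) + 0.2 * z[:, 1])   # median 36 months (illustrative)

# Joint confidence level: probability of meeting BOTH commitments at once
jcl = np.mean((cost <= 120) & (sched <= 40))
print(f"JCL at ($120M, 40 mo): {jcl:.2f}")
```

Note that the JCL is always at most the smaller of the two marginal confidence levels, which is why a joint distribution, not two independent ones, is needed.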
Parameter Estimation of a Closed Loop Coupled Tank Time Varying System using Recursive Methods
International Nuclear Information System (INIS)
Basir, Siti Nora; Yussof, Hanafiah; Shamsuddin, Syamimi; Selamat, Hazlina; Zahari, Nur Ismarrubie
2013-01-01
This project investigates the direct identification of a closed loop plant using a discrete-time approach. The use of Recursive Least Squares (RLS), Recursive Instrumental Variable (RIV) and Recursive Instrumental Variable with Centre-Of-Triangle (RIV + COT) in the parameter estimation of a closed loop time varying system has been considered. The algorithms were applied to a coupled tank system that employs a covariance resetting technique, where the time at which parameter changes occur is unknown. The performances of all the parameter estimation methods, RLS, RIV and RIV + COT, were compared. The estimation of the system whose output was corrupted with white and coloured noise was investigated. The covariance resetting technique executed successfully when the parameters changed. RIV + COT gives better estimates than RLS and RIV in terms of convergence and maximum overshoot.
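The RLS update with covariance resetting described in this abstract can be sketched in a few lines. The first-order plant, noise level, and the periodic reset schedule below are illustrative stand-ins (the paper's actual trigger for resetting is not specified here); periodic resetting is one common choice when the time of parameter change is unknown.

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive least squares update for the model y = phi^T theta + e."""
    K = P @ phi / (1.0 + phi @ P @ phi)   # gain vector
    theta = theta + K * (y - phi @ theta) # correct estimate by prediction error
    P = P - np.outer(K, phi @ P)          # shrink covariance
    return theta, P

# Identify y[k] = a*y[k-1] + b*u[k-1]; parameter 'a' jumps mid-run.
rng = np.random.default_rng(1)
a, b = 0.8, 0.5
theta = np.zeros(2)
P = np.eye(2) * 1000.0
y_prev, est = 0.0, []
for k in range(400):
    if k == 200:
        a = 0.6                            # unknown-time parameter change
    u = rng.normal()
    y = a * y_prev + b * u + 0.01 * rng.normal()
    phi = np.array([y_prev, u])
    theta, P = rls_step(theta, P, phi, y)
    if k % 50 == 0:
        P = np.eye(2) * 1000.0             # periodic covariance resetting (illustrative)
    y_prev = y
    est.append(theta.copy())
print("final estimate (a, b):", est[-1])
```

Without the reset, plain RLS averages over both regimes and adapts very slowly after the jump; resetting the covariance restores a high gain so the estimator re-converges on the new parameters.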
Latest NASA Instrument Cost Model (NICM): Version VI
Mrozinski, Joe; Habib-Agahi, Hamid; Fox, George; Ball, Gary
2014-01-01
The NASA Instrument Cost Model, NICM, is a suite of tools which allow for probabilistic cost estimation of NASA's space-flight instruments at both the system and subsystem level. NICM also includes the ability to perform cost by analogy as well as joint confidence level (JCL) analysis. The latest version of NICM, Version VI, was released in Spring 2014. This paper will focus on the new features released with NICM VI, which include: 1) The NICM-E cost estimating relationship, which is applicable for instruments flying on Explorer-like class missions; 2) The new cluster analysis ability which, alongside the results of the parametric cost estimation for the user's instrument, also provides a visualization of the user's instrument's similarity to previously flown instruments; and 3) includes new cost estimating relationships for in-situ instruments.
International Nuclear Information System (INIS)
Kinnamon, Daniel D; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L; Lipsitz, Stuart R
2010-01-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
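The contrast the abstract draws can be reproduced in a small simulation. The sketch below uses one common IV construction, a second independent measurement of FFM as the instrument, so that the errors in the regressor and the instrument are uncorrelated; the paper's specific estimator and error magnitudes may differ, and all numbers here are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
h_true = 0.73                          # hydration fraction (assumed true value)
ffm = rng.normal(50, 8, n)             # true fat-free mass (kg)
tbw = h_true * ffm                     # true total body water (L)

# Additive technical errors on both measurements
tbw_meas = tbw + rng.normal(0, 2, n)
ffm_meas = ffm + rng.normal(0, 4, n)

# Instrument: a second, independent measurement of FFM
ffm_instr = ffm + rng.normal(0, 4, n)

mean_of_ratios = np.mean(tbw_meas / ffm_meas)   # biased by error in the denominator
iv_estimate = (np.cov(tbw_meas, ffm_instr)[0, 1]
               / np.cov(ffm_meas, ffm_instr)[0, 1])
print(mean_of_ratios, iv_estimate)
```

Because the instrument's error is independent of the measurement errors, the covariance ratio recovers h without the denominator-error bias that inflates the mean of individual ratios.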
Instrument Modeling and Synthesis
Horner, Andrew B.; Beauchamp, James W.
During the 1970s and 1980s, before synthesizers based on direct sampling of musical sounds became popular, replicating musical instruments using frequency modulation (FM) or wavetable synthesis was one of the “holy grails” of music synthesis. Synthesizers such as the Yamaha DX7 allowed users great flexibility in mixing and matching sounds, but were notoriously difficult to coerce into producing sounds like those of a given instrument. Instrument design wizards practiced the mysteries of FM instrument design.
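The FM technique this passage refers to reduces to modulating the phase of a carrier sinusoid with a second sinusoid; the carrier/modulator frequencies and modulation index below are illustrative values, not a DX7 patch.

```python
import numpy as np

def fm_tone(fc, fm, index, dur=1.0, sr=44100):
    """Simple two-operator FM: y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# A 1:1 carrier-to-modulator ratio with a moderate index yields a bright,
# harmonically rich (brass-like) spectrum.
y = fm_tone(220.0, 220.0, 5.0, dur=0.5)
```

The difficulty the passage describes comes from the inverse problem: choosing ratios, indices, and envelopes so that the resulting sideband spectrum matches a target acoustic instrument.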