WorldWideScience

Sample records for model hbm-derived estimates

  1. The effect of safety education based on the Health Belief Model (HBM) on the practice of workers of the Borujen industrial town in using personal protective respiratory equipment

    Directory of Open Access Journals (Sweden)

    A. Hasanzadeh

    2008-04-01

    Full Text Available. Background and aims: Every year, 50-158 million occupational diseases and job accidents occur in the world. Studies on job injuries show that about 150,000 injuries occur annually in Iran. Unhealthy behaviors are important problems in public health, and education is one of the best ways to change them. Interventions based on models and theories have great capacity for behavior change, and the Health Belief Model is one of the health education models useful for this purpose. This research was performed to assess the effect of a health education program based on the Health Belief Model (HBM) in preventing occupational respiratory diseases in workers. Methods: A quasi-experimental design was used for this interventional study, in which 88 workers of the Borujen industrial town participated and were randomly assigned to experimental and control groups. The data collection tools were a self-administered questionnaire of 53 questions based on the Health Belief Model, completed by the workers, and a performance checklist completed by the researcher through unobtrusive observation of the workers' safety behaviour. Validity and reliability of the tools were examined prior to the study. The educational intervention was conducted in the first stage, followed by the second data collection one month later. The data of the experimental and control groups were compared statistically before and after the intervention. Results: The mean scores of all parts of the Health Belief Model (HBM) and the performance marks of the workers regarding safety and use of personal respiratory protective equipment in the experimental group increased significantly after the educational intervention, both compared to before the study and compared to the control group. Conclusion: The results of this survey showed that by enhancement of Health Belief Model (HBM) components including

  2. [Monograph for 3-(4-methylbenzylidene)camphor (4-MBC)--HBM values for the sum of metabolites 3-(4-carboxybenzylidene)camphor (3-4CBC) and 3-(4-carboxybenzylidene)-6-hydroxycamphor (3-4CBHC) in the urine of adults and children. Statement of the HBM Commission of the German Federal Environment Agency].

    Science.gov (United States)

    2016-01-01

    The substance 3-(4-methylbenzylidene)camphor (4-MBC, CAS No. 36861-47-9 as well as 38102-62-4) is used as a UV filter in cosmetics, mainly in sunscreen lotions. National as well as European evaluations are available for the substance, especially from the Scientific Committee on Consumer Products (SCCP). The SCCP did not derive a TDI value, but used for a margin-of-safety (MoS) assessment a NOAEL of 25 mg/(kg bw · d) based on effects on the thyroid gland of rats in a subchronic study with oral administration. Newer studies, however, indicate lower NOAEL values, leading to a tolerable daily intake of 0.01 mg/kg bw. The HBM Commission established for the metabolite 3-(4-carboxybenzylidene)camphor (3-4CBC) HBM-I values of 0.09 mg/l urine for adults and 0.06 mg/l urine for children. HBM-I values for the metabolite 3-(4-carboxybenzylidene)-6-hydroxycamphor (3-4CBHC) were set at 0.38 mg/l urine for adults and 0.25 mg/l urine for children. The rounded HBM-I value for the sum of the metabolites 3-4CBC and 3-4CBHC is accordingly 0.5 mg/l urine for adults and 0.3 mg/l urine for children.

  3. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBMs) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBMs are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. The HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line, Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit
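    The Gibbs-sampling idea behind a Hierarchical Bayesian Model can be illustrated with a deliberately small sketch. The model below is hypothetical and much simpler than the paper's (no spatial CAR structure, variances fixed for brevity): section-level parameters theta_i are drawn from a shared population distribution N(mu, tau^2), observations are y_ij ~ N(theta_i, sigma^2), and the sampler alternates conjugate draws of the theta_i and mu.

```python
import random
import statistics

def gibbs_hbm(data, sigma2=1.0, tau2=1.0, iters=2000, seed=42):
    """Gibbs sampler for a two-level normal model with known variances.

    data: list of lists, one list of observations per track section.
    Returns the chain of draws for the population mean mu.
    """
    rng = random.Random(seed)
    n_groups = len(data)
    mu = 0.0
    thetas = [0.0] * n_groups
    mu_draws = []
    for _ in range(iters):
        # Draw each section-level theta_i | mu, data (conjugate normal update).
        for i, y in enumerate(data):
            prec = len(y) / sigma2 + 1.0 / tau2
            mean = (sum(y) / sigma2 + mu / tau2) / prec
            thetas[i] = rng.gauss(mean, (1.0 / prec) ** 0.5)
        # Draw the population mean mu | thetas (flat prior on mu).
        mu = rng.gauss(statistics.mean(thetas), (tau2 / n_groups) ** 0.5)
        mu_draws.append(mu)
    return mu_draws

# Synthetic "sections" whose true means scatter around 5.0 (invented data).
data = [[5.1, 4.9, 5.3], [5.8, 6.1, 5.9], [4.2, 4.0, 4.4], [5.0, 5.2, 4.8]]
draws = gibbs_hbm(data)
posterior_mean = statistics.mean(draws[500:])  # discard burn-in
```

In the paper's setting, theta_i would stand for per-section deterioration rates or initial qualities, with additional layers for spatial correlation between consecutive sections.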

  4. ESD full chip simulation: HBM and CDM requirements and simulation approach

    Directory of Open Access Journals (Sweden)

    E. Franell

    2008-05-01

    Full Text Available. Verification of ESD safety at the full-chip level is a major challenge for IC design. Phenomena originating in the overall product setup in particular pose a hurdle on the way to ESD-safe products. For stress according to the Charged Device Model (CDM), a stumbling block for simulation-based analysis is the complex current distribution among a huge number of internal nodes, leading to hardly predictable voltage drops inside the circuits.

    This paper describes a methodology for Human Body Model (HBM) simulations with improved ESD-failure coverage and a novel methodology to replace capacitive nodes within a resistive network by current sources for CDM simulation. This enables a highly efficient DC simulation that clearly marks CDM-relevant design weaknesses, allowing the software to be applied both during product development and for product verification.

  5. [Monograph on di-2-propylheptyl phthalate (DPHP) - human biomonitoring (HBM) values for the sum of metabolites oxo-mono-propylheptyl phthalate (oxo-MPHP) and hydroxy-mono-propylheptyl phthalate (OH-MPHP) in adult and child urine. Opinion of the Commission "Human Biomonitoring" of the Federal Environment Agency, Germany].

    Science.gov (United States)

    2015-07-01

    1,2-Benzenedicarboxylic acid, bis(2-propylheptyl) ester (bis(2-propylheptyl) phthalate, DPHP) is used as a plasticizer in the manufacture of plastics, mainly polyvinyl chloride (PVC). A subchronic feeding study with rats revealed a NOAEL (no observed adverse effect level) of 40 mg/(kg bw · d), which can be used as a point of departure (POD) for the derivation of an HBM-I value. Application of a total assessment factor of 200 leads to an estimate of 200 µg/kg bw as a tolerable daily intake of DPHP. On the basis of the results of metabolism studies in humans, it is possible to convert the tolerable daily intake of DPHP into tolerable concentrations of specific metabolites in urine. Thus an HBM-I value of 1 mg/L morning urine for children and 1.5 mg/L morning urine for adults was derived for the sum of the oxidized monoesters oxo-MPHP and OH-MPHP, which were identified as robust and conclusive biomarkers for DPHP.
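    The first step of the derivation above is plain arithmetic: dividing the NOAEL by the total assessment factor gives the tolerable daily intake. A minimal sketch, using only the values quoted in the monograph (the step from TDI to urinary metabolite concentrations additionally requires metabolite excretion fractions and urine volumes, which are not given here):

```python
# Point of departure from the subchronic rat feeding study (POD).
noael_mg_per_kg_bw_day = 40.0
# Total assessment factor applied by the Commission.
total_assessment_factor = 200.0

# Tolerable daily intake: NOAEL divided by the assessment factor.
tdi_mg = noael_mg_per_kg_bw_day / total_assessment_factor  # 0.2 mg/kg bw/d
tdi_ug = tdi_mg * 1000.0                                   # 200 ug/kg bw/d
```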

  6. [Substance monograph on bisphenol A (BPA) - reference and human biomonitoring (HBM) values for BPA in urine. Opinion of the Human Biomonitoring Commission of the German Federal Environment Agency (UBA)].

    Science.gov (United States)

    2012-09-01

    Bisphenol A (BPA) is used for the production of polycarbonates and synthetic resins. Many of the items that contain BPA, for example polycarbonate bottles and coated cans, are commodities from which BPA can migrate into food and drinks, resulting in ubiquitous exposure of the population. Numerous animal studies and in vitro tests have shown that BPA acts as an "endocrine disruptor". Because of the still incomplete understanding of the complex and contradictory effects of BPA at doses below the NOAEL, the toxicological significance of recent findings is uncertain. The German HBM Commission notes that the risk assessment is currently in flux and that precautionary bans on BPA have been introduced in the EU and other countries. In the light of the extensive and growing body of literature, the Commission does not see itself in a position to resolve this controversy, nor to answer the question of the relevance of observed effects of low BPA doses on human health. The Commission has derived reference values (RV95) and TDI-based HBM-I values for total BPA in urine. The RV95 values are 30 μg/l for 3-5 year olds, 15 μg/l for 6-14 year olds, and 7 μg/l for 20-29 year olds. The HBM-I values are 1.5 mg/l for children and 2.5 mg/l for adults. The Commission emphasizes that the HBM values will require immediate adjustment should the current TDI of 0.05 mg/kg bw/day be changed. For the practical application of HBM, the Commission recommends an assessment based on the RV95. Confirmed exceedance of the RV95 by repeat measurements should prompt a search for the possible source(s), following the ALARA principle.

  7. Harmonised human biomonitoring in Europe: Activities towards an EU HBM framework

    DEFF Research Database (Denmark)

    Joas, Reinhard; Casteleyn, Ludwine; Biot, Pierre

    2012-01-01

    … experts from authorities and other stakeholders joined forces to work towards developing a functional framework and standards for coherent HBM in Europe. Within the European coordination action on human biomonitoring, 35 partners from 27 European countries in the COPHES consortium aggregated … health concerns, and political and health priorities. The harmonised approach includes sampling, recruitment and analytical procedures, communication strategies and biobanking initiatives. The protocols and the harmonised approach are a means to increase acceptance and policy support and, in the future, to …

  8. Direct phase derivative estimation using difference equation modeling in holographic interferometry

    International Nuclear Information System (INIS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2014-01-01

    A new method is proposed for direct phase derivative estimation from a single spatial-frequency-modulated carrier fringe pattern in holographic interferometry. The fringe intensity in a given row/column is modeled as a difference equation of intensity with spatially varying coefficients, which carry the information on the phase derivative. Accurate estimation of the coefficients is obtained by approximating them as a linear combination of predefined linearly independent basis functions. Unlike Fourier-transform-based fringe analysis, the method does not require filtering the Fourier spectrum of the fringe intensity. Moreover, the carrier frequency is estimated by applying the proposed method to a reference interferogram. The performance of the proposed method is insensitive to fringe amplitude modulation and is validated with simulation results. (paper)
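    The core idea can be illustrated in one dimension. A sampled cosine fringe I[n] = cos(w·n + p) satisfies the difference equation I[n+1] + I[n-1] = 2·cos(w)·I[n], so the equation's coefficient directly encodes the local phase derivative w. The sketch below is a simplified, hypothetical illustration of that principle only: it assumes a constant w and no amplitude modulation, whereas the paper expands spatially varying coefficients in basis functions.

```python
import math

def estimate_phase_derivative(I):
    """Least-squares estimate of w from the difference-equation coefficient.

    Fits c in I[n+1] + I[n-1] = c * I[n] over all interior samples,
    then recovers w from c = 2*cos(w).
    """
    num = sum((I[n + 1] + I[n - 1]) * I[n] for n in range(1, len(I) - 1))
    den = sum(I[n] ** 2 for n in range(1, len(I) - 1))
    c = num / den            # least-squares coefficient, c = 2*cos(w)
    return math.acos(c / 2.0)

# Synthetic fringe with known phase derivative w = 0.3 rad/sample.
w_true = 0.3
fringe = [math.cos(w_true * n + 0.7) for n in range(200)]
w_est = estimate_phase_derivative(fringe)  # recovers approximately 0.3
```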

  9. Error estimates for near-Real-Time Satellite Soil Moisture as Derived from the Land Parameter Retrieval Model

    NARCIS (Netherlands)

    Parinussa, R.M.; Meesters, A.G.C.A.; Liu, Y.Y.; Dorigo, W.; Wagner, W.; de Jeu, R.A.M.

    2011-01-01

    A time-efficient solution to estimate the error of satellite surface soil moisture from the land parameter retrieval model is presented. The errors are estimated using an analytical solution for soil moisture retrievals from this radiative-transfer-based model that derives soil moisture from

  10. Estimating Dynamical Systems: Derivative Estimation Hints from Sir Ronald A. Fisher

    Science.gov (United States)

    Deboeck, Pascal R.

    2010-01-01

    The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common…
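    The simplest version of the derivative-estimation step described above is fitting a local line to the time series and taking its slope; the sketch below is only that elementary illustration (Deboeck's paper discusses more refined estimators), with invented data:

```python
def local_slope(t, y, center, half_window=2):
    """OLS slope of y on t over the window [center-h, center+h].

    The slope of a locally fitted line serves as a derivative estimate
    at the window's center.
    """
    idx = range(center - half_window, center + half_window + 1)
    ts = [t[i] for i in idx]
    ys = [y[i] for i in idx]
    tbar = sum(ts) / len(ts)
    ybar = sum(ys) / len(ys)
    num = sum((a - tbar) * (b - ybar) for a, b in zip(ts, ys))
    den = sum((a - tbar) ** 2 for a in ts)
    return num / den

# Synthetic series y = t^2, whose true derivative is 2t.
t = [0.1 * k for k in range(50)]
y = [x ** 2 for x in t]
slope_at_25 = local_slope(t, y, 25)  # derivative near t = 2.5, i.e. about 5.0
```

Estimated derivatives like this one can then be entered into a differential equation model as the paper describes.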

  11. Estimation efficiency of usage satellite derived and modelled biophysical products for yield forecasting

    Science.gov (United States)

    Kolotii, Andrii; Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii; Ostapenko, Vadim; Oliinyk, Tamara

    2015-04-01

    Efficient and timely crop monitoring and yield forecasting are important tasks for ensuring stability and sustainable economic development [1]. As winter crops play a prominent role in the agriculture of Ukraine, the main focus of this study is winter wheat. Our previous research [2, 3] showed that, for the available data, the use of biophysical parameters of crops such as FAPAR (derived from the Geoland-2 portal for SPOT Vegetation data) is far more efficient for crop yield forecasting than NDVI derived from MODIS data. In our current work, the efficiency of using biophysical parameters such as LAI, FAPAR and FCOVER (derived from SPOT Vegetation and PROBA-V data at a resolution of 1 km and simulated within the WOFOST model) and the NDVI product (derived from MODIS) for winter wheat monitoring and yield forecasting is estimated. The SPIRITS tool developed by JRC is used as part of the crop monitoring workflow (vegetation anomaly detection, analysis of vegetation indexes and products) and for yield forecasting. Statistics extraction is done for land cover maps created in SRI within the FP-7 SIGMA project. The efficiency of using satellite-based biophysical products and products modelled with the WOFOST model is estimated. [1] N. Kussul, S. Skakun, A. Shelestov, O. Kussul, "Sensor Web approach to Flood Monitoring and Risk Assessment", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 815-818. [2] F. Kogan, N. Kussul, T. Adamenko, S. Skakun, O. Kravchenko, O. Kryvobok, A. Shelestov, A. Kolotii, O. Kussul, and A. Lavrenyuk, "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," International Journal of Applied Earth Observation and Geoinformation, vol. 23, pp. 192-203, 2013. [3] O. Kussul, N. Kussul, S. Skakun, O. Kravchenko, A. Shelestov, A. Kolotii, "Assessment of relative efficiency of using MODIS data to winter wheat yield forecasting in Ukraine", in: IGARSS 2013, 21-26 July 2013, Melbourne, Australia, pp. 3235-3238.

  12. Estimating Derived Response Levels at the Savannah River Site for Use with Emergency Response Models

    International Nuclear Information System (INIS)

    Simpkins, A.A.

    2002-01-01

    Emergency response computer models at the Savannah River Site (SRS) are coupled with real-time meteorological data to estimate dose to individuals downwind of accidental radioactive releases. Currently, these models estimate doses for the inhalation and shine pathways, but do not consider dose due to ingestion of contaminated food products. The Food and Drug Administration (FDA) has developed derived intervention levels (DILs), which refer to the radionuclide-specific concentration in food, present throughout the relevant period of time with no intervention, that could lead to an individual receiving a radiation dose equal to the protective action guide. In the event of an emergency, concentrations in various food types are compared with these levels to make interdiction decisions. Before monitoring results become available, concentrations in environmental media (i.e., soil), called derived response levels (DRLs), can be estimated from the DILs and compared directly with computer output to provide preliminary guidance as to whether intervention is necessary. Site-specific DRLs are developed for the ingestion pathways pertinent to SRS: milk, meat, fish, grain, produce, and beverage. This provides decision-makers with an additional tool for use immediately following an accident, prior to the acquisition of food monitoring data.

  13. Estimating the effect of lay knowledge and prior contact with pulmonary TB patients, on health-belief model in a high-risk pulmonary TB transmission population.

    Science.gov (United States)

    Zein, Rizqy Amelia; Suhariadi, Fendy; Hendriani, Wiwin

    2017-01-01

    The research aimed to investigate the effect of lay knowledge of pulmonary tuberculosis (TB) and of prior contact with pulmonary TB patients on a health-belief model (HBM), as well as to identify the social determinants that affect lay knowledge. A survey research design was used, in which participants filled in a questionnaire measuring HBM constructs and lay knowledge of pulmonary TB. Research participants were 500 residents of the Semampir, Asemrowo, Bubutan, Pabean Cantian, and Simokerto districts, where the risk of pulmonary TB transmission is higher than in other districts of Surabaya. Being female, older in age, and having prior contact with pulmonary TB patients significantly increased the likelihood of having a higher level of lay knowledge. Lay knowledge is a substantial determinant of belief in the effectiveness of health behavior and of perceived personal health threat. Prior contact with pulmonary TB patients explains belief in the effectiveness of a health behavior, yet fails to explain participants' belief in the personal health threat. Health authorities should prioritize males and young people as the main target groups in a pulmonary TB awareness campaign. The campaign should be able to reconstruct people's misconceptions about pulmonary TB, thereby reshaping health-risk perception rather than focusing solely on improving lay knowledge.

  14. A systematic approach for designing a HBM Pilot Study for Europe

    DEFF Research Database (Denmark)

    Becker, Kerstin; Seiwert, Margarete; Casteleyn, Ludwine

    2014-01-01

    The objective of COPHES (Consortium to Perform Human biomonitoring on a European Scale) was to develop a harmonised approach to conduct human biomonitoring on a European scale. COPHES developed a systematic approach for designing and conducting a pilot study for an EU-wide cross-sectional human biomonitoring (HBM) study and for the implementation of the fieldwork procedures. The approach gave the basis for discussion of the main aspects of study design and conduct, and provided a decision-making tool which can be applied to many other studies. Each decision that had to be taken was listed in a table … of a human biomonitoring study. Furthermore, it provides an example of a systematic approach that may be useful to other research groups or pan-European research initiatives. In the study protocol that will be published elsewhere, these aspects are elaborated and additional aspects are covered (Casteleyn et …

  15. Assessment of Health Belief Model (HBM) impact on knowledge, beliefs, and self-efficacy of women in need of genetic counseling

    Directory of Open Access Journals (Sweden)

    Mitra Moodi

    2016-09-01

    Full Text Available. Background and Aim: Given the ever-increasing prevalence of genetic diseases, counseling for their prevention has become a pressing necessity, and promoting individuals' awareness of genetic counseling is required. The current study aimed at determining the effect of an educational program based on the Health Belief Model on the knowledge, beliefs, and self-efficacy of urban women in need of genetic counseling. Materials and Methods: In this randomized field trial study, 80 married women in need of genetic counseling were divided into two equal case and control groups. The data collection instrument was a researcher-designed questionnaire consisting of demographic data and Health Belief Model queries, completed by interview. The educational intervention was done during three 90-minute sessions with a one-week interval between each. Finally, the obtained data were fed into SPSS (version 16), applying the statistical tests of chi-square, repeated-measures ANOVA, independent t-test, Mann-Whitney, and Friedman for analysis, with P<0.05 considered significant. Results: The difference between the groups was not significant before the intervention (P>0.05), but became significant immediately and three months after the intervention (P<0.001). There was a significant difference in knowledge, perceived threat, perceived benefits, barriers, and self-efficacy in the experimental group between before the intervention, immediately after it, and three months later (P<0.001), but the difference was not significant in the control group. Conclusion: The results showed that educational interventions based on HBM increase women's knowledge, beliefs, and self-efficacy regarding the role of genetic counseling in the prevention of congenital malformations.

  16. Testing five social-cognitive models to explain predictors of personal oral health behaviours and intention to improve them.

    Science.gov (United States)

    Dumitrescu, Alexandrina L; Dogaru, Beatrice C; Duta, Carmen; Manolescu, Bogdan N

    2014-01-01

    To test the ability of several social-cognitive models to explain current behaviour and to predict intentions to engage in three different health behaviours (toothbrushing, flossing and mouthrinsing), constructs from the health belief model (HBM), theory of reasoned action (TRA), theory of planned behaviour (TPB) and the motivational process of the health action process approach (HAPA) were measured simultaneously in an undergraduate sample of 172 first-year medical students. For toothbrushing, the TRA, TPB, HBM (without the inclusion of self-efficacy, SE), HBM+SE and HAPA predictor models explained 7.4%, 22.7%, 10%, 10.2% and 10.1%, respectively, of the variance in behaviour and 7.5%, 25.6%, 12.1%, 17.5% and 17.2%, respectively, in intention. For dental flossing, the corresponding models explained 39%, 50.6%, 24.1%, 25.4% and 27.7% of the variance in behaviour and 39.4%, 52.7%, 33.7%, 35.9% and 43.2% in intention. For mouthrinsing, they explained 43.9%, 45.1%, 20%, 29% and 36% of the variance in behaviour and 58%, 59.3%, 49.2%, 59.8% and 66.2% in intention. The individual significant predictors of current behaviour were attitudes, barriers and outcome expectancy. Our findings revealed that the theory of planned behaviour and the health action process approach were the best predictors of intentions to engage in these behaviours.

  17. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression, relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...

  18. Improving spatio-temporal model estimation of satellite-derived PM2.5 concentrations: Implications for public health

    Science.gov (United States)

    Barik, M. G.; Al-Hamdan, M. Z.; Crosson, W. L.; Yang, C. A.; Coffield, S. R.

    2017-12-01

    Satellite-derived environmental data, available at a range of spatio-temporal scales, are contributing to the growing use of health impact assessments of air pollution in the public health sector. Models developed by correlating Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth (AOD) with ground measurements of fine particulate matter smaller than 2.5 microns (PM2.5) are widely applied to measure PM2.5 spatial and temporal variability. In the public health sector, associations of PM2.5 with respiratory and cardiovascular diseases are often investigated to quantify air quality impacts on these health concerns. In order to improve the predictability of PM2.5 estimation using correlation models, we have included meteorological variables, higher-resolution AOD products and instantaneous PM2.5 observations in statistical estimation models. Our results showed that incorporation of high-resolution (1-km) Multi-Angle Implementation of Atmospheric Correction (MAIAC)-generated MODIS AOD, meteorological variables and instantaneous PM2.5 observations improved model performance in various parts of California (CA), USA, where single-variable AOD-based models showed relatively weak performance. In this study, we further asked whether these improved models would actually be more successful for exploring associations of public health outcomes with estimated PM2.5. To answer this question, we geospatially investigated the relationship of model-estimated PM2.5 with respiratory and cardiovascular diseases such as asthma, high blood pressure, coronary heart disease, heart attack and stroke in CA, using health data from the Centers for Disease Control and Prevention (CDC)'s Wide-ranging Online Data for Epidemiologic Research (WONDER) and the Behavioral Risk Factor Surveillance System (BRFSS). PM2.5 estimation from these improved models has the potential to improve our understanding of associations between public health concerns and air quality.

  19. Estimating Biomass of Barley Using Crop Surface Models (CSMs) Derived from UAV-Based RGB Imaging

    Directory of Open Access Journals (Sweden)

    Juliane Bendig

    2014-10-01

    Full Text Available. Crop monitoring is important in precision agriculture. Estimating above-ground biomass helps to monitor crop vitality and to predict yield. In this study, we estimated fresh and dry biomass on a summer barley test site with 18 cultivars and two nitrogen (N) treatments, using the plant height (PH) from crop surface models (CSMs). The super-high-resolution, multi-temporal (1 cm/pixel) CSMs were derived from red, green, blue (RGB) images captured from a small unmanned aerial vehicle (UAV). Comparison with PH reference measurements yielded an R² of 0.92. The test site with different cultivars and treatments was monitored during "Biologische Bundesanstalt, Bundessortenamt und CHemische Industrie" (BBCH) stages 24-89. A high correlation was found between PH from CSMs and fresh biomass (R² = 0.81) and dry biomass (R² = 0.82). Five models for above-ground fresh and dry biomass estimation were tested by cross-validation. Modelling biomass between different N-treatments for fresh biomass produced the best results (R² = 0.71). The main limitation was the influence of lodging cultivars in the later growth stages, producing irregular plant heights. The method has potential for future application by non-professionals, i.e., farmers.

  20. Factors influencing pursuit of hearing evaluation: Enhancing the health belief model with perceived burden from hearing loss on communication partners.

    Science.gov (United States)

    Schulz, Kristine A; Modeste, Naomi; Lee, Jerry; Roberts, Rhonda; Saunders, Gabrielle H; Witsell, David L

    2016-07-01

    There is limited application of health-behavior-based theoretical models in hearing healthcare, yet other fields utilizing these models have shown their value in affecting behavior change. The health belief model (HBM) has demonstrated appropriateness for hearing research. This study assessed factors that influence an individual with suspected hearing loss to pursue clinical evaluation, with a focus on the perceived burden of hearing loss on communication partners, using the HBM as a framework. A cross-sectional design was used, collecting demographics along with three validated hearing-loss-related questionnaires. Participants were patients from the Duke University Medical Center Otolaryngology Clinic aged 55-75 years who indicated a communication partner had expressed concern about their hearing. A final sample of 413 completed questionnaire sets was achieved. The HBM construct 'cues to action' was a significant predictor, and the perceived burden of hearing loss on communication partners was a significant predictor that improved model fit when added to the HBM: 72.0% correct prediction when burden is added versus 66.6% when not. Further application of health behavior models in hearing healthcare is warranted.

  1. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available. In this paper, the characteristics and capabilities of the power transmission network static state estimator are presented. The solution process for the mathematical model, including measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.

  2. Spatial Topography of Individual-Specific Cortical Networks Predicts Human Cognition, Personality, and Emotion.

    Science.gov (United States)

    Kong, Ru; Li, Jingwei; Orban, Csaba; Sabuncu, Mert R; Liu, Hesheng; Schaefer, Alexander; Sun, Nanbo; Zuo, Xi-Nian; Holmes, Avram J; Eickhoff, Simon B; Yeo, B T Thomas

    2018-06-06

    Resting-state functional magnetic resonance imaging (rs-fMRI) offers the opportunity to delineate individual-specific brain networks. A major question is whether individual-specific network topography (i.e., location and spatial arrangement) is behaviorally relevant. Here, we propose a multi-session hierarchical Bayesian model (MS-HBM) for estimating individual-specific cortical networks and investigate whether individual-specific network topography can predict human behavior. The multiple layers of the MS-HBM explicitly differentiate intra-subject (within-subject) from inter-subject (between-subject) network variability. By ignoring intra-subject variability, previous network mappings might confuse intra-subject variability for inter-subject differences. Compared with other approaches, MS-HBM parcellations generalized better to new rs-fMRI and task-fMRI data from the same subjects. More specifically, MS-HBM parcellations estimated from a single rs-fMRI session (10 min) showed comparable generalizability as parcellations estimated by 2 state-of-the-art methods using 5 sessions (50 min). We also showed that behavioral phenotypes across cognition, personality, and emotion could be predicted by individual-specific network topography with modest accuracy, comparable to previous reports predicting phenotypes based on connectivity strength. Network topography estimated by MS-HBM was more effective for behavioral prediction than network size, as well as network topography estimated by other parcellation approaches. Thus, similar to connectivity strength, individual-specific network topography might also serve as a fingerprint of human behavior.

  3. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  4. Markov models for digraph panel data : Monte Carlo-based derivative estimation

    NARCIS (Netherlands)

    Schweinberger, Michael; Snijders, Tom A. B.

    2007-01-01

    A parametric, continuous-time Markov model for digraph panel data is considered. The parameter is estimated by the method of moments. A convenient method for estimating the variance-covariance matrix of the moment estimator relies on the delta method, requiring the Jacobian matrix-that is, the
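
    The delta-method step described here, propagating the moment estimator's covariance through a differentiable transformation with a numerically obtained Jacobian, can be sketched generically (toy function and covariance, not the digraph model itself):

```python
import numpy as np

def delta_method_cov(g, theta_hat, cov_theta, eps=1e-6):
    """Approximate covariance of g(theta_hat) via the delta method:
    Cov[g] ~ J Sigma J^T, where J is the Jacobian of g at theta_hat,
    obtained here by central finite differences."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    g0 = np.atleast_1d(g(theta_hat))
    J = np.zeros((g0.size, theta_hat.size))
    for j in range(theta_hat.size):
        step = np.zeros_like(theta_hat)
        step[j] = eps
        J[:, j] = (np.atleast_1d(g(theta_hat + step))
                   - np.atleast_1d(g(theta_hat - step))) / (2.0 * eps)
    return J @ np.asarray(cov_theta) @ J.T

# Toy check: g(a, b) = a * b with a known parameter covariance
cov = np.array([[0.04, 0.00], [0.00, 0.09]])
v = delta_method_cov(lambda t: t[0] * t[1], [2.0, 3.0], cov)
print(float(v[0, 0]))
```

    For the product example the Jacobian is [b, a] = [3, 2], so the variance is 9(0.04) + 4(0.09) = 0.72, matching the analytic delta method exactly.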

  5. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  6. Differences among estimates of critical power and anaerobic work capacity derived from five mathematical models and the three-minute all-out test.

    Science.gov (United States)

    Bergstrom, Haley C; Housh, Terry J; Zuniga, Jorge M; Traylor, Daniel A; Lewis, Robert W; Camic, Clayton L; Schmidt, Richard J; Johnson, Glen O

    2014-03-01

    Estimates of critical power (CP) and anaerobic work capacity (AWC) from the power output vs. time relationship have been derived from various mathematical models. The purpose of this study was to examine estimates of CP and AWC from the multiple work bout, 2- and 3-parameter models, and those from the 3-minute all-out CP (CP3min) test. Nine college-aged subjects performed a maximal incremental test to determine the peak oxygen consumption rate and the gas exchange threshold. On separate days, each subject completed 4 randomly ordered constant power output rides to exhaustion to estimate CP and AWC from 5 regression models (2 linear, 2 nonlinear, and 1 exponential). During the final visit, CP and AWC were estimated from the CP3min test. The nonlinear 3-parameter (Nonlinear-3) model produced the lowest estimate of CP. The exponential (EXP) model and the CP3min test were not statistically different and produced the highest estimates of CP. Critical power estimated from the Nonlinear-3 model was 14% less than those from the EXP model and the CP3min test and 4-6% less than those from the linear models. Furthermore, the Nonlinear-3 and nonlinear 2-parameter (Nonlinear-2) models produced significantly greater estimates of AWC than did the linear models and CP3min. The current findings suggested that the Nonlinear-3 model may provide estimates of CP and AWC that more accurately reflect the asymptote of the power output vs. time relationship, the demarcation of the heavy and severe exercise intensity domains, and anaerobic capabilities than the linear models and the CP3min test do.
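
    For the multiple-work-bout protocol, the simplest of the five regressions is the linear work-time model, in which total work is regressed on time to exhaustion so that the slope estimates CP and the intercept estimates AWC. A sketch with hypothetical trial data (not the study's measurements):

```python
import numpy as np

# Hypothetical constant-power trials: power held (W) and time to exhaustion (s)
power = np.array([320.0, 270.0, 235.0, 222.0])
t_lim = np.array([150.0, 300.0, 600.0, 900.0])
work = power * t_lim                    # total work per trial (J)

# Linear work-time model: W_lim = AWC + CP * t_lim
# slope -> critical power (CP), intercept -> anaerobic work capacity (AWC)
CP, AWC = np.polyfit(t_lim, work, 1)
print(f"CP = {CP:.1f} W, AWC = {AWC / 1000.0:.1f} kJ")
```

    With these hypothetical trials the fit returns CP near 200 W and AWC near 19 kJ, the physiological ballpark such models target; the nonlinear and exponential variants compared in the abstract differ mainly in how they treat the asymptote.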

  7. Simultaneous inference for model averaging of derived parameters

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian

    2015-01-01

    Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family...
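
    A common concrete route to a model-averaged estimate of a derived parameter (simpler than the asymptotic standard-error machinery proposed here) is to weight per-model estimates by Akaike weights. A sketch with hypothetical AIC values:

```python
import numpy as np

# Hypothetical: a derived parameter (e.g., an ED50 computed after fitting)
# estimated under three candidate models, with each model's AIC
aic      = np.array([210.3, 212.1, 215.8])
estimate = np.array([4.2, 4.6, 3.9])

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                         # Akaike weights
avg = float(np.sum(w * estimate))    # model-averaged derived parameter
print(round(avg, 2))
```

    The point of the paper is precisely that attaching a standard error or simultaneous confidence set to such an average is not straightforward, since the weights themselves are random.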

  8. Utilization of BIA-Derived Bone Mineral Estimates Exerts Minimal Impact on Body Fat Estimates via Multicompartment Models in Physically Active Adults.

    Science.gov (United States)

    Nickerson, Brett S; Tinsley, Grant M

    2018-03-21

    The purpose of this study was to compare body fat estimates and fat-free mass (FFM) characteristics produced by multicompartment models when utilizing either dual energy X-ray absorptiometry (DXA) or single-frequency bioelectrical impedance analysis (SF-BIA) for bone mineral content (BMC) in a sample of physically active adults. Body fat percentage (BF%) was estimated with 5-compartment (5C), 4-compartment (4C), 3-compartment (3C), and 2-compartment (2C) models, and DXA. The 5C-Wang model with DXA for BMC (i.e., 5C-Wang-DXA) was the criterion. 5C-Wang using SF-BIA for BMC (i.e., 5C-Wang-BIA), 4C-Wang-DXA (DXA for BMC), 4C-Wang-BIA (BIA for BMC), and 3C-Siri all produced values similar to 5C-Wang-DXA (r > 0.99). FFM characteristics (i.e., FFM density, water/FFM, mineral/FFM, and protein/FFM) for 5C-Wang-DXA and 5C-Wang-BIA were each compared with the "reference body" cadavers of Brozek et al. 5C-Wang-BIA FFM density differed significantly from the "reference body" in women (1.103 ± 0.007 g/cm³). Water/FFM and mineral/FFM were significantly lower in men and women when comparing 5C-Wang-DXA and 5C-Wang-BIA with the "reference body," whereas protein/FFM was significantly higher (all p ≤ 0.001). 3C-Lohman-BIA and 3C-Lohman-DXA produced error similar to 2C models and DXA and are therefore not recommended as multicompartment models. Although more advanced multicompartment models (e.g., 4C-Wang and 5C-Wang) can utilize BIA-derived BMC with minimal impact on body fat estimates, the increased accuracy of these models over 3C-Siri is minimal. Copyright © 2018 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  9. Psychological models for development of motorcycle helmet use among students in Vietnam

    Science.gov (United States)

    Kumphong, J.; Satiennam, T.; Satiennam, W.; Trinh, Tu Anh

    2018-04-01

    A helmet can reduce head injury severity in a crash. The aim of this study was to examine the helmet use intention of students who ride motorcycles in Vietnam, using Structural Equation Modeling (SEM). Questionnaires developed from several traffic psychology models, including the Theory of Planned Behaviour (TPB), Traffic Locus of Control (T-LOC), and the Health Belief Model (HBM), were distributed to students at Ton Thang University and the University of Architecture, Ho Chi Minh City. SEM was used to explain helmet use behaviour. The results indicate that TPB, T-LOC and HBM could all explain variance in helmet use behaviour; however, the TPB explained helmet use intention better than T-LOC and HBM. The outcome of this study is useful for the agencies responsible for improving motorcycle safety.

  10. Effects of a Nutrition Education Intervention Designed based on the Health Belief Model (HBM on Reducing the Consumption of Unhealthy Snacks in the Sixth Grade Primary School Girls

    Directory of Open Access Journals (Sweden)

    Azam Fathi

    2017-02-01

    Background: Malnutrition can threaten the mental and physical development of children, while healthy nutrition can improve their mental and physical status. To select the best foods, children need nutrition education. This study aimed to determine the effect of nutrition education on reducing the consumption of unhealthy snacks among female primary school students in Qom, Iran. Materials and Methods: This interventional study was conducted on 88 students, assigned to intervention and control groups, who were selected via a multistage random sampling method. The data were collected using a valid and reliable researcher-made questionnaire designed based on the health belief model (HBM). Four training sessions were first held for the intervention group; two months later, data were collected again from both groups of students (intervention and control). The collected data were analyzed by SPSS version 16.0 using descriptive statistics and independent and paired t-tests. Results: The mean scores of knowledge and performance in the intervention group were 96.12 and 18.61 before the intervention and changed to 110.00 and 68.22 after the intervention. Before the intervention there was no statistically significant difference between the two groups in the mean scores of knowledge and the constructs of the health belief model (P > 0.05). After the intervention, the scores of all variables and of the unhealthy-snack-consumption behavior improved significantly in the intervention group.

  11. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated: first, the impact of MMS noise on the parameter estimates is assessed, and then estimators that correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  12. Estimation of microbial protein supply in ruminants using urinary purine derivatives

    International Nuclear Information System (INIS)

    Makkar, H.P.S.; Chen, X.B.

    2004-01-01

    This publication presents various models, describing the quantitative excretion of purine derivatives in urine, developed for various breeds of cattle and for sheep, goat, camel and buffalo and their use for estimation of microbial protein supply in ruminant livestock. It also describes progress made over the last decade in analytical methods for determining purine derivatives, and a unique approach for estimating microbial protein supply using spot urine samples developed under the FAO/IAEA CRP. This approach of using spot urine samples dispenses with quantitative recovery of urine, enabling its use by field and extension workers for evaluation of the nutritional status of farm animals. Future areas of research are also highlighted in the book. This book is a good source of reference for research workers, students and extension workers alike
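
    The core calculation behind these models follows the form of the Chen and Gomes cattle relationships: correct urinary purine derivative (PD) excretion for an endogenous contribution scaling with metabolic body weight, back out absorbed microbial purines, and convert to microbial nitrogen. The coefficients below are quoted from memory of that cattle model and should be treated as assumptions to be checked against the publication's own tables, which also differ by species and breed:

```python
def microbial_n_cattle(pd_excretion_mmol_d, body_weight_kg):
    """Estimate microbial N supply (g/d) in cattle from urinary purine
    derivative (PD) excretion. Coefficients follow the Chen & Gomes form
    and are assumptions here (verify against the source tables):
      PD excreted: Y = 0.85 * X + 0.385 * W**0.75   (X = absorbed purines, mmol/d)
      Microbial N: N = 70 * X / (0.116 * 0.83 * 1000)
    """
    endogenous = 0.385 * body_weight_kg ** 0.75      # mmol/d, scales with W^0.75
    absorbed = (pd_excretion_mmol_d - endogenous) / 0.85
    return 70.0 * absorbed / (0.116 * 0.83 * 1000.0)

# Hypothetical: 150 mmol/d total PD excreted by a 400 kg animal
mn = microbial_n_cattle(150.0, 400.0)
print(round(mn, 1))
```

    The spot-sample approach described above replaces the 24-h PD total with a PD:creatinine ratio, trading quantitative urine collection for an assumed daily creatinine output.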

  13. Quantifying scaling effects on satellite-derived forest area estimates for the conterminous USA

    Science.gov (United States)

    Daolan Zheng; L.S. Heath; M.J. Ducey; J.E. Smith

    2009-01-01

    We quantified the scaling effects on forest area estimates for the conterminous USA using regression analysis and the National Land Cover Dataset 30m satellite-derived maps in 2001 and 1992. The original data were aggregated to: (1) broad cover types (forest vs. non-forest); and (2) coarser resolutions (1km and 10 km). Standard errors of the model estimates were 2.3%...
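
    The aggregation step can be mimicked with a toy majority-rule resampling (a synthetic random map, not the NLCD data); it also shows why coarsening biases area estimates when a class covers less than half the landscape:

```python
import numpy as np

rng = np.random.default_rng(1)
fine = (rng.random((120, 120)) < 0.3).astype(int)   # ~30% forest at fine scale

def majority_aggregate(grid, factor):
    """Aggregate a binary map by square blocks: a coarse cell is classed
    forest if more than half of its fine cells are forest (majority rule)."""
    n = grid.shape[0] // factor
    blocks = grid[:n * factor, :n * factor].reshape(n, factor, n, factor)
    frac = blocks.mean(axis=(1, 3))
    return (frac > 0.5).astype(int)

fine_frac = fine.mean()
coarse_frac = majority_aggregate(fine, 40).mean()
print(f"fine-scale forest fraction:  {fine_frac:.3f}")
print(f"aggregated (majority) share: {coarse_frac:.3f}")
```

    Because forest covers under half of each block in this dispersed synthetic map, the majority rule drives the aggregated forest share to zero; real landscapes are spatially clumped, so the bias is milder but still resolution-dependent, which is what the regression analysis quantifies.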

  14. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  15. New Fokker-Planck derivation of heavy gas models for neutron thermalization

    International Nuclear Information System (INIS)

    Larsen, E.W.; Williams, M.M.R.

    1990-01-01

    This paper is concerned with the derivation of new generalized heavy gas models for the infinite medium neutron energy spectrum equation. Our approach is general and can be used to derive improved Fokker-Planck approximations for other types of kinetic equations. In this paper we obtain two distinct heavy gas models, together with estimates for the corresponding errors. The models are shown in a special case to reduce to modified heavy gas models proposed earlier by Corngold (1962). The error estimates show that both of the new models should be more accurate than Corngold's modified heavy gas model, and that the first of the two new models should generally be more accurate than the second. (author)

  16. Canada's forest biomass resources: deriving estimates from Canada's forest inventory

    International Nuclear Information System (INIS)

    Penner, M.; Power, K.; Muhairwe, C.; Tellier, R.; Wang, Y.

    1997-01-01

    A biomass inventory for Canada was undertaken to address the data needs of carbon budget modelers, specifically to provide estimates of above-ground tree components and of non-merchantable trees in Canadian forests. The objective was to produce a national method for converting volume estimates to biomass that was standardized, repeatable across the country, efficient and well documented. Different conversion methods were used for low productivity forests (productivity class 1) and higher productivity forests (productivity class 2). The conversion factors were computed by constructing hypothetical stands for each site, age, species and province combination, and estimating the merchantable volume and all the above-ground biomass components from suitable published equations. This report documents the procedures for deriving the national biomass inventory, and provides illustrative examples of the results. 46 refs., 9 tabs., 5 figs

  17. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the model classes Arithmetic Reduction of Age and Arithmetic Reduction of Intensity are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
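
    In the limiting case of minimal repair (no age or intensity reduction), the underlying Power Law Process has closed-form maximum likelihood estimates, which the paper's likelihood machinery generalizes. A sketch with hypothetical truck failure ages and a time-truncated observation window:

```python
import numpy as np

def plp_mle(times, T):
    """MLE for a Power Law Process (NHPP with intensity
    lambda(t) = (beta/eta) * (t/eta)**(beta - 1)), time-truncated at T:
      beta_hat = n / sum(log(T / t_i)),  eta_hat = T / n**(1 / beta_hat)
    """
    times = np.asarray(times, dtype=float)
    n = times.size
    beta = n / np.sum(np.log(T / times))
    eta = T / n ** (1.0 / beta)
    return beta, eta

# Hypothetical failure ages (hours), observation truncated at 2000 h
failures = [420.0, 900.0, 1300.0, 1600.0, 1850.0]
beta_hat, eta_hat = plp_mle(failures, 2000.0)
print(f"beta = {beta_hat:.2f}, eta = {eta_hat:.0f} h")
```

    Here beta_hat > 1, i.e. the failure intensity is increasing (deteriorating equipment), which is the regime where imperfect-repair models and preventive maintenance decisions matter most.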

  18. The Effect of an Educational Intervention Program on the Adoption of Low Back Pain Preventive Behaviors in Nurses: An Application of the Health Belief Model.

    Science.gov (United States)

    Sharafkhani, Naser; Khorsandi, Mahboobeh; Shamsi, Mohsen; Ranjbaran, Mehdi

    2016-02-01

    Study Design Randomized controlled trial. Objective The purpose of this study was to identify the effect of a theory-based educational intervention program on the level of knowledge and Health Belief Model (HBM) constructs among nurses in terms of the adoption of preventive behaviors. Methods This pretest/posttest quasi-experimental study was conducted on 100 nurses who were recruited through the multistage sampling method. The nurses were randomly assigned to intervention and control groups. The participants were evaluated before and 3 months after the educational intervention. A multidimensional questionnaire was prepared based on the theoretical structures of the HBM to collect the data. Data analysis was performed using descriptive and inferential statistics. Results There was no significant difference in the mean values of HBM constructs between the intervention and control groups prior to the intervention. However, after the administration of the educational program, the mean scores of knowledge and HBM constructs significantly increased in the intervention group when compared with the control group. Conclusions The educational intervention based on the HBM was effective in improving the nurses' scores of knowledge and HBM constructs; therefore, theory-based health educational strategies are suggested as an effective alternative to traditional educational interventions.

  19. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  20. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  1. Estimation of stature from sternum - Exploring the quadratic models.

    Science.gov (United States)

    Saraf, Ashish; Kanchan, Tanuj; Krishan, Kewal; Ateriya, Navneet; Setia, Puneet

    2018-04-14

    Identification of the dead is significant in the examination of unknown, decomposed and mutilated human remains. Establishing the biological profile is the central issue in such a scenario, and stature estimation remains one of the important criteria in this regard. The present study was undertaken to estimate stature from different parts of the sternum. A sample of 100 sterna was obtained from individuals during medicolegal autopsies. The length of the deceased and various measurements of the sternum were recorded. Student's t-test was performed to find sex differences in stature and the sternal measurements included in the study. Correlations between stature and sternal measurements were analysed using Karl Pearson's correlation, and linear and quadratic regression models were derived. All the measurements were found to be significantly larger in males than in females. Stature correlated best with the combined length of the sternum among males (R = 0.894), females (R = 0.859), and for the total sample (R = 0.891). The study showed that the models derived for stature estimation from the combined length of the sternum are likely to give the most accurate estimates of stature in forensic casework, compared to those from the manubrium and mesosternum. Accuracy of stature estimation further increased with the quadratic models derived for the mesosternum among males and for the combined length of the sternum among males and females, when compared to linear regression models. Future studies in different geographical locations and with larger sample sizes are proposed to confirm the study observations. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
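
    The quadratic models referred to here have the form stature = a·L² + b·L + c in a sternal measurement L. A least-squares sketch with hypothetical measurements (not the study's sample or its published coefficients):

```python
import numpy as np

# Hypothetical combined sternal lengths (mm) and statures (cm)
length  = np.array([135.0, 142.0, 150.0, 158.0, 165.0, 172.0])
stature = np.array([158.0, 162.0, 167.0, 171.0, 176.0, 181.0])

# Quadratic model: stature = a * L**2 + b * L + c, fitted by least squares
a, b, c = np.polyfit(length, stature, 2)

def predict(L):
    return a * L ** 2 + b * L + c

pred = float(predict(160.0))
print(round(pred, 1))
```

    In practice the quadratic term only helps when the stature-length relationship genuinely curves; otherwise it adds variance, which is why the study compares both model families against the same sample.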

  2. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  3. Algorithm for Financial Derivatives Evaluation in a Generalized Multi-Heston Model

    Directory of Open Access Journals (Sweden)

    Dan Negura

    2013-02-01

    In this paper we show how a financial derivative could be estimated based on an assumed Multi-Heston model. Keywords: Euler-Maruyama discretization method, Monte Carlo simulation, Heston model, Double-Heston model, Multi-Heston model
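
    A minimal single-factor version of the approach named in the keywords, Euler-Maruyama discretization plus Monte Carlo averaging under a plain Heston model (illustrative parameters, full-truncation scheme for the variance), might look like:

```python
import numpy as np

def heston_call_mc(S0, v0, K, T, r, kappa, theta, xi, rho,
                   n_paths=20000, n_steps=200, seed=7):
    """Monte Carlo price of a European call under a one-factor Heston model,
    using Euler-Maruyama with full truncation for the variance process."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # full truncation
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return float(np.exp(-r * T) * np.maximum(S - K, 0.0).mean())

price = heston_call_mc(S0=100.0, v0=0.04, K=100.0, T=1.0, r=0.02,
                       kappa=1.5, theta=0.04, xi=0.3, rho=-0.7)
print(round(price, 2))
```

    A Multi-Heston extension would sum several such variance factors, each with its own mean reversion and its own correlated Brownian pair, in the drift and diffusion of S.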

  4. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
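
    The robust-fitting idea, downweighting outliers while keeping fidelity to the ODE, can be illustrated more simply by fitting ODE parameters with a robust loss on the solution itself (a sketch with a logistic model and synthetic outliers, not the authors' penalized-smoothing estimator):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t_obs = np.linspace(0.0, 10.0, 25)

def logistic(t, y, r, K):
    # dy/dt = r * y * (1 - y / K)
    return r * y * (1.0 - y / K)

def simulate(r, K, y0=0.5):
    sol = solve_ivp(logistic, (0.0, 10.0), [y0], t_eval=t_obs, args=(r, K))
    return sol.y[0]

# Synthetic observations with measurement noise plus a few gross outliers
y_obs = simulate(0.8, 10.0) + rng.normal(0.0, 0.2, t_obs.size)
y_obs[[5, 15]] += 4.0

def residuals(params):
    return simulate(*params) - y_obs

# soft_l1 loss downweights the outliers relative to plain least squares
fit = least_squares(residuals, x0=[0.5, 8.0],
                    bounds=([0.01, 1.0], [3.0, 30.0]),
                    loss="soft_l1", f_scale=0.5)
r_hat, K_hat = fit.x
print(f"r = {r_hat:.2f}, K = {K_hat:.2f}")
```

    The paper's two-level scheme differs in that the trajectory is a penalized basis expansion rather than a numerical ODE solution, with gradients obtained via the implicit function theorem; the robust-loss idea is the common ingredient.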

  5. Improving prenatal care in pregnant women in Iranshahr, Iran: Applying Health Belief Model.

    Science.gov (United States)

    Izadirad, Hossien; Niknami, Shamsoddin; Zareban, Iraj; Hidarnia, Alireza

    2017-11-07

    To determine the effect of an education-based intervention on receiving adequate prenatal care. This randomized, controlled trial was conducted on 90 primiparous pregnant women referred in Iranshahr, Iran for prenatal care (intervention = 45, control group = 45). The data were collected from February to June 2016 using a questionnaire developed based on the Health Belief Model (HBM). The intervention group received three intervention sessions during the second trimester of pregnancy, and 3 months after the intervention, both groups completed the questionnaire. Data were analyzed using independent-sample t-tests, chi-squared tests, paired t-tests, Pearson correlation and multivariate regression. Unlike the control group, the intervention group's mean scores for knowledge, HBM model variables and frequency of prenatal care differed significantly from pre- to post-intervention (pre-intervention mean = 12.62 ± 2.63, post-intervention mean = 17.71 ± 1.56; p < 0.05). Self-efficacy was positively correlated with knowledge (r = 0.304, p = 0.02) and adequate prenatal care (r = 0.583, p < 0.001). The constructs of the HBM explained 75% of the variance in frequency of prenatal care in multivariable models. Developing an educational program based on the HBM was effective in the adoption of prenatal care. Additionally, considering social, economic, and educational follow-up while implementing these programs is recommended.

  6. Predicting human papillomavirus vaccine uptake in young adult women: Comparing the Health Belief Model and Theory of Planned Behavior

    Science.gov (United States)

    Gerend, Mary A.; Shepherd, Janet E.

    2012-01-01

    Background Although theories of health behavior have guided thousands of studies, relatively few studies have compared these theories against one another. Purpose The purpose of the current study was to compare two classic theories of health behavior—the Health Belief Model (HBM) and the Theory of Planned Behavior (TPB)—in their prediction of human papillomavirus (HPV) vaccination. Methods After watching a gain-framed, loss-framed, or control video, women (N=739) ages 18–26 completed a survey assessing HBM and TPB constructs. HPV vaccine uptake was assessed ten months later. Results Although the message framing intervention had no effect on vaccine uptake, support was observed for both the TPB and HBM. Nevertheless, the TPB consistently outperformed the HBM. Key predictors of uptake included subjective norms, self-efficacy, and vaccine cost. Conclusions Despite the observed advantage of the TPB, findings revealed considerable overlap between the two theories and highlighted the importance of proximal versus distal predictors of health behavior. PMID:22547155

  7. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  8. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

    While solar radiation can be expressed physically on cloudless days, this becomes difficult under cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used, which estimate solar radiation from other meteorological parameters measured at the stations. In this study, a solar radiation estimation model was derived using the Buckingham theorem, which expresses solar radiation through dimensionless pi parameters. The derived model was compared with temperature-based models in the literature: the Allen, Hargreaves, Chen and Bristow-Campbell models. MPE, RMSE, MBE and NSE error analysis methods were used in the comparison, with data obtained from North Dakota's agricultural climate network. The model obtained within the scope of the study gives better accuracy than the other models, in short-term performance in particular but also, as the RMSE results show, in long-term performance and percentage errors. The Buckingham theorem was thus found useful for estimating solar radiation.
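
    Of the temperature-based comparison models, Hargreaves-Samani is the easiest to state: Rs = KT · Ra · sqrt(Tmax − Tmin), with Ra the extraterrestrial radiation. A sketch (the KT value below is an assumption; it is site-dependent and should be calibrated locally):

```python
import math

def hargreaves_solar(ra_mj_m2_day, tmax_c, tmin_c, kt=0.17):
    """Hargreaves-Samani estimate of global solar radiation (MJ/m^2/day):
    Rs = KT * Ra * sqrt(Tmax - Tmin). KT is an empirical coefficient,
    roughly 0.16 for interior and 0.19 for coastal sites; the default
    here is an assumption, not a calibrated value."""
    return kt * ra_mj_m2_day * math.sqrt(tmax_c - tmin_c)

# Hypothetical day: Ra = 30 MJ/m^2/day, Tmax = 28 C, Tmin = 12 C
rs = hargreaves_solar(30.0, 28.0, 12.0)
print(round(rs, 1))
```

    The diurnal temperature range acts as a proxy for cloudiness, which is exactly the information a dimensionless-group model must also encode in one of its pi parameters.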

  9. Sexual-Reproductive Health Belief Model of college students

    Directory of Open Access Journals (Sweden)

    Masoomeh Simbar

    2004-09-01

    Sexual-reproductive health of youth is one of the most unknown aspects of our community, while the world, including our country, is faced with the risk of AIDS spreading. The aim of this study was to describe the Health Belief Model (HBM) of students regarding sexual-reproductive health behaviors and to evaluate the ability of the model to predict related behaviors. Using quota sampling, 1117 male and female students of Qazvin Medical Science and International universities were included in the study in 1991. A self-completed questionnaire was prepared containing closed questions based on HBM components, including perceived threats (susceptibility and severity of related diseases), perceived reproductive benefits and barriers, and self-efficacy of youth regarding reproductive health. A total of 645 participants were female and 457 were male (mean age 21.4±2.4 and 22.7±3.5, respectively). The Health Belief Model of the students showed that they perceived a moderate threat from AIDS and venereal diseases and their health outcomes. Most of them perceived the benefits of reproductive health behaviors. They believed that the ability of youth to consider reproductive health is low or moderate. However, they noted some barriers to the spread of reproductive health among youth, including inadequacy of services. Boys felt a higher level of threat of acquiring AIDS and venereal diseases compared to girls, but girls had greater knowledge about these diseases and their complications. The Health Belief Model of the students with premarital intercourse behavior was not significantly different from that of the students without this behavior (Mann-Whitney, P > 0.05). Female students and students without a history of premarital intercourse had significantly more positive attitudes towards abstinence, compared to male students and students with a history of premarital intercourse, respectively (Mann-Whitney, P < 0.05). Seventy five percent of students believed in

  10. Biomonitoring of the mycotoxin Zearalenone: current state of the art and application to human exposure assessment.

    Science.gov (United States)

    Mally, Angela; Solfrizzo, Michele; Degen, Gisela H

    2016-06-01

    Zearalenone (ZEN), a mycotoxin with high estrogenic activity in vitro and in vivo, is a widespread food contaminant that is commonly detected in maize, wheat, barley, sorghum, rye and other grains. Human exposure estimates based on analytical data on ZEN occurrence in various food categories and food consumption data suggest that human exposure to ZEN and modified forms of ZEN may be close to or even exceed the tolerable daily intake (TDI) derived by the European Food Safety Authority (EFSA) for some consumer groups. Considering the inherent uncertainties in estimating dietary intake of ZEN that may lead to an under- or overestimation of ZEN exposure and consequently human risk and current lack of data on vulnerable consumer groups, there is a clear need for more comprehensive and reliable exposure data to refine ZEN risk assessment. Human biomonitoring (HBM) is increasingly being recognized as an efficient and cost-effective way of assessing human exposure to food contaminants, including mycotoxins. Based on animal and (limited) human data on the toxicokinetics of ZEN, it appears that excretion of ZEN and its major metabolites may present suitable biomarkers of ZEN exposure. In view of the limitations of available dietary exposure data on ZEN and its modified forms, the purpose of this review is to provide an overview of recent studies utilizing HBM to monitor and assess human exposure to ZEN. Considerations are given to animal and human toxicokinetic data relevant to HBM, analytical methods, and available HBM data on urinary biomarkers of ZEN exposure in different cohorts.
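
    The biomarker-based exposure estimate underlying such HBM studies is a reverse-dosimetry calculation: daily intake is back-calculated from the amount excreted in 24-h urine, divided by the urinary excretion fraction and body weight. A sketch with hypothetical values (the excretion fraction used is illustrative, not a ZEN-specific toxicokinetic constant):

```python
def probable_daily_intake(urine_conc_ng_ml, urine_vol_l_day,
                          excreted_fraction, body_weight_kg):
    """Reverse dosimetry from a urinary biomarker:
    PDI (ng/kg bw/day) = C_urine * V_24h / (f_excreted * bw).
    The excreted fraction is compound-specific toxicokinetic data;
    the value passed below is an assumption for illustration."""
    amount_ng = urine_conc_ng_ml * urine_vol_l_day * 1000.0  # 1000 mL per L
    return amount_ng / (excreted_fraction * body_weight_kg)

# Hypothetical: 0.1 ng/mL total biomarker in 1.6 L of 24-h urine,
# 10% urinary excretion, 70 kg body weight
pdi = probable_daily_intake(0.1, 1.6, 0.10, 70.0)
print(round(pdi, 1))   # ng/kg bw/day
```

    Comparing such a PDI against a TDI expressed in the same units is how biomonitoring data are translated into the risk statements discussed in the review; the excretion fraction is the dominant source of uncertainty in the back-calculation.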

  11. Mucopolysaccharidosis enzyme production by bone marrow and dental pulp derived human mesenchymal stem cells.

    Science.gov (United States)

    Jackson, Matilda; Derrick Roberts, Ainslie; Martin, Ellenore; Rout-Pitt, Nathan; Gronthos, Stan; Byers, Sharon

    2015-04-01

    Mucopolysaccharidoses (MPS) are inherited metabolic disorders that arise from a complete loss or a reduction in one of eleven specific lysosomal enzymes. MPS children display pathology in multiple cell types leading to tissue and organ failure and early death. Mesenchymal stem cells (MSCs) give rise to many of the cell types affected in MPS, including those that are refractory to current treatment protocols such as hematopoietic stem cell (HSC) based therapy. In this study we compared multiple MPS enzyme production by bone marrow derived (hBM) and dental pulp derived (hDP) MSCs to enzyme production by HSCs. hBM MSCs produce significantly higher levels of MPS I, II, IIIA, IVA, VI and VII enzyme than HSCs, while hDP MSCs produce significantly higher levels of MPS I, IIIA, IVA, VI and VII enzymes. Higher transfection efficiency was observed in MSCs (89%) compared to HSCs (23%) using a lentiviral vector. Over-expression of four different lysosomal enzymes resulted in up to 9303-fold and up to 5559-fold greater levels in MSC cell layer and media respectively. Stable, persistent transduction of MSCs and sustained over-expression of MPS VII enzyme was observed in vitro. Transduction of MSCs did not affect the ability of the cells to differentiate down osteogenic, adipogenic or chondrogenic lineages, but did partially delay differentiation down the non-mesodermal neurogenic lineage. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
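
    The three error sources listed in the record can, under an independence assumption, be combined into a per-cell uncertainty surface by root-sum-of-squares. A minimal sketch with invented sigma grids (the NOAA NCEI methodology itself is more involved):

```python
import numpy as np

# Per-cell 1-sigma errors (metres) for a toy 2x3 DEM, one grid per source.
# The three sources follow the record: source measurements, interpolation,
# and datum transformation. Values are invented for illustration.
sigma_measure = np.array([[0.1, 0.1, 0.3], [0.1, 0.5, 0.3]])
sigma_interp  = np.array([[0.0, 0.4, 0.8], [0.2, 0.6, 0.0]])
sigma_datum   = np.full((2, 3), 0.05)

# Assuming the error sources are independent, the combined per-cell
# uncertainty is the root-sum-of-squares of the individual sigmas.
sigma_total = np.sqrt(sigma_measure**2 + sigma_interp**2 + sigma_datum**2)
print(sigma_total.round(2))
```

    The resulting grid can then be carried into inundation models alongside the DEM itself.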

  13. Estimation of rumen microbial-nitrogen of sheep using urinary excretion of purine derivatives

    International Nuclear Information System (INIS)

    Liu Dasen; Shan Anshan

    2004-01-01

    Determination of rumen microbial nitrogen of sheep using urinary excretion of purine derivatives was studied. Uric acid and xanthine + hypoxanthine were not affected by diets, but total purine derivatives for the 1 mg borax/kg diet were higher than for other diets (p<0.05). Microbial nitrogen estimated from allantoin was not affected by diets, but microbial nitrogen estimated from total purine derivatives for the 1 mg borax/kg diet was higher than for other diets (p<0.05). Microbial nitrogen estimated from total purine derivatives was higher than that estimated from allantoin.
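
    Purine-derivative studies in sheep commonly use the Chen & Gomes (1992) relation between daily PD excretion Y (mmol/d), absorbed microbial purines X (mmol/d), and body weight W: Y = 0.84X + 0.150·W^0.75·e^(−0.25X), with microbial N ≈ 0.727X g/d. The sketch below assumes that relation (constants quoted from memory, so treat them as illustrative) and solves it for X with Newton's method:

```python
import math

def microbial_n_from_pd(pd_excretion_mmol, body_weight_kg, tol=1e-9):
    """Return (X, microbial N in g/d) for sheep, assuming the Chen & Gomes
    relation Y = 0.84*X + 0.150 * W**0.75 * exp(-0.25*X), solved for X by
    Newton's method, with microbial N = 0.727 * X."""
    w075 = body_weight_kg ** 0.75
    x = pd_excretion_mmol / 0.84          # initial guess: ignore endogenous term
    for _ in range(100):
        f  = 0.84 * x + 0.150 * w075 * math.exp(-0.25 * x) - pd_excretion_mmol
        df = 0.84 - 0.0375 * w075 * math.exp(-0.25 * x)
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x, 0.727 * x

# Hypothetical 40 kg sheep excreting 8.60 mmol PD per day:
x, mic_n = microbial_n_from_pd(8.60, 40.0)
print(round(x, 2), round(mic_n, 2))
```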

  14. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, development has been limited in the context of linear mixed effect models and, in particular, for small area estimation. Linear mixed effect models are the backbone of small area estimation. In this article, we consider area-level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  15. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  16. Estimation and asymptotic theory for transition probabilities in Markov Renewal Multi–state models

    NARCIS (Netherlands)

    Spitoni, C.; Verduijn, M.; Putter, H.

    2012-01-01

    In this paper we discuss estimation of transition probabilities for semi–Markov multi–state models. Non–parametric and semi–parametric estimators of the transition probabilities for a large class of models (forward going models) are proposed. Large sample theory is derived using the functional

  17. Using the Health Belief Model to Explain Mothers' and Fathers' Intention to Participate in Universal Parenting Programs

    OpenAIRE

    Salari, Raziye; Filus, Ania

    2016-01-01

    Using the Health Belief Model (HBM) as a theoretical framework, we studied factors related to parental intention to participate in parenting programs and examined the moderating effects of parent gender on these factors. Participants were a community sample of 290 mothers and 290 fathers of 5- to 10-year-old children. Parents completed a set of questionnaires assessing child emotional and behavioral difficulties and the HBM constructs concerning perceived program benefits and barriers, percei...

  18. An estimation on the derived limits of effluent water concentration

    International Nuclear Information System (INIS)

    Okamura, Yasuharu; Kobayashi, Katuhiko; Kusama, Tomoko; Yoshizawa, Yasuo

    1984-01-01

    The values of Derived Limits of Effluent Water Concentration, (DLEC)sub(w), have been estimated in accordance with the principles of the recent recommendations of the International Commission on Radiological Protection. The (DLEC)sub(w)'s were derived from the Annual Limits on Intake for individual members of the public (ALIsub(p)), considering realistic models of exposure pathways and annual intake rates of foods. The ALIsub(p)'s were decided after consideration of body organ mass and other age dependent parameters. We assumed that the materials which brought exposure to the public were drinking water, fish, seaweed, invertebrate and seashore. The age dependence of annual intake rate of food might be proportional to a person's energy expenditure rate. The following results were obtained. Infants were the critical group of the public at the time of derivation of (DLEC)sub(w). The ALIsub(p)'s for the infants were about one-hundredth of those for workers and their (DLEC)sub(w)'s were about one-third of those for the adult members of the public. (author)

  19. Estimating daily climatologies for climate indices derived from climate model data and observations

    Science.gov (United States)

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues of either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
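
    The sample-size sensitivity the authors describe can be illustrated by watching the spread of an empirical percentile shrink as the record lengthens. A minimal sketch on synthetic "daily temperature" data (the distribution and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_spread(n_days, n_resamples=2000, q=90):
    """Std. dev. of the empirical q-th percentile across resamples of size
    n_days, drawn from a fixed synthetic daily-temperature distribution."""
    samples = rng.normal(15.0, 5.0, size=(n_resamples, n_days))
    return np.percentile(samples, q, axis=1).std()

small = percentile_spread(30)    # one month of daily observations
large = percentile_spread(900)   # many seasons of hindcasts
print(small > large)             # more data -> a more stable threshold
```

    This is the motivation for pooling hindcast ensembles, or fitting a distribution, instead of using raw empirical percentiles from a short reference period.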

  20. A Remote Sensing-Derived Corn Yield Assessment Model

    Science.gov (United States)

    Shrestha, Ranjay Man

    be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate annual corn yield from changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn-producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt area. The model's goodness of fit was well defined, with a high coefficient of determination (R2>0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% signified the performance of the model in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate yield at a finer spatial resolution (field level). The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield.
    Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield

  1. Estimation of Airborne Lidar-Derived Tropical Forest Canopy Height Using Landsat Time Series in Cambodia

    Directory of Open Access Journals (Sweden)

    Tetsuji Ota

    2014-11-01

    Full Text Available In this study, we test and demonstrate the utility of disturbance and recovery information derived from annual Landsat time series to predict current forest vertical structure (as compared to the more common approaches that consider a sample of airborne Lidar and single-date Landsat derived variables). Mean Canopy Height (MCH) was estimated separately using single date, time series, and the combination of single date and time series variables in multiple regression and random forest (RF) models. The combination of single date and time series variables, which integrate disturbance history over the entire time series, overall provided better MCH prediction than using either of the two sets of variables separately. In general, the RF models resulted in improved performance in all estimates over those using multiple regression. The lowest validation error was obtained using Landsat time series variables in a RF model (R2 = 0.75 and RMSE = 2.81 m). Combining single date and time series data was more effective when the RF model was used (as opposed to multiple regression). The RMSE for RF mean canopy height prediction was reduced by 13.5% when combining the two sets of variables, as compared to the 3.6% RMSE decline presented by multiple regression. This study demonstrates the value of airborne Lidar and long term Landsat observations to generate estimates of forest canopy height using the random forest algorithm.
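
    As a toy version of the variable-set comparison in this record, the sketch below uses ordinary least squares (standing in for the study's random forest, which would require an extra library) on synthetic data; all variables are invented. It also illustrates why the combined set cannot do worse in-sample: adding a predictor never decreases R² for nested OLS fits.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic predictors: one "single-date" spectral variable and one
# "time-series" disturbance-history variable, plus a synthetic canopy height.
single_date = rng.normal(size=n)
time_series = rng.normal(size=n)
mch = 10 + 2.0 * single_date + 3.0 * time_series + rng.normal(0, 2, n)

def fit_metrics(X, y):
    """OLS fit with intercept; returns (R^2, RMSE) on the training data."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ beta
    rmse = np.sqrt(np.mean((y - pred) ** 2))
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2, rmse

r2_single, _ = fit_metrics(single_date[:, None], mch)
r2_both, rmse_both = fit_metrics(np.column_stack([single_date, time_series]), mch)
print(r2_single < r2_both)  # combined predictors explain more variance
```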

  2. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of a given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
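
    The Bradley-Terry special case mentioned above has a simple maximum-likelihood fit. Below is a sketch of Hunter's (2004) MM iteration for pair-comparison counts, on invented toy data (this is a standard algorithm, not the estimator analyzed in the record):

```python
import numpy as np

def bradley_terry_mle(wins, n_iter=200):
    """MM algorithm (Hunter, 2004) for Bradley-Terry strengths.
    wins[i, j] = number of times item i beat item j."""
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T                       # games played per pair
    w = np.ones(len(wins))                  # initial strengths
    for _ in range(n_iter):
        total_wins = wins.sum(axis=1)
        denom = n / (w[:, None] + w[None, :])
        np.fill_diagonal(denom, 0.0)
        w = total_wins / denom.sum(axis=1)  # MM update
        w /= w.sum()                        # fix the arbitrary scale
    return w

# Three items, 10 games per pair; item 0 strongest, item 2 weakest (toy data).
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
w = bradley_terry_mle(wins)
print(w.argsort()[::-1])  # ranking from strongest to weakest
```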

  3. Estimation of Aboveground Biomass in Alpine Forests: A Semi-Empirical Approach Considering Canopy Transparency Derived from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Martin Rutzinger

    2010-12-01

    Full Text Available In this study, a semi-empirical model that was originally developed for stem volume estimation is used for aboveground biomass (AGB) estimation of a spruce-dominated alpine forest. The reference AGB of the available sample plots is calculated from forest inventory data by means of biomass expansion factors. Furthermore, the semi-empirical model is extended by three different canopy transparency parameters derived from airborne LiDAR data. These parameters have not been considered for stem volume estimation until now and are introduced in order to investigate the behavior of the model concerning AGB estimation. The developed additional input parameters are based on the assumption that transparency of vegetation can be measured by determining the penetration of the laser beams through the canopy. These parameters are calculated for every single point within the 3D point cloud in order to consider the varying properties of the vegetation in an appropriate way. Exploratory Data Analysis (EDA) is performed to evaluate the influence of the additional LiDAR-derived canopy transparency parameters for AGB estimation. The study is carried out in a 560 km2 alpine area in Austria, where reference forest inventory data and LiDAR data are available. The investigations show that the introduction of the canopy transparency parameters does not change the results significantly according to R2 (R2 = 0.70 to R2 = 0.71) in comparison to the results derived from the semi-empirical model, which was originally developed for stem volume estimation.

  4. Estimating crop net primary production using inventory data and MODIS-derived parameters

    Energy Technology Data Exchange (ETDEWEB)

    Bandaru, Varaprasad; West, Tristram O.; Ricciuto, Daniel M.; Izaurralde, Roberto C.

    2013-06-03

    National estimates of spatially-resolved cropland net primary production (NPP) are needed for diagnostic and prognostic modeling of carbon sources, sinks, and net carbon flux. Cropland NPP estimates that correspond with existing cropland cover maps are needed to drive biogeochemical models at the local scale and over national and continental extents. Existing satellite-based NPP products tend to underestimate NPP on croplands. A new Agricultural Inventory-based Light Use Efficiency (AgI-LUE) framework was developed to estimate individual crop biophysical parameters for use in estimating crop-specific NPP. The method is documented here and evaluated for corn and soybean crops in Iowa and Illinois in years 2006 and 2007. The method includes a crop-specific enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shortwave radiation data estimated using Mountain Climate Simulator (MTCLIM) algorithm and crop-specific LUE per county. The combined aforementioned variables were used to generate spatially-resolved, crop-specific NPP that correspond to the Cropland Data Layer (CDL) land cover product. The modeling framework represented well the gradient of NPP across Iowa and Illinois, and also well represented the difference in NPP between years 2006 and 2007. Average corn and soybean NPP from AgI-LUE was 980 g C m-2 yr-1 and 420 g C m-2 yr-1, respectively. This was 2.4 and 1.1 times higher, respectively, for corn and soybean compared to the MOD17A3 NPP product. Estimated gross primary productivity (GPP) derived from AgI-LUE were in close agreement with eddy flux tower estimates. The combination of new inputs and improved datasets enabled the development of spatially explicit and reliable NPP estimates for individual crops over large regional extents.
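
    The light-use-efficiency chain in the record (vegetation index → fPAR → absorbed PAR → GPP → NPP) can be written in a few lines. The EVI-to-fPAR mapping and all coefficients below are illustrative assumptions, not the AgI-LUE values:

```python
# Minimal light-use-efficiency sketch of an NPP calculation. All numbers and
# the linear EVI-to-fPAR mapping are invented for illustration.

def npp_lue(evi, par_mj_m2, lue_gc_mj, autotrophic_resp_fraction=0.5):
    """NPP in g C m-2 for one growing season (toy coefficients)."""
    fpar = max(0.0, min(1.0, 1.25 * evi - 0.05))  # assumed linear EVI -> fPAR
    apar = fpar * par_mj_m2                       # absorbed PAR (MJ m-2)
    gpp = lue_gc_mj * apar                        # gross primary production
    return gpp * (1.0 - autotrophic_resp_fraction)

# Growing-season totals for a hypothetical corn pixel:
print(round(npp_lue(evi=0.6, par_mj_m2=1200, lue_gc_mj=2.5), 1))
```

    With these invented inputs the result lands near 1000 g C m-2 yr-1, the order of magnitude the record reports for corn.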

  5. Measurement of the Constructs of Health Belief Model related to Self-care during Pregnancy in Women Referred to South Tehran Health Network

    Directory of Open Access Journals (Sweden)

    Yalda Soleiman Ekhtiari

    2016-03-01

    Full Text Available Background and Objective: Self-care activities during pregnancy can be effective in reducing adverse pregnancy outcomes. The Health Belief Model (HBM) is one of the most applicable models for educational needs assessment in the planning and implementation of educational interventions. The purpose of this study was to measure the constructs of the HBM related to self-care during pregnancy in women referred to the South Tehran health network. Materials and Methods: In this cross-sectional study, 270 pregnant women who were referred to health centers of the South Tehran Health Network participated. Demographic, knowledge, and attitude questionnaires based on the constructs of the HBM were used to measure the status of knowledge and attitude of the women. Data were analyzed using the statistical software SPSS 18. Results: The results showed that 92.2% of women had knowledge scores at a good level. The scores of perceived severity, perceived self-efficacy, and cues to action were at a good level in most women, but most women obtained weak scores in perceived susceptibility, perceived benefits, and barriers. Conclusion: The HBM can be used as an appropriate tool for assessing the status of pregnant women in the field of self-care behaviors during pregnancy and for planning and implementation of educational interventions.

  6. Effect of health belief model and health promotion model on breast cancer early diagnosis behavior: a systematic review.

    Science.gov (United States)

    Ersin, Fatma; Bahar, Zuhal

    2011-01-01

    Breast cancer is an important public health problem on the grounds that it is frequently seen and it is a fatal disease. The objective of this systematic review is to indicate the effects of interventions performed by nurses using the Health Belief Model (HBM) and Health Promotion Model (HPM) on breast cancer early diagnosis behaviors and on the components of the two models. The review was created in line with the 2009 guide of the Centre for Reviews and Dissemination (CRD), developed by the University of York for the National Institute for Health Research, and was conducted using the PUBMED, OVID, EBSCO and COCHRANE databases. Six hundred seventy-eight studies (PUBMED: 236, OVID: 162, EBSCO: 175, COCHRANE: 105) were found in total. Abstracts and full texts of these studies were evaluated in terms of inclusion and exclusion criteria, and 9 studies were determined to meet the criteria. Sample sizes of the studies varied between 94 and 1655. The studies showed that education grounded in these theories was effective on breast cancer early diagnosis behaviors. When the literature is examined, experimental studies conducted by nurses that compare the concepts of the HBM and HPM before and after intervention and show the effect of these concepts on education are limited in number. Randomized controlled studies that compare HBM and HPM concepts before and after intervention can be useful in evaluating the efficiency of the interventions.

  7. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models where extensive recalculation for each of a variety of different feedbacks is impractical.
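
    The adjoint trick described in this abstract, where one extra solve prices the sensitivity to many perturbations, can be illustrated on a steady linear model A(p)x = b with response R = c·x: solving the single adjoint system Aᵀλ = c gives dR/dp = −λᵀ(∂A/∂p)x for any perturbation of A. A toy sketch (matrices invented), checked against a finite difference:

```python
import numpy as np

# Toy steady linear model A x = b with scalar response R = c.x.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, c)            # one adjoint solve serves all feedbacks

dA = np.array([[1.0, 0.0], [0.0, 0.0]])  # one "feedback": perturbs A[0, 0]
adjoint_sens = -lam @ (dA @ x)           # dR/dp from the adjoint identity

# Finite-difference check (this is the expensive recalculation the adjoint
# method avoids repeating for every feedback):
eps = 1e-6
x_eps = np.linalg.solve(A + eps * dA, b)
fd_sens = (c @ x_eps - c @ x) / eps
print(abs(adjoint_sens - fd_sens) < 1e-5)
```

    Each additional feedback only costs one cheap inner product once λ is known, which is the efficiency claim of the record.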

  8. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  9. On estimation of the noise variance in high-dimensional linear models

    OpenAIRE

    Golubev, Yuri; Krymova, Ekaterina

    2017-01-01

    We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
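
    For orientation, the classical n ≫ p baseline of this problem (the paper's high-dimensional setting requires regularisation instead) estimates the noise variance from OLS residuals with the unbiased divisor n − p. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 500, 5, 2.0

# Synthetic linear model y = X beta + noise (all values invented).
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(0.0, sigma, n)

# Unbiased noise-variance estimate: residual sum of squares over (n - p).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ beta_hat) ** 2)
sigma2_hat = rss / (n - p)
print(round(sigma2_hat, 2))  # should be near sigma**2 = 4.0
```

    When p is comparable to n, this estimator breaks down, which is what motivates the adaptive normalisation studied in the record.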

  10. Testing the sensitivity of terrestrial carbon models using remotely sensed biomass estimates

    Science.gov (United States)

    Hashimoto, H.; Saatchi, S. S.; Meyer, V.; Milesi, C.; Wang, W.; Ganguly, S.; Zhang, G.; Nemani, R. R.

    2010-12-01

    There is a large uncertainty in carbon allocation and biomass accumulation in forest ecosystems. With the recent availability of remotely sensed biomass estimates, we now can test some of the hypotheses commonly implemented in various ecosystem models. We used biomass estimates derived by integrating MODIS, GLAS and PALSAR data to verify above-ground biomass estimates simulated by a number of ecosystem models (CASA, BIOME-BGC, BEAMS, LPJ). This study extends the hierarchical framework (Wang et al., 2010) for diagnosing ecosystem models by incorporating independent estimates of biomass for testing and calibrating respiration, carbon allocation, turn-over algorithms or parameters.

  11. Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.

    2008-01-01

    In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Centre and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.
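
    Agreement between simulated and observed streamflow is often summarised with the Nash-Sutcliffe efficiency, a standard hydrology skill score (the record does not name the metric it used, so this is a generic illustration with invented flows):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect match; values <= 0 mean the model is no better than
    simply predicting the mean observed flow."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [10.0, 12.0, 30.0, 55.0, 22.0, 14.0]   # invented daily flows (m3/s)
sim = [11.0, 13.0, 26.0, 50.0, 25.0, 15.0]
print(round(nash_sutcliffe(obs, sim), 3))
```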

  12. Contributions in Radio Channel Sounding, Modeling, and Estimation

    DEFF Research Database (Denmark)

    Pedersen, Troels

    2009-01-01

    This thesis spans three strongly related topics in wireless communication: channel sounding, modeling, and estimation. Three main problems are addressed: optimization of spatio-temporal apertures for channel sounding; estimation of per-path power spectral densities (psds); and modeling. … The model relies on a "propagation graph" where vertices represent scatterers and edges represent the wave propagation conditions between scatterers. The graph has a recursive structure, which permits modeling of the transfer function of the graph. We derive a closed-form expression of the infinite-bounce impulse response. This expression is used for simulation of the impulse response of randomly generated propagation graphs. The obtained realizations exhibit the well-observed exponential power decay versus delay and specular-to-diffuse transition.

  13. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  14. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

    Full Text Available This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual-specific fixed effects and threshold effects simultaneously. We extend Hansen's (1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the problem of endogeneity in these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations. It shows that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  15. Application of the Health Belief Model to U.S. Magazine Text and Image Coverage of Skin Cancer and Recreational Tanning (2000-2012).

    Science.gov (United States)

    McWhirter, Jennifer E; Hoffman-Goetz, Laurie

    2016-01-01

    The health belief model (HBM) has been widely used to inform health education, social marketing, and health communication campaigns. Although the HBM can explain and predict an individual's willingness to engage in positive health behaviors, its application to, and penetration of the underlying constructs into, mass media content has not been well characterized. We examined 574 articles and 905 images about skin cancer and tanning risks, behaviors, and screening from 20 U.S. women's and men's magazines (2000-2012) for the presence of HBM constructs: perceived susceptibility, perceived severity, perceived benefits, perceived barriers, self-efficacy, and cues to action. Susceptibility (48.1%) and severity (60.3%) information was common in text. Perceived benefits (36.4%) and barriers (41.5%) to prevention of skin cancer were fairly equally mentioned in articles. Self-efficacy (48.4%) focused on sunscreen use. There was little emphasis on HBM constructs related to early detection. Few explicit cues to action about skin cancer appeared in text (12.0%) or images (0.1%). HBM constructs were present to a significantly greater extent in text versus images (e.g., severity, 60.3% vs. 11.3%, respectively, χ² = 399.51, p < .0001; benefits prevention, 36.4% vs. 8.0%, respectively, χ² = 184.80, p < .0001), suggesting that readers are not visually messaged in ways that would effectively promote skin cancer prevention and early detection behaviors.

  16. Using sensitivity derivatives for design and parameter estimation in an atmospheric plasma discharge simulation

    International Nuclear Information System (INIS)

    Lange, Kyle J.; Anderson, W. Kyle

    2010-01-01

    The problem of applying sensitivity analysis to a one-dimensional atmospheric radio frequency plasma discharge simulation is considered. A fluid simulation is used to model an atmospheric pressure radio frequency helium discharge with a small nitrogen impurity. Sensitivity derivatives are computed for the peak electron density with respect to physical inputs to the simulation. These derivatives are verified using several different methods to compute sensitivity derivatives. It is then demonstrated how sensitivity derivatives can be used within a design cycle to change these physical inputs so as to increase the peak electron density. It is also shown how sensitivity analysis can be used in conjunction with experimental data to obtain better estimates for rate and transport parameters. Finally, it is described how sensitivity analysis could be used to compute an upper bound on the uncertainty for results from a simulation.
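Sensitivity derivatives like those in the abstract above are typically verified by comparing independent differentiation methods. A minimal, generic illustration (the function is an invented stand-in, not the plasma model): central finite differences versus the complex-step method, which avoids subtractive cancellation and so remains accurate for tiny step sizes.

```python
import numpy as np

def f(x):
    """Stand-in scalar model output (illustrative only)."""
    return np.exp(x) / (np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5

# Central finite difference: truncation error O(h^2), roundoff limits h.
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6

# Complex-step derivative: f'(x) ~ Im(f(x + ih)) / h, no cancellation,
# so h can be taken absurdly small.
cs = np.imag(f(x0 + 1e-200j)) / 1e-200

print(fd, cs)
```

The two estimates agree to many significant digits, which is the kind of cross-check used to verify simulation sensitivities before trusting them in a design cycle.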

  17. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  18. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
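The switching mechanism described in the two records above (differential activity of competing populations as Brownian motion with drift) can be sketched in a few lines. The parameter values below are hypothetical, and the sketch covers only a single-state regime akin to continuous presentation, without the HBM's stable/unstable hierarchy: adaptation drifts the activity difference toward the opposite threshold, and noise jitters the crossing times.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: activity difference x(t) between two competing
# populations drifts away from the dominant percept (adaptation) and a
# percept switch occurs when x crosses the opposite threshold.
dt, drift, sigma, thresh = 0.01, 1.0, 1.0, 1.0
x, percept, t = 0.0, +1, 0.0
switch_times = []
while t < 200.0:
    x += -percept * drift * dt + sigma * np.sqrt(dt) * rng.normal()
    t += dt
    if percept == +1 and x <= -thresh:
        percept = -1
        switch_times.append(t)
    elif percept == -1 and x >= thresh:
        percept = +1
        switch_times.append(t)

dominance = np.diff(switch_times)   # times between percept changes
print(len(switch_times), dominance.mean())
```

Fitting drift, noise and threshold to observed dominance-time distributions is the estimation problem the HBM makes tractable with its small parameter set.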

  19. Sediment delivery estimates in water quality models altered by resolution and source of topographic data.

    Science.gov (United States)

    Beeson, Peter C; Sadeghi, Ali M; Lang, Megan W; Tomer, Mark D; Daughtry, Craig S T

    2014-01-01

    Moderate-resolution (30-m) digital elevation models (DEMs) are normally used to estimate slope for the parameterization of non-point source, process-based water quality models. These models, such as the Soil and Water Assessment Tool (SWAT), use the Universal Soil Loss Equation (USLE) and Modified USLE to estimate sediment loss. The slope length and steepness factor, a critical parameter in USLE, significantly affects sediment loss estimates. Depending on slope range, a twofold difference in slope estimation potentially results in as little as 50% change or as much as 250% change in the LS factor and subsequent sediment estimation. Recently, the availability of much finer-resolution (∼3 m) DEMs derived from Light Detection and Ranging (LiDAR) data has increased. However, the use of these data may not always be appropriate because slope values derived from fine spatial resolution DEMs are usually significantly higher than slopes derived from coarser DEMs. This increased slope results in considerable variability in modeled sediment output. This paper addresses the implications of parameterizing models using slope values calculated from DEMs with different spatial resolutions (90, 30, 10, and 3 m) and sources. Overall, we observed over a 2.5-fold increase in slope when using a 3-m instead of a 90-m DEM, which increased modeled soil loss using the USLE calculation by 130%. Care should be taken when using LiDAR-derived DEMs to parameterize water quality models because doing so can result in significantly higher slopes, which considerably alter modeled sediment loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
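The resolution effect described above is easy to reproduce with synthetic terrain: micro-topography that a 3-m grid resolves is averaged away at 30 m, so the finer DEM yields systematically steeper slopes. The ramp and wave amplitudes below are illustrative, not taken from the paper.

```python
import numpy as np

def slope_percent(dem, cell):
    """Maximum-gradient slope (%) from central differences."""
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2 * cell)
    dzdy = (dem[2:, 1:-1] - dem[:-2, 1:-1]) / (2 * cell)
    return 100 * np.hypot(dzdx, dzdy)

# Synthetic 3 m DEM (illustrative): a 5% regional ramp plus 30 m-wavelength
# micro-topography that a 30 m grid cannot resolve.
n, cell = 240, 3.0
x = np.arange(n) * cell
X = np.broadcast_to(x, (n, n))
dem3 = 0.05 * X + 2.0 * np.sin(2 * np.pi * X / 30.0)

# Coarsen to 30 m by block-averaging 10x10 cells; the 30 m waves average out.
dem30 = dem3.reshape(n // 10, 10, n // 10, 10).mean(axis=(1, 3))

s3 = slope_percent(dem3, 3.0).mean()
s30 = slope_percent(dem30, 30.0).mean()
print(f"mean slope: {s3:.1f}% on the 3 m DEM vs {s30:.1f}% on the 30 m DEM")
```

Because the USLE LS factor grows faster than linearly with slope steepness, a several-fold difference in mean slope propagates into a much larger difference in modeled sediment loss, consistent with the sensitivity reported above.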

  20. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results for the estimator stating that its limiting distribution is multivariate stable. The rate of consistency of the estimator is slower than √T (as obtained by the quasi-maximum likelihood estimator) and depends on the tails of the data generating process.

  1. Evaluation of radar-derived precipitation estimates using runoff simulation : report for the NFR Energy Norway funded project 'Utilisation of weather radar data in atmospheric and hydrological models'

    Energy Technology Data Exchange (ETDEWEB)

    Abdella, Yisak; Engeland, Kolbjoern; Lepioufle, Jean-Marie

    2012-11-01

    This report presents the results from the project 'Utilisation of weather radar data in atmospheric and hydrological models', funded by NFR and Energy Norway. Three precipitation products (radar-derived, interpolated, and a combination of the two) were generated as input for hydrological models. All three products were evaluated by comparing simulated and observed runoff at catchments. In order to expose any bias in the precipitation inputs, no precipitation correction factors were applied. Three criteria were used to measure performance: Nash-Sutcliffe efficiency, correlation coefficient, and bias. The results show that simulations with the combined precipitation input give the best performance. We also see that the radar-derived precipitation estimates give reasonable runoff simulations even without region-specific parameters for the Z-R relationship. All three products resulted in an underestimation of runoff, revealing a systematic bias in the measurements (e.g. catch deficit, orographic effects, Z-R relationships) that can be improved. There is substantial potential for using radar-derived precipitation in runoff simulation, especially in catchments without internal precipitation gauges. (Author)
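The three evaluation criteria named in the abstract (Nash efficiency, correlation, bias) have standard definitions; a minimal sketch with invented runoff series:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the obs mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def relative_bias(obs, sim):
    """Relative volume bias: negative values mean runoff is underestimated."""
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0, 1.5])
sim = np.array([0.9, 1.8, 3.5, 2.9, 1.8, 1.3])   # systematically low, as in the study

nse = nash_sutcliffe(obs, sim)
r = np.corrcoef(obs, sim)[0, 1]
bias = relative_bias(obs, sim)
print(f"NSE = {nse:.3f}, r = {r:.3f}, bias = {bias:+.1%}")
```

Note that a simulation can score high on NSE and correlation while still carrying the negative volume bias that reveals a systematic precipitation deficit.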

  2. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model with good structural performance, which allows LES predictions very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation designed to control both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but overestimates the mixing process in that case.
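The optimal-estimator concept above has a simple operational reading: for a candidate parameter set, the conditional mean of the target given those parameters is the best possible model built on them, and its MSE is an irreducible lower bound. A self-contained sketch on synthetic data (the target and parameter sets are invented, not the scalar-flux quantities of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "DNS" data (illustrative): the exact subgrid term tau depends
# on two resolved quantities a and b, plus irreducible noise.
n = 200_000
a, b = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
tau = a * b + 0.1 * rng.normal(size=n)

def optimal_estimator_mse(features, target, bins=32):
    """MSE of the conditional mean E[target | features], estimated by
    histogram binning: a lower bound for any model built on `features`."""
    idx = np.zeros(len(target), dtype=np.int64)
    for f in features:
        d = np.clip(np.digitize(f, np.linspace(-1, 1, bins + 1)) - 1, 0, bins - 1)
        idx = idx * bins + d
    sums = np.bincount(idx, weights=target, minlength=bins ** len(features))
    cnts = np.bincount(idx, minlength=bins ** len(features))
    cond_mean = sums / np.maximum(cnts, 1)
    return np.mean((target - cond_mean[idx]) ** 2)

mse_a = optimal_estimator_mse([a], tau)      # candidate parameter set {a}
mse_ab = optimal_estimator_mse([a, b], tau)  # candidate parameter set {a, b}
print(mse_a, mse_ab)
```

Here the set {a, b} attains an error near the noise floor while {a} alone cannot; in the paper's procedure, this kind of comparison guides the choice of model inputs before any ANN is trained.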

  3. The complementary relationship in estimation of regional evapotranspiration: An enhanced Advection-Aridity model

    Science.gov (United States)

    Michael T. Hobbins; Jorge A. Ramirez; Thomas C. Brown

    2001-01-01

    Long-term monthly evapotranspiration estimates from Brutsaert and Stricker’s Advection-Aridity model were compared with independent estimates of evapotranspiration derived from long-term water balances for 139 undisturbed basins across the conterminous United States. On an average annual basis for the period 1962-1988 the original model, which uses a Penman wind...

  4. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations, while, in others, indirect approaches led to more accurate estimations. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied on radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.

  5. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates.

  6. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
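The two ISEM records above outline a concrete recipe: probe the black-box simulator with an ensemble of random perturbations, recover the sensitivity matrix from the resulting directional derivatives by least squares, and take a Tikhonov-regularized Gauss-Newton step. A toy sketch with an invented five-output "simulator" (not a flow model):

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(m):
    """Black-box 'simulator' stand-in (illustrative): nonlinear map R^3 -> R^5."""
    A = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.5],
                  [0.5, 0.0, 1.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 0.5, 1.0]])
    return A @ m + 0.1 * np.sin(m).sum()

m_true = np.array([1.0, -0.5, 2.0])
d_obs = forward(m_true)

m = np.zeros(3)
lam, n_ens, eps = 1e-3, 8, 1e-2
for _ in range(20):
    # Ensemble of directional derivatives: perturb m in random directions and
    # difference the simulator output -- no adjoint code required.
    dM = eps * rng.normal(size=(n_ens, 3))
    dD = np.array([forward(m + dm) - forward(m) for dm in dM])
    # Least-squares estimate of the sensitivity matrix S (dD ~ dM @ S.T).
    S = np.linalg.lstsq(dM, dD, rcond=None)[0].T
    # Tikhonov-regularized Gauss-Newton update.
    r = d_obs - forward(m)
    m = m + np.linalg.solve(S.T @ S + lam * np.eye(3), S.T @ r)

print(m)
```

The recovered parameters approach the truth even though the simulator is only ever called as a black box, which is the practical appeal of the ensemble approach over adjoint-based gradients.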

  7. Modelling rainfall interception by forests: a new method for estimating the canopy storage capacity

    Science.gov (United States)

    Pereira, Fernando; Valente, Fernanda; Nóbrega, Cristina

    2015-04-01

    Evaporation of rainfall intercepted by forests is usually an important part of a catchment water balance. Recognizing the importance of interception loss, several models of the process have been developed. A key parameter of these models is the canopy storage capacity (S), commonly estimated by the so-called Leyton method. However, this method is somewhat subjective in the selection of the storms used to derive S, which is particularly critical when throughfall is highly variable in space. To overcome these problems, a new method for estimating S was proposed in 2009 by Pereira et al. (Agricultural and Forest Meteorology, 149: 680-688), which uses information from a larger number of storms, is less sensitive to throughfall spatial variability and is consistent with the formulation of the two most widely used rainfall interception models, the Gash analytical model and the Rutter model. However, this method has a drawback: it does not account for stemflow (Sf). To allow a wider use of this methodology, we propose now a revised version which makes the estimation of S independent of the importance of stemflow. For the application of this new version we only need to establish a linear regression of throughfall vs. gross rainfall using data from all storms large enough to saturate the canopy. Two of the parameters used by the Gash and Rutter models, pd (the drainage partitioning coefficient) and S, are then derived from the regression coefficients: pd is estimated first, allowing the subsequent derivation of S; if Sf is not considered, S can be estimated by setting pd = 0. This new method was tested using data from a eucalyptus plantation, a maritime pine forest and a traditional olive grove, all located in Central Portugal. For both the eucalyptus and pine forests, pd and S estimated by this new approach were comparable to the values derived in previous studies using the standard procedures.
In the case of the traditional olive grove, the estimates obtained by this methodology
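The regression-based idea can be illustrated with the classical Leyton-style reading, which the cited method generalizes: regress throughfall on gross rainfall over storms large enough to saturate the canopy and, neglecting stemflow (pd = 0), take the canopy storage capacity S from the intercept. The storm values below are invented; the exact coefficient algebra of the revised method is in the cited paper.

```python
import numpy as np

# Illustrative storm data (mm): gross rainfall Pg and throughfall Tf for
# storms large enough to saturate the canopy.
Pg = np.array([8.0, 12.0, 15.0, 20.0, 25.0, 32.0, 40.0])
Tf = np.array([5.1, 8.6, 11.0, 15.4, 19.8, 26.0, 33.1])

# Linear regression Tf = m*Pg + c over the saturating storms.
m, c = np.polyfit(Pg, Tf, 1)

# Neglecting stemflow (pd = 0), a Leyton-style estimate takes S as the
# negative intercept of the regression line (mm).
S = -c
print(f"slope = {m:.3f}, intercept = {c:.2f} mm, S = {S:.2f} mm")
```

Using all saturating storms in one regression, rather than a hand-picked envelope of points, is what removes the subjectivity the abstract attributes to the original Leyton method.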

  8. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
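Analogy-based estimation of the kind evaluated above reduces, in its simplest form, to a k-nearest-neighbor lookup in a normalized feature space. A toy sketch with invented project features and efforts (not NASA data):

```python
import numpy as np

def knn_effort_estimate(history_X, history_y, project, k=3):
    """Analogy-based estimate: mean effort of the k most similar past
    projects, with features standardized so no single one dominates."""
    mu, sd = history_X.mean(axis=0), history_X.std(axis=0)
    Z = (history_X - mu) / sd
    z = (project - mu) / sd
    nearest = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
    return history_y[nearest].mean()

# Hypothetical history: [KLOC, team size, reuse %] -> effort (person-months).
X = np.array([[10, 4, 20], [50, 12, 10], [25, 6, 40],
              [80, 20, 5], [15, 5, 30], [60, 15, 15]], float)
y = np.array([30.0, 180.0, 70.0, 320.0, 45.0, 210.0])

est = knn_effort_estimate(X, y, np.array([55, 13, 12], float))
print(est)
```

Clustering-based variants replace the mean over neighbors with the centroid effort of the cluster the new project falls into; both sidestep the functional-form assumptions of parametric models like COCOMO II.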

  9. Ability of LANDSAT-8 Oli Derived Texture Metrics in Estimating Aboveground Carbon Stocks of Coppice Oak Forests

    Science.gov (United States)

    Safari, A.; Sohrabi, H.

    2016-06-01

    The role of forests as a carbon reservoir has prompted the need for timely and reliable estimation of aboveground carbon stocks. Since measurement of aboveground carbon stocks of forests is a destructive, costly and time-consuming activity, aerial and satellite remote sensing techniques have attracted much attention in this field. Although aerial data have proved highly accurate for predicting aboveground carbon stocks, there are challenges related to high acquisition costs, small area coverage, and limited availability of these data. These challenges are more critical for non-commercial forests located in low-income countries. The Landsat program provides repetitive acquisition of high-resolution multispectral data, which are freely available. The aim of this study was to assess the potential of multispectral Landsat 8 Operational Land Imager (OLI) derived texture metrics in quantifying aboveground carbon stocks of coppice Oak forests in the Zagros Mountains, Iran. We used four different window sizes (3×3, 5×5, 7×7, and 9×9) and four different offsets ([0,1], [1,1], [1,0], and [1,-1]) to derive nine texture metrics (angular second moment, contrast, correlation, dissimilarity, entropy, homogeneity, inverse difference, mean, and variance) from four bands (blue, green, red, and infrared). In total, 124 sample plots in two different forests were measured and carbon was calculated using species-specific allometric models. Stepwise regression analysis was applied to estimate biomass from the derived metrics. Results showed that, in general, larger window sizes for deriving texture metrics resulted in models with better fit. In addition, the correlation of the spectral bands for deriving texture metrics in regression models was ranked as b4>b3>b2>b5. The best offset was [1,-1]. Among the different metrics, mean and entropy entered most of the regression models. Overall, different models based on derived texture metrics
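The texture metrics named above are derived from a gray-level co-occurrence matrix (GLCM) computed per window and offset. A minimal NumPy sketch for one window (the quantization scheme and window contents are illustrative; production work would use a library implementation):

```python
import numpy as np

def glcm_metrics(window, levels=8, offset=(1, -1)):
    """Normalized gray-level co-occurrence matrix for one offset, plus a
    few Haralick-style metrics (contrast, entropy, GLCM mean)."""
    q = np.floor(window / window.max() * (levels - 1)).astype(int)
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[q[r, c], q[r2, c2]] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    glcm_mean = np.sum(P * i)
    return contrast, entropy, glcm_mean

rng = np.random.default_rng(5)
smooth = np.tile(np.arange(9.0), (9, 1))   # gradual 9x9 window
rough = rng.uniform(0, 9, (9, 9))          # speckled 9x9 window

contrast_s, entropy_s, mean_s = glcm_metrics(smooth)
contrast_r, entropy_r, mean_r = glcm_metrics(rough)
print((contrast_s, entropy_s), (contrast_r, entropy_r))
```

A gradual canopy texture concentrates co-occurrences near the GLCM diagonal (low contrast, low entropy), while a speckled one spreads them out, which is why these metrics can separate stand structures that raw band values cannot.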

  10. On algebraic time-derivative estimation and deadbeat state reconstruction

    DEFF Research Database (Denmark)

    Reger, Johann; Jouffroy, Jerome

    2009-01-01

    This paper places into perspective the so-called algebraic time-derivative estimation method recently introduced by Fliess and co-authors with standard results from linear statespace theory for control systems. In particular, it is shown that the algebraic method can essentially be seen...

  11. Parameter estimation and analysis of an automotive heavy-duty SCR catalyst model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2017-01-01

    A single channel model for a heavy-duty SCR catalyst was derived based on first principles. The model considered heat and mass transfer between the channel gas phase and the wash coat phase. The parameters of the kinetic model were estimated using bench-scale monolith isothermal data. Validation ...

  12. Comparison between remote sensing and a dynamic vegetation model for estimating terrestrial primary production of Africa.

    Science.gov (United States)

    Ardö, Jonas

    2015-12-01

    Africa is an important part of the global carbon cycle. It is also a continent facing potential problems due to increasing resource demand in combination with climate change-induced changes in resource supply. Quantifying the pools and fluxes constituting the terrestrial African carbon cycle is a challenge, because of uncertainties in meteorological driver data, lack of validation data, and potentially uncertain representation of important processes in major ecosystems. In this paper, terrestrial primary production estimates derived from remote sensing and a dynamic vegetation model are compared and quantified for major African land cover types. Continental gross primary production estimates derived from remote sensing were higher than corresponding estimates derived from a dynamic vegetation model. However, estimates of continental net primary production from remote sensing were lower than corresponding estimates from the dynamic vegetation model. Variation was found among land cover classes, and the largest differences in gross primary production were found in the evergreen broadleaf forest. Average carbon use efficiency (NPP/GPP) was 0.58 for the vegetation model and 0.46 for the remote sensing method. Validation versus in situ data of aboveground net primary production revealed significant positive relationships for both methods. A combination of the remote sensing method with the dynamic vegetation model did not strongly affect this relationship. Observed significant differences in estimated vegetation productivity may have several causes, including model design and temperature sensitivity. Differences in carbon use efficiency reflect underlying model assumptions. Integrating the realistic process representation of dynamic vegetation models with the high-resolution observational strength of remote sensing may support realistic estimation of components of the carbon cycle and enhance resource monitoring, provided suitable validation data are available.

  13. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  14. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  15. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
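The AIC/BIC model-selection step described in the two records above can be sketched generically: score competing hypotheses by a Gaussian log-likelihood computed from least-squares residuals, penalized by parameter count. The data below are invented (a one-dimensional trend standing in for floorplan parameters), not the paper's indoor observations.

```python
import numpy as np

rng = np.random.default_rng(6)

def aic_bic(residuals, k):
    """Gaussian AIC/BIC from least-squares residuals (k = number of
    estimated parameters, including the noise variance)."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Invented observations generated by a linear trend; compare a
# constant-mean model hypothesis against a linear-model hypothesis.
x = np.linspace(0, 10, 60)
y = 3.0 + 0.4 * x + rng.normal(0, 0.3, x.size)

res_const = y - y.mean()
res_lin = y - np.polyval(np.polyfit(x, y, 1), x)

aic_const, bic_const = aic_bic(res_const, k=2)
aic_lin, bic_lin = aic_bic(res_lin, k=3)
print((aic_const, bic_const), (aic_lin, bic_lin))
```

Both criteria prefer the linear hypothesis here despite its extra parameter; BIC's heavier log(n) penalty matters when competing floorplan hypotheses differ more in complexity than in fit.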

  16. Estimation of biogenic emissions with satellite-derived land use and land cover data for air quality modeling of Houston-Galveston ozone nonattainment area.

    Science.gov (United States)

    Byun, Daewon W; Kim, Soontae; Czader, Beata; Nowak, David; Stetson, Stephen; Estes, Mark

    2005-06-01

    The Houston-Galveston Area (HGA) is one of the most severe ozone nonattainment regions in the US. To study the effectiveness of controlling anthropogenic emissions to mitigate regional ozone nonattainment problems, it is necessary to utilize adequate datasets describing the environmental conditions that influence the photochemical reactivity of the ambient atmosphere. Compared to the anthropogenic emissions from point and mobile sources, there are large uncertainties in the locations and amounts of biogenic emissions. For regional air quality modeling applications, biogenic emissions are not directly measured but are usually estimated with meteorological data such as photosynthetically active solar radiation, surface temperature, land type, and vegetation databases. In this paper, we characterize these meteorological input parameters and two different land use land cover datasets available for HGA: the conventional biogenic vegetation/land use data and satellite-derived high-resolution land cover data. We describe the procedures used for the estimation of biogenic emissions with the satellite-derived land cover data and leaf mass density information. Air quality model simulations were performed using both the original and the new biogenic emissions estimates. The results showed that there were considerable uncertainties in biogenic emissions inputs. Ozone predictions were subsequently affected by up to 10 ppb, but the magnitudes and locations of peak ozone varied from day to day depending on the upwind or downwind positions of the biogenic emission sources relative to the anthropogenic NOx and VOC sources. Although the assessment had limitations, such as heterogeneity in the spatial resolutions, the study highlighted the significance of biogenic emissions uncertainty for air quality predictions. However, the study did not allow extrapolation of the directional changes in air quality corresponding to the changes in LULC because the two datasets were based on vastly different

  17. The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates

    Science.gov (United States)

    Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.

    2018-02-01

    We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies, and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1″ × 1″ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or in the mountains, and are commensurate with external estimates.
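The remove-restore error budget above combines errors from the global model with errors propagated from local data. Under the assumption that the component errors are independent, the one-sigma grids add in quadrature; the sketch below uses made-up numbers, not the model's actual figures.

```python
import numpy as np

# Hypothetical 1-sigma error grids (metres) for the two remove-restore
# components: the global model (EGM2008) and the residual signal
# propagated through the modified Stokes integral. Values are
# illustrative only.
sigma_egm = np.full((3, 3), 0.045)       # 45 mm from the global model
sigma_residual = np.full((3, 3), 0.030)  # 30 mm from local data

# Assuming independence, the combined quasigeoid error adds in quadrature
sigma_total = np.sqrt(sigma_egm**2 + sigma_residual**2)
```

With these inputs the combined error is about 54 mm per cell, i.e. the larger component dominates, which is why reducing the residual-term error alone yields only modest gains.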

  18. Spatio-temporal Root Zone Soil Moisture Estimation for the Indo-Gangetic Basin from Satellite-Derived (AMSR-2 and SMOS) Surface Soil Moisture

    Science.gov (United States)

    Sure, A.; Dikshit, O.

    2017-12-01

    Root zone soil moisture (RZSM) is an important element in hydrology and agriculture. Estimating RZSM provides insight into selecting appropriate crops for specific soil conditions (soil type, bulk density, etc.). RZSM governs various vadose zone phenomena and subsequently affects groundwater processes. Although various satellite sensors estimate surface soil moisture at different spatial and temporal resolutions, estimating soil moisture at the root zone level for the Indo-Gangetic basin, with its complex, heterogeneous environment, is quite challenging. This study aims to estimate RZSM and understand its variation across the Indo-Gangetic basin with changing land use/land cover, topography, crop cycles, soil properties, temperature, and precipitation patterns, using two satellite-derived soil moisture datasets acquired at distinct frequencies with different principles of acquisition. The two surface soil moisture datasets are derived from the AMSR-2 (6.9 GHz, C band) and SMOS (1.4 GHz, L band) passive microwave sensors at coarse spatial resolution. The Soil Water Index (SWI), which accounts for soil moisture below the surface, is derived from a theoretical two-layered water balance model and helps ascertain soil moisture in the vadose zone. This index is evaluated against the widely used modelled soil moisture dataset of GLDAS-NOAH, version 2.1, extending the use of modelled soil moisture wherever ground data are unavailable. The coupling between surface soil moisture and RZSM is analysed for two years (2015-16) by defining a parameter T, the characteristic time length. The study demonstrates that the optimal value of T for estimating SWI at a given location is a function of various factors such as land, meteorological, and agricultural characteristics.
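The SWI with characteristic time length T is commonly computed with a recursive exponential filter (the formulation below follows the widely used Wagner/Albergel recursion; the record does not state which variant the authors used, so treat this as an assumed sketch):

```python
import math

def soil_water_index(times, ssm, T):
    """Recursive exponential filter converting a surface soil moisture
    (SSM) series into the Soil Water Index. T is the characteristic
    time length, in the same units as `times`. The gain shrinks as
    observations accumulate, so the SWI is a smoothed, lagged SSM."""
    swi = [ssm[0]]
    gain = 1.0
    for i in range(1, len(ssm)):
        dt = times[i] - times[i - 1]
        gain = gain / (gain + math.exp(-dt / T))
        swi.append(swi[-1] + gain * (ssm[i] - swi[-1]))
    return swi

# Daily surface soil moisture (volumetric fraction), with T = 5 days
times = list(range(10))
ssm = [0.30, 0.28, 0.35, 0.33, 0.31, 0.30, 0.29, 0.28, 0.27, 0.26]
swi = soil_water_index(times, ssm, T=5.0)
```

A larger T weights older observations more heavily, producing a smoother, more lagged profile; finding the T that best matches root-zone observations at a site is exactly the calibration problem the record describes.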

  19. Generation of insulin-producing cells from human bone marrow-derived mesenchymal stem cells: comparison of three differentiation protocols.

    Science.gov (United States)

    Gabr, Mahmoud M; Zakaria, Mahmoud M; Refaie, Ayman F; Khater, Sherry M; Ashamallah, Sylvia A; Ismail, Amani M; El-Badri, Nagwa; Ghoneim, Mohamed A

    2014-01-01

    Many protocols were utilized for directed differentiation of mesenchymal stem cells (MSCs) to form insulin-producing cells (IPCs). We compared the relative efficiency of three differentiation protocols. Human bone marrow-derived MSCs (HBM-MSCs) were obtained from three insulin-dependent type 2 diabetic patients. Differentiation into IPCs was carried out by three protocols: conophylline-based (one-step protocol), trichostatin-A-based (two-step protocol), and β-mercaptoethanol-based (three-step protocol). At the end of differentiation, cells were evaluated by immunolabeling for insulin production, expression of pancreatic endocrine genes, and release of insulin and c-peptide in response to increasing glucose concentrations. By immunolabeling, the proportion of generated IPCs was modest (≃3%) in all three protocols. All relevant pancreatic endocrine genes, insulin, glucagon, and somatostatin, were expressed. There was a stepwise increase in insulin and c-peptide release in response to glucose challenge, but the released amounts were low when compared with those of pancreatic islets. The yield of functional IPCs following directed differentiation of HBM-MSCs was modest and was comparable among the three tested protocols. Protocols for directed differentiation of MSCs need further optimization in order to be clinically meaningful. To this end, addition of an extracellular matrix and/or a suitable template should be attempted.

  20. Numerical estimation of aircrafts' unsteady lateral-directional stability derivatives

    Directory of Open Access Journals (Sweden)

    Maričić N.L.

    2006-01-01

    A technique for predicting steady and oscillatory aerodynamic loads on a general configuration has been developed. The prediction is based on the Doublet-Lattice Method, Slender Body Theory and the Method of Images. The chordwise and spanwise load distributions on lifting surfaces and longitudinal bodies (in the horizontal and vertical planes) are determined. The configuration may be composed of an assemblage of lifting surfaces (with control surfaces) and bodies (with circular cross sections and a longitudinal variation of radius). Loadings predicted by this method are used to calculate (estimate) steady and unsteady (dynamic) lateral-directional stability derivatives. A short outline of the methods used is given in [1], [2], [3], [4] and [5]. Applying the described methodology, the software DERIV was developed. The results obtained from DERIV are compared to NASTRAN examples HA21B and HA21D from [4]. In the first example (HA21B), the jet transport wing (BAH wing) is in steady roll and lateral stability derivatives are determined. In the second example (HA21D), lateral-directional stability derivatives are calculated for a forward-swept-wing (FSW) airplane in antisymmetric quasi-steady maneuvers. Acceptable agreement is achieved between the results from [4] and DERIV.

  1. Estimating Error in SRTM Derived Planform of a River in Data-poor Region and Subsequent Impact on Inundation Modeling

    Science.gov (United States)

    Bhuyian, M. N. M.; Kalyanapu, A. J.

    2017-12-01

    Accurate representation of river planform is critical for hydrodynamic modeling. Digital elevation models (DEMs) often fall short in accurately representing river planform because they show the ground as it was during data acquisition, whereas water bodies (i.e. rivers) change their size and shape over time. River planforms are most dynamic in undisturbed riverine systems (mostly located in data-poor regions), where remote sensing is the most convenient source of data. For many such regions, the Shuttle Radar Topographic Mission (SRTM) is the best available source of DEMs. Therefore, the objective of this study is to estimate the error in the SRTM-derived planform of a river in a data-poor region and the subsequent impact on inundation modeling. Analysis of Landsat imagery, the SRTM DEM, and remotely sensed soil data was used to classify the planform activity in a 185 km stretch of the Kushiyara River in Bangladesh. In the last 15 years, the river eroded about 4.65 square km and deposited 7.55 square km of area. Therefore, the current (year 2017) river planform is significantly different from the SRTM water body data, which represent the time of SRTM data acquisition (the year 2000). The rate of planform shifting increased significantly as the river traveled downstream, so the study area was divided into three reaches (R1, R2, and R3) from upstream to downstream. Channel slope and meandering ratio changed from 2×10^-7 and 1.64 in R1 to 1×10^-4 and 1.45 in R3. However, more than 60% of the erosion and deposition occurred in R3, where a high percentage of Fluvisols (98%) and coarse particles (21%) were present in the vicinity of the river. This indicates that errors in SRTM water body data (due to planform shifting) could be correlated with the physical properties (i.e. slope, soil type, meandering ratio, etc.) of the riverine system. The correlations would help in zoning the activity of a riverine system and in determining a timeline to update the DEM for a given region. Additionally, to estimate the

  2. Assessing the external validity of model-based estimates of the incidence of heart attack in England: a modelling study

    Directory of Open Access Journals (Sweden)

    Peter Scarborough

    2016-11-01

    Abstract Background The DisMod II model is designed to estimate epidemiological parameters of diseases where measured data are incomplete, and has been used to provide estimates of disease incidence for the Global Burden of Disease study. We assessed the external validity of the DisMod II model by comparing modelled estimates of the incidence of first acute myocardial infarction (AMI) in England in 2010 with estimates derived from a linked dataset of hospital records and death certificates. Methods Inputs for DisMod II were prevalence rates of ever having had an AMI taken from a population health survey, and total mortality rates and AMI mortality rates taken from death certificates. By definition, remission rates were zero. We estimated first AMI incidence in an external dataset from England in 2010 using a linked dataset including all hospital admissions and death certificates since 1998. 95% confidence intervals were derived around estimates from the external dataset and DisMod II estimates based on sampling variance and reported uncertainty in prevalence estimates, respectively. Results Estimates of the incidence rate for the whole population were higher in the DisMod II results than in the external dataset (+54% for men and +26% for women). Age-specific results showed that the DisMod II results over-estimated incidence for all but the oldest age groups. Confidence intervals for the DisMod II and external dataset estimates did not overlap for most age groups. Conclusion By comparison with AMI incidence rates in England, DisMod II did not achieve external validity for age-specific incidence rates, but did provide global estimates of incidence of similar magnitude to measured estimates. The model should be used with caution when estimating age-specific incidence rates.

  3. Uncertainty estimation of the velocity model for the TrigNet GPS network

    Science.gov (United States)

    Hackl, Matthias; Malservisi, Rocco; Hugentobler, Urs; Wonnacott, Richard

    2010-05-01

    Satellite-based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) plus noise. It has been shown that models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series, and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectral analyses and maximum likelihood estimation is quite demanding and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan Variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Comparisons with synthetic data show that the noise can be represented quite well by a power-law model in combination with a seasonal signal, in agreement with previous studies.
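The Allan-variance-style analysis mentioned above can be sketched in a few lines. The non-overlapping form below is the textbook definition (half the mean squared difference of successive cluster averages); for white noise it falls roughly as 1/m with cluster size m, while flicker noise or random walk would flatten or rise, which is what makes it a useful noise diagnostic for position time series.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of series x for cluster size m:
    half the mean squared difference of successive cluster averages."""
    n = len(x) // m
    means = x[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# White noise: the Allan variance should scale roughly as 1/m
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 4096)
avar_1 = allan_variance(x, 1)    # ~ 1
avar_64 = allan_variance(x, 64)  # ~ 1/64
```

Plotting avar against m on log-log axes and reading off the slope is the standard way to classify the dominant noise type before assigning velocity uncertainties.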

  4. Application of the Health Belief Model to customers' use of menu labels in restaurants.

    Science.gov (United States)

    Jeong, Jin-Yi; Ham, Sunny

    2018-04-01

    Some countries require the provision of menu labels on restaurant menus to fight the increasing prevalence of obesity and related chronic diseases. This study views customers' use of menu labels as a preventive health behavior and applies the Health Belief Model (HBM) with the aim of determining the health belief factors that influence customers' use of menu labels. A self-administered survey was distributed for data collection. Responses were collected from 335 restaurant customers who experienced menu labels in restaurants within three months prior to the survey. The results of a structural equation model showed that all the HBM variables (perceived threats, perceived benefits, and perceived barriers of using menu labels) positively affected the customers' use of menu labels. Perceived threats were influenced by cues to action and cues to action had an indirect influence on menu label use through perceived threats. In conclusion, health beliefs were good predictors of menu label use on restaurant menus. This study validated the application of the HBM to menu labeling in restaurants, and its findings could offer guidelines for the industry and government in developing strategies to expand the use of menu labels among the public. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A critical view on temperature modelling for application in weather derivatives markets

    International Nuclear Information System (INIS)

    Šaltytė Benth, Jūratė; Benth, Fred Espen

    2012-01-01

    In this paper we present a stochastic model for daily average temperature. The model contains seasonality, a low-order autoregressive component and a variance describing the heteroskedastic residuals. The model is estimated on daily average temperature records from Stockholm (Sweden). By comparing the proposed model with the popular model of Campbell and Diebold (2005), we point out some important issues to be addressed when modelling temperature for application in weather derivatives markets.
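A model of this family (seasonal mean plus an autoregressive residual) can be estimated in two stages: fit the seasonal cycle by least squares, then fit an AR coefficient to the deseasonalized residuals. The sketch below uses synthetic data and an AR(1) residual; it is an illustration of the model class, not the authors' estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(3 * 365)

# Synthetic daily average temperature: mean + annual cycle + AR(1) noise
true_phi = 0.7
eps = rng.normal(0.0, 1.5, days.size)
noise = np.zeros(days.size)
for t in range(1, days.size):
    noise[t] = true_phi * noise[t - 1] + eps[t]
temp = 8.0 + 10.0 * np.sin(2 * np.pi * days / 365.25) + noise

# Stage 1: least-squares fit of the seasonal component
X = np.column_stack([np.ones_like(days, dtype=float),
                     np.sin(2 * np.pi * days / 365.25),
                     np.cos(2 * np.pi * days / 365.25)])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid = temp - X @ beta

# Stage 2: AR(1) coefficient of the deseasonalized residuals
phi_hat = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
```

Extending the residual model with a seasonal (heteroskedastic) variance, as the paper does, changes the pricing of temperature derivatives noticeably relative to a constant-variance fit.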

  6. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardgree, Stuart; Strand, Eva

    2013-01-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and to guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76). Uncertainty in the carbon storage estimates was examined with a Monte Carlo approach that addressed the major error sources. The ranges predicted by the uncertainty analysis for mean individual-tree aboveground woody carbon and its standard deviation were 0.35-143.6 kg and 0.5-1.25 kg, respectively. Later successional phases of woody encroachment had, on average, twice the aboveground carbon of earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment, where intensive treatments are most effective.
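A Monte Carlo uncertainty analysis of this kind repeatedly perturbs the inputs (here, lidar height error and allometric-coefficient error) and re-evaluates the carbon total. The allometric form, coefficients, and error magnitudes below are all invented for illustration; only the propagation pattern reflects the record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lidar-derived tree heights (m) and crown areas (m^2)
heights = rng.uniform(2.0, 8.0, 500)
crowns = rng.uniform(1.0, 12.0, 500)

# Illustrative allometric model: biomass_kg = a * height^b * crown^c
a, b, c = 1.2, 1.5, 0.8

def total_carbon(h, cr, a_coef, carbon_fraction=0.5):
    """Stand-level aboveground carbon from per-tree allometry."""
    return carbon_fraction * np.sum(a_coef * h**b * cr**c)

# Monte Carlo: perturb heights (lidar error) and the model coefficient a
n_draws = 2000
draws = np.empty(n_draws)
for i in range(n_draws):
    h_err = heights + rng.normal(0.0, 0.5, heights.size)  # 0.5 m lidar error
    a_err = rng.normal(a, 0.1)                            # coefficient error
    draws[i] = total_carbon(np.clip(h_err, 0.1, None), crowns, a_err)

c_mean, c_sd = draws.mean(), draws.std()  # estimate and its 1-sigma spread
```

Because per-tree height errors largely average out over hundreds of trees, the shared allometric-coefficient error dominates the stand-level spread; identifying which error source dominates is the main payoff of the Monte Carlo approach.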

  7. On the representation of aerosol activation and its influence on model-derived estimates of the aerosol indirect effect

    Science.gov (United States)

    Rothenberg, Daniel; Avramov, Alexander; Wang, Chien

    2018-06-01

    Interactions between aerosol particles and clouds contribute a great deal of uncertainty to the scientific community's understanding of anthropogenic climate forcing. Aerosol particles serve as the nucleation sites for cloud droplets, establishing a direct linkage between anthropogenic particulate emissions and clouds in the climate system. To resolve this linkage, the community has developed parameterizations of aerosol activation which can be used in global climate models to interactively predict cloud droplet number concentrations (CDNCs). However, different activation schemes can exhibit different sensitivities to aerosol perturbations in different meteorological or pollution regimes. To assess the impact these different sensitivities have on climate forcing, we have coupled three different core activation schemes and variants with the CESM-MARC (two-Moment, Multi-Modal, Mixing-state-resolving Aerosol model for Research of Climate (MARC) coupled with the National Center for Atmospheric Research's (NCAR) Community Earth System Model (CESM; version 1.2)). Although the model produces a reasonable present-day CDNC climatology when compared with observations regardless of the scheme used, ΔCDNCs between the present and preindustrial era regionally increase by over 100 % in zonal mean when using the most sensitive parameterization. These differences in activation sensitivity may lead to a different evolution of the model meteorology, and ultimately to a spread of over 0.8 W m-2 in global average shortwave indirect effect (AIE) diagnosed from the model, a range which is as large as the inter-model spread from the AeroCom intercomparison. Model-derived AIE strongly scales with the simulated preindustrial CDNC burden, and those models with the greatest preindustrial CDNC tend to have the smallest AIE, regardless of their ΔCDNC. This suggests that present-day evaluations of aerosol-climate models may not provide useful constraints on the magnitude of the AIE, which

  8. Deriving albedo maps for HAPEX-Sahel from ASAS data using kernel-driven BRDF models

    Directory of Open Access Journals (Sweden)

    P. Lewis

    1999-01-01

    This paper describes the application and testing of a method for deriving spatial estimates of albedo from multi-angle remote sensing data. Linear kernel-driven models of surface bi-directional reflectance have been inverted against high-spatial-resolution multi-angular, multi-spectral airborne data of the principal cover types within the HAPEX-Sahel study site in Niger, West Africa. The airborne data were obtained with the NASA Airborne Solid-state Imaging Spectrometer (ASAS) instrument, flown in Niger in September and October 1992. The maps of model parameters produced are used to estimate integrated reflectance properties related to spectral albedo. Broadband albedo has been estimated from this by weighting the spectral albedo for each pixel within the map as a function of the appropriate spectral solar irradiance and the proportion of direct and diffuse illumination. Partial validation of the results was performed by comparing ASAS reflectance and derived directional-hemispherical reflectance with simulations of a millet canopy made with a complex geometric canopy reflectance model, the Botanical Plant Modelling System (BPMS). Both were found to agree well in magnitude. Broadband albedo values derived from the ASAS data were compared with ground-based (point sample) albedo measurements and found to agree extremely well. These results indicate that the linear kernel-driven modelling approach, which is to be used operationally to produce global 16-day, 1 km albedo maps from forthcoming NASA Earth Observing System spaceborne data, is both sound and practical for the estimation of angle-integrated spectral reflectance quantities related to albedo. Results for broadband albedo are dependent on spectral sampling and on obtaining the correct spectral weightings.
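Kernel-driven BRDF models are linear in their parameters, R = f_iso + f_vol·K_vol + f_geo·K_geo, so inversion against multi-angle observations reduces to linear least squares. In the sketch below the kernel values are random stand-ins for Ross-thick/Li-sparse kernels evaluated at each viewing geometry, and the weights are invented; only the inversion mechanics reflect the method.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic multi-angle observations of a linear kernel-driven BRDF model.
# K_vol and K_geo stand in for volumetric and geometric kernels evaluated
# at the view/illumination geometry of each of the 12 observations.
n_obs = 12
K_vol = rng.uniform(-0.2, 0.6, n_obs)
K_geo = rng.uniform(-1.5, -0.2, n_obs)

f_true = np.array([0.25, 0.15, 0.05])  # f_iso, f_vol, f_geo (illustrative)
A = np.column_stack([np.ones(n_obs), K_vol, K_geo])
refl = A @ f_true + rng.normal(0.0, 0.005, n_obs)  # noisy reflectances

# Model inversion: linear least squares for the kernel weights
f_hat, *_ = np.linalg.lstsq(A, refl, rcond=None)
```

Once the weights are retrieved, angle-integrated quantities such as directional-hemispherical reflectance follow by replacing the kernel values with their angular integrals, which is what makes the linear formulation practical for operational albedo products.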

  9. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...
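For context, the ΔΔCt quantity that the mixed-model framework estimates marginally is, in its classic (Livak) form, a difference of differences of cycle thresholds, with fold change 2^(-ΔΔCt). The cycle-threshold values below are invented for illustration.

```python
def ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Classic ddCt: normalize the target gene's Ct against a reference
    gene in each condition, difference the two, and convert to a fold
    change assuming perfect doubling per PCR cycle."""
    d_treated = ct_target_treated - ct_ref_treated
    d_control = ct_target_control - ct_ref_control
    ddc = d_treated - d_control
    fold_change = 2.0 ** (-ddc)
    return ddc, fold_change

# Example cycle thresholds (hypothetical)
ddc, fold = ddct(ct_target_treated=24.0, ct_ref_treated=18.0,
                 ct_target_control=26.0, ct_ref_control=18.5)
```

The mixed-model approach of the record replaces this plug-in calculation with population-level estimates that carry proper standard errors across biological replicates.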

  10. Real-Time Algebraic Derivative Estimations Using a Novel Low-Cost Architecture Based on Reconfigurable Logic

    Science.gov (United States)

    Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos

    2014-01-01

    Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, just to name a few of them. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. Besides, in this work, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033
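The record's algebraic estimator has a closed-form integral formulation not reproduced here; a simpler estimator in the same spirit (noise-robust differentiation over a sliding window, rather than pointwise finite differences) fits a local first-order polynomial by least squares. This is an assumed illustrative stand-in, not the authors' algorithm.

```python
import numpy as np

def window_derivative(t, y, width):
    """Estimate dy/dt at the trailing edge of each sliding window by
    fitting a local line with least squares: a simple smoothing
    differentiator that suppresses measurement noise."""
    d = np.full(len(y), np.nan)  # no estimate until a full window exists
    for i in range(width - 1, len(y)):
        tt = t[i - width + 1 : i + 1]
        yy = y[i - width + 1 : i + 1]
        slope, _ = np.polyfit(tt, yy, 1)
        d[i] = slope
    return d

# Noisy ramp y = 2t: the estimated derivative should hover around 2
rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 200)
y = 2.0 * t + rng.normal(0.0, 0.01, t.size)
dy = window_derivative(t, y, width=25)
```

As with the algebraic estimators, there is a window-length trade-off: longer windows average out more noise but lag fast signal changes, which is why the record runs two estimators in an overlapping fashion.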

  11. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element of the alphabet. This model aims to answer two questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated. To answer them, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modelling internet navigation patterns.
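The core idea (states that share a transition law collapse into one part, reducing the parameter count) can be illustrated with a naive sketch: estimate the empirical transition matrix and group states whose rows are close in total variation. This is a simplification of the paper's consistent selection strategy, with an arbitrary tolerance in place of its principled criterion.

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Empirical transition probabilities from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)

# A 3-state chain where states 0 and 1 share the same transition law,
# i.e. the true partition is {{0, 1}, {2}}
P_true = np.array([[0.1, 0.1, 0.8],
                   [0.1, 0.1, 0.8],
                   [0.5, 0.4, 0.1]])
rng = np.random.default_rng(5)
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(3, p=P_true[seq[-1]]))

P_hat = transition_matrix(seq, 3)

# Naive grouping: states whose estimated rows are within a total-variation
# tolerance are placed in the same part of the partition
def same_part(P, i, j, tol=0.05):
    return 0.5 * np.abs(P[i] - P[j]).sum() < tol

merged_01 = same_part(P_hat, 0, 1)  # expected: states 0 and 1 merge
merged_02 = same_part(P_hat, 0, 2)  # expected: state 2 stays separate
```

Collapsing states 0 and 1 into one part halves the number of free rows, which is exactly the parameter saving the Partition Markov Model formalizes.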

  12. Assimilating satellite-based canopy height within an ecosystem model to estimate aboveground forest biomass

    Science.gov (United States)

    Joetzjer, E.; Pillet, M.; Ciais, P.; Barbier, N.; Chave, J.; Schlund, M.; Maignan, F.; Barichivich, J.; Luyssaert, S.; Hérault, B.; von Poncet, F.; Poulter, B.

    2017-07-01

    Despite advances in Earth observation and modeling, estimating tropical biomass remains a challenge. Recent work suggests that integrating satellite measurements of canopy height within ecosystem models is a promising approach to infer biomass. We tested the feasibility of this approach to retrieve aboveground biomass (AGB) at three tropical forest sites by assimilating remotely sensed canopy height, derived from a texture analysis algorithm applied to the high-resolution Pleiades imager, in the Organizing Carbon and Hydrology in Dynamic Ecosystems Canopy (ORCHIDEE-CAN) ecosystem model. While mean AGB could be estimated within 10% of census-derived AGB on average across sites, the canopy height derived from the Pleiades product was spatially too smooth, and thus unable to accurately resolve large height (and biomass) variations within the sites considered. The error budget was evaluated in detail; systematic errors related to the ORCHIDEE-CAN structure contribute a secondary source of error and could be overcome by using improved allometric equations.

  13. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. The statistical model can also be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.

  14. M-Estimators of Roughness and Scale for GA0-Modelled SAR Imagery

    Directory of Open Access Journals (Sweden)

    Frery Alejandro C

    2002-01-01

    The GA0 distribution is assumed as the universal model for multilook amplitude SAR imagery data under the multiplicative model. This distribution has two unknown parameters, related to the roughness and the scale of the signal, that can be used in image analysis and processing. Maximum likelihood and moment estimators for its parameters can be influenced by small percentages of "outliers"; hence, it is of utmost importance to find robust estimators for these parameters. One of the best-known classes of robust techniques is that of M-estimators, which are an extension of the maximum likelihood estimation method. In this work we derive the M-estimators for the parameters of the GA0 distribution, and compare them with maximum likelihood estimators in a Monte Carlo experiment. This robust technique proves superior to the classical approach in the presence of corner reflectors, a common source of contamination in SAR images. Numerical issues are addressed, and a practical example is provided.

  15. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    Science.gov (United States)

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations, and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, these being most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume than those based on the radar-derived analysis. However, this result was not consistent across all 20 events, nor for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This implies that a more robust QPE forcing, incorporating uncertainty derived from the three analyses, may provide a better input for simulating extreme peak flow events.
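The radar QPE step above converts reflectivity to rain rate through a Z-R power law, Z = a·R^b. The sketch below uses the classic Marshall-Palmer coefficients (a = 200, b = 1.6) as a stand-in; operational WSR-88D relationships vary by precipitation regime, and the record does not specify which was used.

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) via a Z-R
    power law Z = a * R**b. Defaults are the Marshall-Palmer
    coefficients, assumed here for illustration."""
    z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> Z in mm^6 / m^3
    return (z_linear / a) ** (1.0 / b)

rate = rain_rate_from_dbz(40.0)  # a moderate-rain echo, roughly 11-12 mm/h
```

Because the relationship is a power law, small coefficient changes translate into large rain-rate differences at high reflectivity, which is one source of the radar-versus-gauge discrepancies the record reports.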

  16. Estimate of radiocaesium derived from the FNPP1 accident in the North Pacific Ocean

    Science.gov (United States)

    Inomata, Yayoi; Aoyama, Michio; Tsubono, Takaki; Tsumune, Daisuke; Yamada, Masatoshi

    2017-04-01

    134Cs and 137Cs (radiocaesium) were released to the North Pacific Ocean by direct discharge and atmospheric deposition from the TEPCO Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident in 2011. After the FNPP1 accident, measurements of 134Cs and 137Cs were conducted by many researchers. However, those results are only snapshots, making it difficult to interpret the distribution and transport of the released radiocaesium on a basin scale. It is recognized that an estimate of the total amount of released 134Cs and 137Cs is necessary to assess the radioecological impacts of their release on the environment. The reported inventory of 134Cs or 137Cs in the North Pacific Ocean after the FNPP1 accident was 15.2-18.3 PBq based on observations (Aoyama et al., 2016a), 15.3±1.6 PBq by OI analysis (Inomata et al., 2016), and 16.1±1.64 PBq by a global ocean model (Tsubono et al., 2016). These estimates suggest that more than 75% of the atmospherically released radiocaesium (15.2-20.4 PBq; Aoyama et al., 2016a) was deposited on the North Pacific Ocean. The radiocaesium from atmospheric fallout and direct discharge was expected to mix and dilute near the coastal region and to be transported eastward across the North Pacific Ocean in the surface layer. Furthermore, radiocaesium was rapidly mixed and penetrated into the subsurface water of the North Pacific Ocean in winter. These radiocaesium isotopes were found in the Subtropical Mode Water (STMW; Aoyama et al., 2016b; Kaeriyama et al., 2016) and Central Mode Water (CMW; Aoyama et al., 2016b), suggesting that mode water formation and subduction are an efficient pathway for the transport of FNPP1-derived radiocaesium into the ocean interior within a 1-year timescale. Kaeriyama et al. (2016) estimated the total amount of FNPP1-derived radiocaesium in the STMW at 4.2 ± 1.1 PBq in October-November 2012. However, there is no estimate of the amount of radiocaesium in the CMW. Therefore, it is impossible to discuss

  17. Estimating total evaporation at the field scale using the SEBS model ...

    African Journals Online (AJOL)

    Estimating total evaporation at the field scale using the SEBS model and data infilling ... of two infilling techniques to create a daily satellite-derived ET time series. ... and produced R2 and RMSE values of 0.33 and 2.19 mm∙d-1, respectively, ...

  18. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    Science.gov (United States)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardegree, Stuart; Strand, Eva

    2013-07-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and to guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76, p < 0.001, RMSE = 0.58 kg). The predicted mean aboveground woody carbon storage for the study area was 677 g/m2. Uncertainty in the carbon storage estimates was examined with a Monte Carlo approach that addressed the major error sources; the predicted range in individual-tree aboveground woody carbon was 0.35 - 143.6 kg, with associated standard deviations of 0.5 - 1.25 kg. Later successional phases of woody encroachment had, on average, twice the aboveground carbon relative to earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment where intensive treatments are most effective.
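
    The workflow above (an allometric regression on lidar-derived height and crown area, followed by Monte Carlo propagation of measurement error) can be sketched as follows; the coefficients and error magnitudes are invented for illustration, not the paper's fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical allometric model in the spirit of the study:
    # biomass (kg) regressed on lidar height (m) and crown area (m^2).
    def tree_biomass(height, crown_area, b0=0.2, b1=1.1, b2=0.8):
        return b0 + b1 * height + b2 * crown_area

    # Monte Carlo propagation of lidar measurement error into the
    # stand-level carbon estimate (carbon taken as ~50% of dry biomass).
    def carbon_estimate_mc(heights, crowns, n_draws=2000, h_sd=0.3, c_sd=0.5):
        heights = np.asarray(heights, float)
        crowns = np.asarray(crowns, float)
        totals = np.empty(n_draws)
        for i in range(n_draws):
            h = heights + rng.normal(0.0, h_sd, heights.size)
            c = crowns + rng.normal(0.0, c_sd, crowns.size)
            totals[i] = 0.5 * tree_biomass(h, c).sum()
        return totals.mean(), totals.std()
    ```

    The spread of the Monte Carlo totals is what yields the uncertainty ranges the abstract reports.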

  19. Estimation of Sway Velocity-Dependent Hydrodynamic Derivatives in Surface Ship Manoeuvring Using RANSE-Based CFD

    Directory of Open Access Journals (Sweden)

    Sheeja Janardhanan

    2010-09-01

    Full Text Available The hydrodynamic derivatives appearing in the manoeuvring equations of motion are the primary parameters in the prediction of the trajectory of a vessel. Determination of these derivatives poses a major challenge in ship manoeuvring problems. This paper deals with one such problem, in which an attempt has been made to numerically simulate the conventional straight-line test in a towing tank using computational fluid dynamics (CFD). Free-surface effects have been neglected. The domain size has been fixed as per ITTC guidelines. The grid size has been chosen after a thorough grid-independence analysis, ensuring that the flow parameters are insensitive to grid size while keeping the computational effort reasonable. The model has been oriented at a wide range of drift angles to capture non-linear effects, and the forces and moments acting on the model at each angle have been estimated. The sway-velocity-dependent derivatives have been obtained through plots and curve fits. The effect of finite water depth on the derivatives has also been examined. The results have been compared with available experimental and empirical values, and the method was found to be promising.
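
    The straight-line (oblique tow) test yields a sway force at each drift angle, and the velocity-dependent derivatives then come from a curve fit. A hedged numpy sketch: the quadratic model Y = Yv·v + Yv|v|·v·|v| is a common assumption in manoeuvring analysis, and the force values below are invented stand-ins for CFD output, not the paper's data:

    ```python
    import numpy as np

    # Sway velocities (non-dimensional) corresponding to the drift angles tested
    v = np.array([-0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4])
    Y = -2.0 * v - 5.0 * v * np.abs(v)          # stand-in for CFD sway force

    # Least-squares curve fit for the linear and quadratic derivatives
    A = np.column_stack([v, v * np.abs(v)])
    (Yv, Yvv), *_ = np.linalg.lstsq(A, Y, rcond=None)
    ```

    With real CFD output the residuals of this fit also indicate whether higher-order terms are needed.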

  20. Integrating health belief model and technology acceptance model: an investigation of health-related internet use.

    Science.gov (United States)

    Ahadzadeh, Ashraf Sadat; Pahlevan Sharif, Saeed; Ong, Fon Sim; Khong, Kok Wei

    2015-02-19

    Today, people use the Internet to satisfy health-related information and communication needs. In Malaysia, Internet use for health management has become increasingly significant due to the increase in the incidence of chronic diseases, in particular among urban women and their desire to stay healthy. Past studies adopted the Technology Acceptance Model (TAM) and Health Belief Model (HBM) independently to explain Internet use for health-related purposes. Although both the TAM and HBM have their own merits, independently they lack the ability to explain the cognition and the related mechanism in which individuals use the Internet for health purposes. This study aimed to examine the influence of perceived health risk and health consciousness on health-related Internet use based on the HBM. Drawing on the TAM, it also tested the mediating effects of perceived usefulness of the Internet for health information and attitude toward Internet use for health purposes for the relationship between health-related factors, namely perceived health risk and health consciousness on health-related Internet use. Data obtained for the current study were collected using purposive sampling; the sample consisted of women in Malaysia who had Internet access. The partial least squares structural equation modeling method was used to test the research hypotheses developed. Perceived health risk (β=.135, t1999=2.676) and health consciousness (β=.447, t1999=9.168) had a positive influence on health-related Internet use. Moreover, perceived usefulness of the Internet and attitude toward Internet use for health-related purposes partially mediated the influence of health consciousness on health-related Internet use (β=.025, t1999=3.234), whereas the effect of perceived health risk on health-related Internet use was fully mediated by perceived usefulness of the Internet and attitude (β=.029, t1999=3.609). These results suggest the central role of perceived usefulness of the Internet and

  1. Negative binomial models for abundance estimation of multiple closed populations

    Science.gov (United States)

    Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.

    2001-01-01

    Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating multiple populations by simultaneously using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem, for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
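
    The NBD's heterogeneity shows up in its variance function, v = m + m²/k, where k is the dispersion parameter. A method-of-moments sketch of fitting it to sighting counts (the paper itself fits its suite of models by likelihood methods with AIC; moments are shown here only because they are closed-form):

    ```python
    import numpy as np

    def fit_nbd_moments(counts):
        """Method-of-moments fit of a negative binomial with mean m and
        variance v = m + m**2/k; returns (m, k)."""
        x = np.asarray(counts, dtype=float)
        m = x.mean()
        v = x.var(ddof=1)
        if v <= m:
            raise ValueError("no overdispersion: a Poisson model may suffice")
        k = m * m / (v - m)
        return m, k
    ```

    A small estimated k signals strong heterogeneity in detection probability among individuals.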

  2. Estimation of aircraft aerodynamic derivatives using Extended Kalman Filter

    OpenAIRE

    Curvo, M.

    2000-01-01

    Design of flight control laws, verification of performance predictions, and the implementation of flight simulations are tasks that require a mathematical model of the aircraft dynamics. The dynamical models are characterized by coefficients (aerodynamic derivatives) whose values must be determined from flight tests. This work outlines the use of the Extended Kalman Filter (EKF) in obtaining the aerodynamic derivatives of an aircraft. The EKF shows several advantages over the more traditional...
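
    A toy illustration of the EKF idea described above, not the aircraft model itself: the unknown coefficient is appended to the state vector, and the filter estimates it from trajectory observations. The scalar decay dynamics, tuning values, and synthetic data are all invented for the sketch:

    ```python
    import numpy as np

    def ekf_estimate(ys, dt=0.1, q=1e-6, r=0.01):
        """Estimate the coefficient a in x_{k+1} = x_k - a*x_k*dt from
        observations of x, via an EKF on the augmented state [x, a]."""
        s = np.array([ys[0], 0.5])          # crude initial guess for a
        P = np.eye(2)
        H = np.array([[1.0, 0.0]])
        for y in ys[1:]:
            x, a = s
            s = np.array([x - a * x * dt, a])         # predict
            F = np.array([[1.0 - a * dt, -x * dt],    # Jacobian of the dynamics
                          [0.0, 1.0]])
            P = F @ P @ F.T + q * np.eye(2)
            S = H @ P @ H.T + r                       # innovation variance
            K = P @ H.T / S                           # Kalman gain
            s = s + (K * (y - s[0])).ravel()          # measurement update
            P = (np.eye(2) - K @ H) @ P
        return s

    # Synthetic truth: decay with a = 0.3, observed exactly
    xs = [1.0]
    for _ in range(200):
        xs.append(xs[-1] - 0.3 * xs[-1] * 0.1)
    x_hat, a_hat = ekf_estimate(xs)
    ```

    In the aircraft setting the same state-augmentation trick is applied to each aerodynamic derivative, with the flight-mechanics equations supplying the dynamics and Jacobian.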

  3. Parameter estimation in nonlinear models for pesticide degradation

    International Nuclear Information System (INIS)

    Richter, O.; Pestemer, W.; Bunte, D.; Diekkrueger, B.

    1991-01-01

    A wide class of environmental transfer models is formulated as ordinary or partial differential equations. With the availability of fast computers, the numerical solution of large systems became feasible. The main difficulty in performing a realistic and convincing simulation of the fate of a substance in the biosphere is not the implementation of numerical techniques but rather the incomplete data base for parameter estimation. Parameter estimation is a synonym for statistical and numerical procedures to derive reasonable numerical values for model parameters from data. The classical method is the familiar linear regression technique, which dates back to the 18th century. Because it is easy to handle, linear regression has long been established as a convenient tool for analysing relationships. However, the wide use of linear regression has led to an overemphasis on linear relationships. In nature, most relationships are nonlinear, and linearization often gives a poor approximation of reality. Furthermore, pure regression models are not capable of mapping the dynamics of a process. Therefore, realistic models involve the evolution in time (and space). This leads in a natural way to the formulation of differential equations. To establish the link between data and dynamical models, advanced numerical parameter identification methods have been developed in recent years. This paper demonstrates the application of these techniques to estimation problems in the field of pesticide dynamics. (7 refs., 5 figs., 2 tabs.)
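
    As a concrete instance of fitting a dynamic nonlinear model: first-order pesticide degradation, C(t) = C0·e^(−kt), seeded with a log-linear guess and refined by Gauss-Newton iterations. This is a generic sketch, not the authors' identification method:

    ```python
    import numpy as np

    def fit_first_order(t, c, iters=20):
        """Fit C(t) = C0 * exp(-k*t) by Gauss-Newton, seeded with a
        log-linear guess (the kind of linearization the text cautions
        against, used here only as a starting point)."""
        t = np.asarray(t, dtype=float)
        c = np.asarray(c, dtype=float)
        slope, intercept = np.polyfit(t, np.log(c), 1)
        p = np.array([np.exp(intercept), -slope])      # [C0, k]
        for _ in range(iters):
            e = np.exp(-p[1] * t)
            r = c - p[0] * e                           # residuals
            J = np.column_stack([e, -p[0] * t * e])    # Jacobian wrt [C0, k]
            p = p + np.linalg.solve(J.T @ J, J.T @ r)  # Gauss-Newton step
        return p
    ```

    Unlike the pure log-linear fit, the Gauss-Newton refinement minimizes the residuals of the original nonlinear model, which matters when measurement errors are additive on the concentration scale.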

  4. Qualitative to quantitative : linked trajectory of method triangulation in a study on HIV/AIDS in Goa, India

    NARCIS (Netherlands)

    Bailey, Ajay; Hutter, Inge

    2008-01-01

    With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million people globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is

  5. Deriving global parameter estimates for the Noah land surface model using FLUXNET and machine learning

    Science.gov (United States)

    Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.

    2016-11-01

    With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
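
    The regionalization step can be sketched with scikit-learn's ExtraTreesRegressor; the features and parameter values below are synthetic stand-ins for the FLUXNET-derived environmental attributes and site-calibrated Noah parameters, invented purely for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    rng = np.random.default_rng(0)

    # 85 pseudo-sites with 4 environmental attributes each
    X = rng.uniform(size=(85, 4))
    # Pseudo-calibrated parameters loosely mimicking (rs_min, Czil, fxexp)
    y = np.column_stack([
        40.0 + 100.0 * X[:, 0],
        0.05 + 0.1 * X[:, 1],
        1.0 + X[:, 2],
    ])

    # Fit site attributes -> calibrated parameters, then "map" new locations
    model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
    predicted_params = model.predict(rng.uniform(size=(5, 4)))
    ```

    In the study, the analogous prediction is evaluated on gridded global attributes to produce the 5 km parameter maps, with leave-one-out cross validation guarding against overfitting the 85 sites.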

  6. The health belief model and number of peers with internet addiction as inter-related factors of Internet addiction among secondary school students in Hong Kong.

    Science.gov (United States)

    Wang, Yanhong; Wu, Anise M S; Lau, Joseph T F

    2016-03-16

    Students are vulnerable to Internet addiction (IA). The influences on students' IA of cognitions based on the Health Belief Model (HBM) and of the perceived number of peers with IA (PNPIA), and the mediating effects involved, have not been investigated. This cross-sectional study surveyed 9518 Hong Kong Chinese secondary school students in the school setting. In this self-reported study, the majority (82.6%) reported that they had peers with IA. Based on the Chinese Internet Addiction Scale (cut-off = 63/64), the prevalence of IA was 16.0% (males: 17.6%; females: 14.0%). Among the non-IA cases, 7.6% (males: 8.7%; females: 6.3%) perceived a chance of developing IA in the next 12 months. Concurring with the HBM, adjusted logistic analysis showed that the Perceived Social Benefits of Internet Use Scale (males: adjusted odds ratio (ORa) = 1.19; females: ORa = 1.23), Perceived Barriers for Reducing Internet Use Scale (males: ORa = 1.26; females: ORa = 1.36), and Perceived Self-efficacy for Reducing Internet Use Scale (males: ORa = 0.66; females: ORa = 0.56) were significantly associated with IA. Similarly, PNPIA was significantly associated with IA ('quite a number': males: ORa = 2.85; females: ORa = 4.35; 'a large number': males: ORa = 3.90; females: ORa = 9.09). Controlling for these three constructs, PNPIA remained significant but the strength of association diminished ('quite a number': males: multivariate odds ratio (ORm) = 2.07; females: ORm = 2.44; 'a large number': males: ORm = 2.39; females: ORm = 3.56). Hence, the association between PNPIA and IA was partially mediated (explained) by the three HBM constructs, and interventions preventing IA should aim to change these constructs. In sum, the prevalence of IA was relatively high and was associated with some HBM constructs and PNPIA, and PNPIA also partially mediated the associations between HBM constructs and IA. Huge challenges are expected, as social relationships and an imbalance of cost-benefit for reducing Internet use are

  7. Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon

    Science.gov (United States)

    Nelson, Ross

    2008-01-01

    ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.942±0.28 Gt to 5.29±0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.

  8. Accounting for misclassification in electronic health records-derived exposures using generalized linear finite mixture models.

    Science.gov (United States)

    Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M

    2017-06-01

    Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
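
    A stripped-down sketch of the mixture idea with a Gaussian outcome in place of a general GLM: each subject carries an externally predicted probability p_i of exposure; the E-step turns it into a posterior weight given the outcome, and the M-step refits the outcome model with those weights. The simulated data and the simple two-mean outcome model are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def em_mixture(y, p, iters=200):
        """EM for y | S ~ Normal(b0 + b1*S, s2) when only the predicted
        probability p = P(S=1) is observed for each subject."""
        b0, b1, s2 = y.mean(), 1.0, y.var()
        for _ in range(iters):
            # E-step: posterior probability of exposure given y and prior p
            f1 = np.exp(-(y - b0 - b1) ** 2 / (2.0 * s2))
            f0 = np.exp(-(y - b0) ** 2 / (2.0 * s2))
            w = p * f1 / (p * f1 + (1.0 - p) * f0)
            # M-step: closed-form weighted refit of (b0, b1, s2)
            b0 = np.sum((1.0 - w) * y) / np.sum(1.0 - w)
            b1 = np.sum(w * y) / np.sum(w) - b0
            s2 = np.mean(w * (y - b0 - b1) ** 2 + (1.0 - w) * (y - b0) ** 2)
        return b0, b1

    # Simulate calibrated exposure predictions and an outcome with true b1 = 2
    n = 5000
    p = 1.0 / (1.0 + np.exp(-2.0 * rng.normal(size=n)))
    s = rng.random(n) < p
    y = 1.0 + 2.0 * s + rng.normal(0.0, 1.0, n)
    b0_hat, b1_hat = em_mixture(y, p)
    ```

    For the GLMs the paper considers, the M-step becomes a weighted GLM fit, which is why standard software suffices inside the EM loop.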

  9. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
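
    A least-squares analogue of the estimator (identity link, Gaussian errors) can be sketched with a truncated-power cubic spline basis for the nonparametric component and an ordinary design column for the linear component; all data here are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # y = f(t) + beta * x + noise, with f nonparametric and beta linear
    n = 400
    t = rng.uniform(0.0, 1.0, n)
    x = rng.normal(size=n)
    y = np.sin(2.0 * np.pi * t) + 1.5 * x + rng.normal(0.0, 0.1, n)

    # Truncated-power cubic spline basis for f, plus the linear column for x
    knots = np.linspace(0.1, 0.9, 9)
    B = np.column_stack([np.ones(n), t, t**2, t**3] +
                        [np.clip(t - k, 0.0, None) ** 3 for k in knots])
    D = np.column_stack([B, x])
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    beta_hat = coef[-1]          # estimate of the parametric coefficient
    ```

    This illustrates the computational appeal the abstract notes: one moderate least-squares solve, with no large kernel-style systems, while the parametric coefficient is recovered at the usual parametric rate.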

  10. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    Science.gov (United States)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage and to move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use the estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work on the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  11. Urban scale air quality modelling using detailed traffic emissions estimates

    Science.gov (United States)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
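
    For reference, the classical Gaussian point-source kernel underlying such models; a second-generation model adds refinements (e.g. improved dispersion parameterizations), so this is only the textbook form:

    ```python
    import numpy as np

    def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
        """Concentration (source units per m^3) downwind of a point source of
        strength Q (units/s), effective height H (m), in wind speed u (m/s),
        at crosswind offset y and height z, with dispersion sigmas (m)."""
        lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                    np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground reflection
        return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
    ```

    In practice sigma_y and sigma_z grow with downwind distance and atmospheric stability, which is where the link-level emission estimates and meteorology enter the model chain.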

  12. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly

  13. Generation of Insulin-Producing Cells from Human Bone Marrow-Derived Mesenchymal Stem Cells: Comparison of Three Differentiation Protocols

    Directory of Open Access Journals (Sweden)

    Mahmoud M. Gabr

    2014-01-01

    Full Text Available Introduction. Many protocols were utilized for directed differentiation of mesenchymal stem cells (MSCs to form insulin-producing cells (IPCs. We compared the relative efficiency of three differentiation protocols. Methods. Human bone marrow-derived MSCs (HBM-MSCs were obtained from three insulin-dependent type 2 diabetic patients. Differentiation into IPCs was carried out by three protocols: conophylline-based (one-step protocol, trichostatin-A-based (two-step protocol, and β-mercaptoethanol-based (three-step protocol. At the end of differentiation, cells were evaluated by immunolabeling for insulin production, expression of pancreatic endocrine genes, and release of insulin and c-peptide in response to increasing glucose concentrations. Results. By immunolabeling, the proportion of generated IPCs was modest (≃3% in all the three protocols. All relevant pancreatic endocrine genes, insulin, glucagon, and somatostatin, were expressed. There was a stepwise increase in insulin and c-peptide release in response to glucose challenge, but the released amounts were low when compared with those of pancreatic islets. Conclusion. The yield of functional IPCs following directed differentiation of HBM-MSCs was modest and was comparable among the three tested protocols. Protocols for directed differentiation of MSCs need further optimization in order to be clinically meaningful. To this end, addition of an extracellular matrix and/or a suitable template should be attempted.

  14. Estimation of parameters of constant elasticity of substitution production functional model

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are very often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, the Constant Elasticity of Substitution (CES) production functional model. Henningen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivative; (ii) circumventing large rounding errors by local linear approximations; (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using a constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
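
    The first of the three cited fixes (replacing the ρ → 0 discontinuity with the CES function's limit) can be sketched directly. Parameter names follow the common form Q = A·(δK^−ρ + (1−δ)L^−ρ)^(−1/ρ), whose ρ → 0 limit is the Cobb-Douglas function A·K^δ·L^(1−δ):

    ```python
    import numpy as np

    def ces(K, L, A=1.0, d=0.5, rho=0.25, eps=1e-8):
        """CES production function, with the rho -> 0 discontinuity
        replaced by its Cobb-Douglas limit."""
        if abs(rho) < eps:                       # limit case: Cobb-Douglas
            return A * K**d * L**(1.0 - d)
        return A * (d * K**(-rho) + (1.0 - d) * L**(-rho)) ** (-1.0 / rho)

    def elasticity_of_substitution(rho):
        """The constant elasticity sigma = 1 / (1 + rho)."""
        return 1.0 / (1.0 + rho)
    ```

    Guarding the limit this way keeps nonlinear least-squares objectives well-behaved when the optimizer's search path crosses ρ = 0.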

  15. Factor structure and internal reliability of an exercise health belief model scale in a Mexican population

    Directory of Open Access Journals (Sweden)

    Oscar Armando Esparza-Del Villar

    2017-03-01

    Full Text Available Abstract Background Mexico is one of the countries with the highest rates of overweight and obesity around the world, with 68.8% of men and 73% of women affected. This is a public health problem, since there are several health-related consequences of not exercising, such as cardiovascular diseases and some types of cancer. All of these problems can be prevented by promoting exercise, so it is important to evaluate models of health behaviors to achieve this goal. Among several models, the Health Belief Model is one of the most studied models for promoting health-related behaviors. This study validates the first exercise scale based on the Health Belief Model (HBM) in Mexicans, with the objective of studying and analyzing this model in Mexico. Methods Items for the scale, called the Exercise Health Belief Model Scale (EHBMS), were developed by a health research team and then administered to a sample of 746 participants, male and female, from five cities in Mexico. The factor structure of the items was analyzed with an exploratory factor analysis and the internal reliability with Cronbach's alpha. Results The exploratory factor analysis reported the expected factor structure based on the HBM. The KMO index (0.92) and Bartlett's sphericity test (p < 0.01) indicated adequate sampling and factorability of the data. Items had adequate factor loadings, ranging from 0.31 to 0.92, and the internal consistencies of the factors were also acceptable, with alpha values ranging from 0.67 to 0.91. Conclusions The EHBMS is a validated scale that can be used to measure exercise based on the HBM in Mexican populations.
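
    The internal-consistency figures quoted (alpha values from 0.67 to 0.91) come from Cronbach's alpha; a minimal computation for one factor's item matrix (rows = respondents, columns = items):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(total))."""
        X = np.asarray(items, dtype=float)
        k = X.shape[1]
        item_vars = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_vars / total_var)
    ```

    Values near 1 indicate that the factor's items covary strongly, i.e. they measure a common construct.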

  16. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
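
    The mean sheltering height hs described above (topographic rise plus canopy height above the lake surface, averaged over a shoreline buffer) can be sketched in a few lines; the grid values and buffer mask below are hypothetical stand-ins for GDEM2 data:

```python
import numpy as np

def mean_sheltering_height(dem, lake_level, buffer_mask):
    """Mean height of terrain/canopy above the lake surface within a shoreline buffer.

    dem: 2-D surface-height grid (terrain + canopy, as in a DSM such as ASTER GDEM2).
    buffer_mask: boolean grid marking the 100 m-wide buffer cells around the lake.
    """
    hs = np.clip(dem - lake_level, 0.0, None)  # relief below the lake gives no sheltering
    return float(hs[buffer_mask].mean())

# Tiny hypothetical grid: lake surface at 300 m elevation.
dem = np.array([[310.0, 305.0],
                [300.0, 320.0]])
mask = np.array([[True, True],
                 [False, True]])
hs_mean = mean_sheltering_height(dem, lake_level=300.0, buffer_mask=mask)
```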

  17. Precision Orbit Derived Atmospheric Density: Development and Performance

    Science.gov (United States)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer-derived densities while considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE-derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer-derived densities. Drag is the largest error source in estimating and predicting orbits for low Earth orbit satellites, and is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer-derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer-derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE-derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer
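
    The "linear weighted blending" used to stitch overlapping density segments can be sketched as follows; the segment values and overlap length are invented, and the actual CHAMP/GRACE processing surely differs in detail:

```python
import numpy as np

def blend_overlap(a, b, n_overlap):
    """Stitch two solution segments whose last/first n_overlap samples coincide
    in time, ramping the weight linearly from segment a to segment b across
    the overlap so the joined series has no discontinuity."""
    w = np.linspace(1.0, 0.0, n_overlap)           # weight on segment a
    blended = w * a[-n_overlap:] + (1.0 - w) * b[:n_overlap]
    return np.concatenate([a[:-n_overlap], blended, b[n_overlap:]])

# Hypothetical density segments (constant levels make the ramp easy to see).
a = np.array([1.0, 1.0, 1.0, 1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
out = blend_overlap(a, b, n_overlap=2)
```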

  18. Uncertainty estimation of the velocity model for stations of the TrigNet GPS network

    Science.gov (United States)

    Hackl, M.; Malservisi, R.; Hugentobler, U.

    2010-12-01

    Satellite-based geodetic techniques - above all GPS - provide an outstanding tool to measure crustal motions. They are widely used to derive geodetic velocity models that are applied in geodynamics to determine rotations of tectonic blocks, to localize active geological features, and to estimate rheological properties of the crust and the underlying asthenosphere. However, it is not a trivial task to derive GPS velocities and their uncertainties from positioning time series. In general, time series are assumed to be represented by linear models (sometimes offsets, annual, and semi-annual signals are included) and noise. It has been shown that error models accounting only for white noise tend to underestimate the uncertainties of rates derived from long time series, and that different colored noise components (flicker noise, random walk, etc.) need to be considered. However, a thorough error analysis including power spectral analyses and maximum likelihood estimates is computationally expensive and is usually not carried out for every site; instead, the uncertainties are scaled by latitude-dependent factors. Analyses of the South African continuous GPS network TrigNet indicate that the scaled uncertainties overestimate the velocity errors. We therefore applied to the TrigNet time series a method similar to the Allan variance, which is commonly used in the estimation of clock uncertainties and is able to account for time-dependent probability density functions (colored noise). Comparisons with synthetic data show that the noise can be represented quite well by a power-law model in combination with a seasonal signal, in agreement with previous studies, which allows for a reliable estimation of the velocity error. Finally, we compared these estimates to the results obtained by spectral analyses using CATS. Small differences may originate from the non-normal distribution of the noise.
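
    A minimal version of the Allan-variance-style statistic mentioned above, applied here to a synthetic white-noise series: for white noise the Allan variance falls roughly as 1/tau, whereas colored noise (flicker, random walk) flattens or grows, which is what makes the statistic useful for diagnosing noise type in position time series.

```python
import numpy as np

def allan_variance(x, tau):
    """Non-overlapping Allan variance of series x at averaging window tau (in samples)."""
    n = len(x) // tau
    means = x[:n * tau].reshape(n, tau).mean(axis=1)   # window averages
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
white = rng.normal(size=4096)       # unit-variance white noise
av1 = allan_variance(white, 1)      # expected ~1
av16 = allan_variance(white, 16)    # expected ~1/16 for white noise
```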

  19. Deriving local demand for stumpage from estimates of regional supply and demand.

    Science.gov (United States)

    Kent P. Connaughton; Gerard A. Majerus; David H. Jackson

    1989-01-01

    The local (Forest-level or local-area) demand for stumpage can be derived from estimates of regional supply and demand. The derivation of local demand is justified when the local timber economy is similar to the regional timber economy; a simple regression of local on nonlocal prices can be used as an empirical test of similarity between local and regional economies....

  20. Modeling Anti-HIV Activity of HEPT Derivatives Revisited. Multiregression Models Are Not Inferior Ones

    International Nuclear Information System (INIS)

    Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono

    2007-01-01

    Several quantitative structure-activity studies of this data set containing 107 HEPT derivatives have been performed since 1997, using the same set of molecules and (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated multivariate regression models for this set of molecules using the same set of descriptors and compared our results with the previous ones. Two main reasons for overestimation of the quality of the ANN models relative to MR models in previous studies are: (1) incorrect calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37, 392-401 (1997), and (2) incorrect estimation/interpretation of the LOO cross-validated and predictive performance and power of ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that MR models are more stable. In addition, MR models are much simpler than ANN ones. A real test of the predictive performance of both classes of models requires more HEPT derivatives, because all ANN models that presented results for an external set of molecules used experimental values in the optimization of the modeling procedure and model parameters
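
    The leave-one-out CV statistic for a multivariate regression model can be computed correctly without refitting n times, using the hat-matrix identity e_LOO = e / (1 - h_ii). A sketch on synthetic data (the descriptors and activities below are invented, not the HEPT set):

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated Q^2 for ordinary least squares,
    via the hat-matrix shortcut: LOO residual = residual / (1 - leverage)."""
    X1 = np.column_stack([np.ones(len(y)), X])         # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T          # hat matrix
    loo_resid = resid / (1.0 - np.diag(H))
    press = np.sum(loo_resid ** 2)
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))                            # synthetic descriptors
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=40)
q2 = loo_q2(X, y)
```

    Q^2 is always below the fitted R^2, which is exactly why conflating the two overstates a model's predictive quality.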

  1. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    Science.gov (United States)

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed a two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under dependent censoring. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship, on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models are reported.

  2. Gridded rainfall estimation for distributed modeling in western mountainous areas

    Science.gov (United States)

    Moreda, F.; Cong, S.; Schaake, J.; Smith, M.

    2006-05-01

    Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges, and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada as test areas: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km). The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
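
    The inverse-distance interpolation step described above can be sketched as follows; the coordinates and gauge values are hypothetical, and the operational procedure additionally rescales the result to PRISM monthly means:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of point observations onto grid points."""
    # Pairwise distances: (n_grid, n_obs)
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # eps keeps the weight finite at d = 0
    return (w @ z_obs) / w.sum(axis=1)

# Two hypothetical gauges and two grid points (units arbitrary).
obs = np.array([[0.0, 0.0], [1.0, 0.0]])
z = np.array([10.0, 20.0])
grid = np.array([[0.5, 0.0],    # midway between the gauges
                 [0.0, 0.0]])   # exactly on the first gauge
zi = idw(obs, z, grid)
```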

  3. Models for the analytic estimation of low energy photon albedo

    International Nuclear Information System (INIS)

    Simovic, R.; Markovic, S.; Ljubenov, V.

    2005-01-01

    This paper presents some monoenergetic models for estimating photon reflection in the energy range from 20 keV to 80 keV. Using the DP0 approximation of the H-function, we have derived analytic expressions for the η and R functions in order to facilitate photon reflection analyses as well as radiation shield design. (author)

  4. Optically-derived estimates of phytoplankton size class and taxonomic group biomass in the Eastern Subarctic Pacific Ocean

    Science.gov (United States)

    Zeng, Chen; Rosengard, Sarah Z.; Burt, William; Peña, M. Angelica; Nemcek, Nina; Zeng, Tao; Arrigo, Kevin R.; Tortell, Philippe D.

    2018-06-01

    We evaluate several algorithms for the estimation of phytoplankton size class (PSC) and functional type (PFT) biomass from ship-based optical measurements in the Subarctic Northeast Pacific Ocean. Using underway measurements of particulate absorption and backscatter in surface waters, we derived estimates of PSC/PFT based on chlorophyll-a concentrations (Chl-a), particulate absorption spectra and the wavelength dependence of particulate backscatter. Optically-derived [Chl-a] and phytoplankton absorption measurements were validated against discrete calibration samples, while the derived PSC/PFT estimates were validated using size-fractionated Chl-a measurements and HPLC analysis of diagnostic photosynthetic pigments (DPA). Our results show that PSC/PFT algorithms based on [Chl-a] and particulate absorption spectra performed significantly better than the backscatter slope approach. These two more successful algorithms yielded estimates of phytoplankton size classes that agreed well with HPLC-derived DPA estimates (RMSE = 12.9% and 16.6%, respectively) across a range of hydrographic and productivity regimes. Moreover, the [Chl-a] algorithm produced PSC estimates that agreed well with size-fractionated [Chl-a] measurements, and estimates of the biomass of specific phytoplankton groups that were consistent with values derived from HPLC. Based on these results, we suggest that simple [Chl-a] measurements should be more fully exploited to improve the classification of phytoplankton assemblages in the Northeast Pacific Ocean.

  5. Limited information estimation of the diffusion-based item response theory model for responses and response times.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2016-05-01

    Psychological tests are usually analysed with item response models. Recently, some alternative measurement models have been proposed that were derived from cognitive process models developed in experimental psychology. These models consider not only the responses but also the response times of the test takers. Two such models are the Q-diffusion model and the D-diffusion model. Both models can be calibrated with the diffIRT package of the R statistical environment via marginal maximum likelihood (MML) estimation. In this manuscript, an alternative approach to model calibration is proposed. The approach is based on weighted least squares estimation and parallels the standard estimation approach in structural equation modelling. Estimates are determined by minimizing the discrepancy between the observed and the implied covariance matrix. The estimator is simple to implement, consistent, and asymptotically normally distributed. Least squares estimation also provides a test of model fit by comparing the observed and implied covariance matrix. The estimator and the test of model fit are evaluated in a simulation study. Although parameter recovery is good, the estimator is less efficient than the MML estimator. © 2016 The British Psychological Society.
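
    The covariance-discrepancy idea behind the least squares estimator can be illustrated on a toy single-factor model: fit parameters by minimizing the (unweighted) squared difference between an observed and a model-implied covariance matrix. This is only a structural sketch of the approach, not the diffIRT/diffusion-model implementation:

```python
import numpy as np
from scipy.optimize import minimize

def implied_cov(theta, k):
    """Single-factor model: Sigma(theta) = lambda lambda' + diag(psi)."""
    lam, psi = theta[:k], theta[k:]
    return np.outer(lam, lam) + np.diag(psi)

def uls_fit(S):
    """Unweighted least squares: minimize the Frobenius discrepancy between
    the observed covariance S and the model-implied covariance."""
    k = S.shape[0]
    x0 = np.concatenate([np.full(k, 0.5), np.full(k, 0.5)])
    obj = lambda th: np.sum((S - implied_cov(th, k)) ** 2)
    res = minimize(obj, x0, method="L-BFGS-B",
                   bounds=[(None, None)] * k + [(1e-6, None)] * k)
    return res.x, res.fun

# "Observed" covariance generated exactly from known loadings and uniquenesses.
lam_true = np.array([0.8, 0.6, 0.7])
S = np.outer(lam_true, lam_true) + np.diag([0.36, 0.64, 0.51])
theta, loss = uls_fit(S)
```

    With noise-free input the discrepancy is driven to (numerically) zero and the loadings are recovered up to an overall sign flip.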

  6. Substituent effects in heterogeneous catalysis--4. Adsorption estimations during competitive hydrogenation of cyclohexanone and its 2-alkyl derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Chihara, T; Tanaka, K

    1979-02-01

    Adsorption estimates during competitive hydrogenation of cyclohexanone and its 2-alkyl derivatives over alumina-supported ruthenium, rhodium, and platinum catalysts were obtained in a study to determine the relative contributions of the rate constants and the adsorption equilibrium constants to the substituent-dependent constant. The reaction rates obtained during competitive hydrogenation were in the order cyclohexanone (A) > 2-methylcyclohexanone (B) > 2-ethylcyclohexanone (C) > 2-propylcyclohexanone (D) for all catalysts, whereas the rates obtained during individual hydrogenation were in the order A > B ≈ C ≈ D. The adsorption equilibrium constants estimated by analyzing the kinetic data agreed well with the theoretical values derived from statistical mechanics using a model in which the substrate ketones are immobile when adsorbed.

  7. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Science.gov (United States)

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are semi-analytically derived from empirical retrievals of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model performs better than the QAA model in absorption retrieval. Using the SAA model to retrieve absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by >23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  8. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  9. Development of West-European PM2.5 and NO2 land use regression models incorporating satellite-derived and chemical transport modelling data

    NARCIS (Netherlands)

    de Hoogh, Kees; Gulliver, John; Donkelaar, Aaron van; Martin, Randall V; Marshall, Julian D; Bechle, Matthew J; Cesaroni, Giulia; Pradas, Marta Cirach; Dedele, Audrius; Eeftens, Marloes; Forsberg, Bertil; Galassi, Claudia; Heinrich, Joachim; Hoffmann, Barbara; Jacquemin, Bénédicte; Katsouyanni, Klea; Korek, Michal; Künzli, Nino; Lindley, Sarah J; Lepeule, Johanna; Meleux, Frederik; de Nazelle, Audrey; Nieuwenhuijsen, Mark; Nystad, Wenche; Raaschou-Nielsen, Ole; Peters, Annette; Peuch, Vincent-Henri; Rouil, Laurence; Udvardy, Orsolya; Slama, Rémy; Stempfelet, Morgane; Stephanou, Euripides G; Tsai, Ming Y; Yli-Tuomi, Tarja; Weinmayr, Gudrun; Brunekreef, Bert; Vienneau, Danielle; Hoek, Gerard

    2016-01-01

    Satellite-derived (SAT) and chemical transport model (CTM) estimates of PM2.5 and NO2 are increasingly used in combination with Land Use Regression (LUR) models. We aimed to compare the contribution of SAT and CTM data to the performance of LUR PM2.5 and NO2 models for Europe. Four sets of models,

  10. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles, assuming noncoherent, mixed coherent and noncoherent, and coherent sources, using extended three parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and inability to handle coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
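
    Contribution (2), building a Toeplitz data matrix from a single snapshot, can be sketched as follows; the snapshot values and subarray size are invented for illustration, and the paper's full construction spans three parallel ULAs:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical single complex snapshot from a 5-element ULA.
x = np.array([1 + 1j, 2 - 1j, 0.5 + 0.5j, -1 + 2j, 3 + 0j])
m = 3  # subarray length

# Toeplitz data matrix with T[i, j] = x[m - 1 - i + j]:
# first column runs x[m-1] down to x[0], first row runs x[m-1] up to x[-1].
# Stacking shifted subarrays this way restores matrix rank even for
# coherent sources, where the plain covariance matrix is rank deficient.
T = toeplitz(x[m - 1::-1], x[m - 1:])
```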

  11. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    Science.gov (United States)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes indicated from a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76). Plant trait information can explain spatial heterogeneity in LUEopt, leading to improved GPP estimates from satellite based LUE models.

  12. Dynamic PET of human liver inflammation: impact of kinetic modeling with optimization-derived dual-blood input function.

    Science.gov (United States)

    Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik

    2018-05-30

    The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, whereas static FDG-PET has not shown such promise. Because the liver has a dual blood supply, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with a population-based dual-blood input function (DBIF), and a modified model with an optimization-derived DBIF obtained through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), the F test, and a histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with the traditional SBIF and population-based DBIF models. K1 from the optimization-derived model was significantly associated with histopathologic grades of liver inflammation, while the other two models did not reach statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018
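
    Model comparison via AIC, as used above, can be sketched with the least-squares form AIC = n ln(RSS/n) + 2k; the residual sums of squares and parameter counts below are hypothetical, not the study's values:

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with Gaussian errors:
    AIC = n * ln(RSS / n) + 2k. Lower is better; the 2k term penalizes
    extra parameters, so a richer model must reduce RSS enough to win."""
    return n * np.log(rss / n) + 2 * k

n = 60                                   # hypothetical number of time-activity samples
aic_simple = aic(rss=12.0, n=n, k=4)     # e.g. an SBIF-style model
aic_richer = aic(rss=8.0, n=n, k=6)      # e.g. a DBIF-style model with extra parameters
better = "richer" if aic_richer < aic_simple else "simple"
```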

  13. A derivative-free approach for the estimation of porosity and permeability using time-lapse seismic and production data

    International Nuclear Information System (INIS)

    Dadashpour, Mohsen; Kleppe, Jon; Landrø, Martin; Echeverria Ciaurri, David; Mukerji, Tapan

    2010-01-01

    In this study, we apply a derivative-free optimization algorithm to estimate porosity and permeability from time-lapse seismic data and production data from a real reservoir (Norne field). In some circumstances, obtaining gradient information (exact and/or approximate) can be problematic, e.g. when derivatives are not available from a commercial simulator or when results are needed within a very short time frame. Derivative-free optimization approaches can be very time consuming because they often require many simulations; typically, one iteration needs roughly as many simulations as the number of optimization variables. In this work, we propose two ways to significantly increase the efficiency of an optimization methodology in model inversion problems. First, through principal component analysis we decrease the number of optimization variables while keeping geostatistical consistency; second, noting that some optimization methods are very amenable to parallelization, we apply them within a distributed computing framework. Combining all of this, the model inversion approach can be robust, fairly efficient, and very simple to implement. In this paper, we apply the methodology to two cases: a semi-synthetic model with noisy data, and a case based entirely on field data. The results show that the derivative-free approach presented is robust against noise in the data
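
    The two ideas above, parameterizing the field by a few principal components and searching that reduced space with a derivative-free method, can be sketched as follows. The ensemble, basis size, and misfit are all synthetic stand-ins; a real application would evaluate a reservoir simulator against seismic and production data instead of this toy distance:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Ensemble of prior realizations (stand-ins for geostatistical porosity fields).
ensemble = rng.normal(size=(50, 20)) @ np.diag(np.linspace(2.0, 0.1, 20))
mean = ensemble.mean(axis=0)

# PCA via SVD of the centered ensemble; keep 3 components as the
# optimization variables instead of all 20 grid values.
U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
basis = Vt[:3]                       # orthonormal principal directions

# Synthetic "true" field lying in the span of the retained basis.
target = mean + 1.5 * basis[0] - 0.7 * basis[1]

def misfit(coeffs):
    """Data-misfit surrogate: squared distance of the reconstructed field from target."""
    field = mean + coeffs @ basis
    return np.sum((field - target) ** 2)

# Derivative-free search (Nelder-Mead) over the 3 PCA coefficients only.
res = minimize(misfit, np.zeros(3), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10})
```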

  14. Estimating radiation-induced cancer risk using MVK two-stage model for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, M.; Kusama, T.; Aoki, Y.

    1993-01-01

    Based on the carcinogenesis model proposed by Moolgavkar et al., time-dependent relative risk models were derived for projecting the time variation in excess relative risk. If each process is assumed to follow a time-independent linear dose-response relationship, the time variation in excess relative risk is influenced by the parameter related to the promotion process. A risk model based on carcinogenesis theory would play a marked role in estimating radiation-induced cancer risk when constructing a projection model or transfer model

  15. Estimation and Properties of a Time-Varying GQARCH(1,1)-M Model

    Directory of Open Access Journals (Sweden)

    Sofia Anyfantaki

    2011-01-01

    analysis of these models computationally infeasible. This paper outlines the issues and suggests employing a Markov chain Monte Carlo algorithm which allows the calculation of a classical estimator via the simulated EM algorithm, or a simulated Bayesian solution, in only O(T) computational operations, where T is the sample size. Furthermore, the theoretical dynamic properties of a time-varying GQARCH(1,1)-M model are derived. We discuss them and apply the suggested Bayesian estimation to three major stock markets.

  16. Managing Dog Waste: Campaign Insights from the Health Belief Model

    Science.gov (United States)

    Typhina, Eli; Yan, Changmin

    2014-01-01

    Aiming to help municipalities develop effective education and outreach campaigns to reduce stormwater pollutants, such as pet waste, this study applied the Health Belief Model (HBM) to identify perceptions of dog waste and corresponding collection behaviors from dog owners living in a small U.S. city. Results of 455 online survey responses…

  17. Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model

    Directory of Open Access Journals (Sweden)

    Xue Feng Hu

    2017-01-01

    Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models could be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated rate of decline in MI incidence in the U.S. population was in accordance with the literature. The method was also superior to a closed-cohort approach for estimating the trend in population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
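
    A much simpler relative of the approach above is the classic steady-state identity P/(1-P) = I x D, which already shows how incidence can be backed out of prevalence; the numbers below are illustrative, and the paper's modified Hallett method is considerably more elaborate (successive surveys, mortality adjustment, age standardization):

```python
def incidence_from_prevalence(prevalence, mean_duration_years):
    """Steady-state relation: prevalence odds = incidence rate * mean duration.

    Assumes a stationary population with constant incidence and duration;
    an illustrative simplification, not the paper's modified Hallett method.
    Returns incidence in cases per person-year.
    """
    odds = prevalence / (1.0 - prevalence)
    return odds / mean_duration_years

# Hypothetical figures: 5% prevalence, 10-year mean disease duration.
inc = incidence_from_prevalence(prevalence=0.05, mean_duration_years=10.0)
```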

  18. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    model derived estimates and GHCN-D observations were assessed using time-series graphs of 2016-2017 winter season SLR observations and climatological estimates, as well as calculating RMSE and variance between estimated and observed values.

  19. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2014-10-01

    We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates show satisfactory agreement with GHCN-D for all products over CONUS (± 6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the Western US at higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in day-1) over the Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.

  20. A comparison of the performances of an artificial neural network and a regression model for GFR estimation.

    Science.gov (United States)

    Liu, Xun; Li, Ning-shan; Lv, Lin-sheng; Huang, Jian-hua; Tang, Hua; Chen, Jin-xia; Ma, Hui-juan; Wu, Xiao-ming; Lou, Tan-qi

    2013-12-01

    Accurate estimation of glomerular filtration rate (GFR) is important in clinical practice. Current models derived from regression are limited by the imprecision of GFR estimates. We hypothesized that an artificial neural network (ANN) might improve the precision of GFR estimates. A study of diagnostic test accuracy. 1,230 patients with chronic kidney disease were enrolled, comprising a development cohort (n=581), an internal validation cohort (n=278), and an external validation cohort (n=371). GFR was estimated (eGFR) using a new ANN model and a new regression model, each using age, sex, and standardized serum creatinine level and derived in the development and internal validation cohorts, as well as the CKD-EPI (Chronic Kidney Disease Epidemiology Collaboration) 2009 creatinine equation. The reference standard was measured GFR (mGFR), obtained using a diethylenetriaminepentaacetic acid renal dynamic imaging method. Serum creatinine was measured with an enzymatic method traceable to isotope-dilution mass spectrometry. In the external validation cohort, mean mGFR was 49±27 (SD) mL/min/1.73 m2; biases (median difference between mGFR and eGFR) for the CKD-EPI, new regression, and new ANN models were 0.4, 1.5, and -0.5 mL/min/1.73 m2, respectively, and accuracies (percentage of estimates within 30% of mGFR) were 50.9%, 77.4%, and 78.7%, respectively. A limitation and potential source of systematic bias in comparisons of the new models to CKD-EPI is that both the derivation and validation cohorts consisted of patients referred to the same institution. An ANN model using 3 variables did not perform better than a new regression model. Whether an ANN can improve GFR estimation using more variables requires further investigation. Copyright © 2013 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
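    The bias and P30-style accuracy metrics quoted above can be computed directly from paired measured and estimated GFR values. This is a generic sketch of those two standard metrics, not code from the study:

    ```python
    import numpy as np

    def bias_and_p30(mgfr, egfr):
        """Median bias (mGFR - eGFR) and P30 accuracy: the share of
        estimates falling within +/-30% of the measured GFR."""
        mgfr = np.asarray(mgfr, dtype=float)
        egfr = np.asarray(egfr, dtype=float)
        bias = float(np.median(mgfr - egfr))
        p30 = float(np.mean(np.abs(egfr - mgfr) / mgfr <= 0.30))
        return bias, p30
    ```

    A higher P30 with a near-zero median bias is the usual target when validating a new GFR estimating equation.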

  1. Geostatistical estimation of forest biomass in interior Alaska combining Landsat-derived tree cover, sampled airborne lidar and field observations

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Andersen, Hans-Erik; Pattison, Robert; Cook, Bruce D.; Morton, Douglas C.; Alonzo, Michael; Nelson, Ross; Gregoire, Timothy; Ene, Liviu; Gobakken, Terje; Næsset, Erik

    2018-06-01

    The goal of this research was to develop and examine the performance of a geostatistical coregionalization modeling approach for combining field inventory measurements, strip samples of airborne lidar and Landsat-based remote sensing data products to predict aboveground biomass (AGB) in interior Alaska's Tanana Valley. The proposed modeling strategy facilitates pixel-level mapping of AGB density predictions across the entire spatial domain. Additionally, the coregionalization framework allows for statistically sound estimation of total AGB for arbitrary areal units within the study area, a key advance to support diverse management objectives in interior Alaska. This research focuses on appropriate characterization of prediction uncertainty in the form of posterior predictive coverage intervals and standard deviations. Using the framework detailed here, it is possible to quantify estimation uncertainty for any spatial extent, ranging from pixel-level predictions of AGB density to estimates of AGB stocks for the full domain. The lidar-informed coregionalization models consistently outperformed their counterpart lidar-free models in terms of point-level predictive performance and total AGB precision. Additionally, the inclusion of Landsat-derived forest cover as a covariate further improved estimation precision in regions with lower lidar sampling intensity. Our findings also demonstrate that model-based approaches that do not explicitly account for residual spatial dependence can grossly underestimate uncertainty, resulting in falsely precise estimates of AGB. On the other hand, in a geostatistical setting, residual spatial structure can be modeled within a Bayesian hierarchical framework to obtain statistically defensible assessments of uncertainty for AGB estimates.

  2. A generalized one-factor term structure model and pricing of interest rate derivative securities

    NARCIS (Netherlands)

    Jiang, George J.

    1997-01-01

    The purpose of this paper is to propose a nonparametric interest rate term structure model and investigate its implications on term structure dynamics and prices of interest rate derivative securities. The nonparametric spot interest rate process is estimated from the observed short-term interest

  3. Modeling marbled murrelet (Brachyramphus marmoratus) habitat using LiDAR-derived canopy data

    Science.gov (United States)

    Hagar, Joan C.; Eskelson, Bianca N.I.; Haggerty, Patricia K.; Nelson, S. Kim; Vesely, David G.

    2014-01-01

    LiDAR (Light Detection And Ranging) is an emerging remote-sensing tool that can provide fine-scale data describing vertical complexity of vegetation relevant to species that are responsive to forest structure. We used LiDAR data to estimate occupancy probability for the federally threatened marbled murrelet (Brachyramphus marmoratus) in the Oregon Coast Range of the United States. Our goal was to address the need identified in the Recovery Plan for a more accurate estimate of the availability of nesting habitat by developing occupancy maps based on refined measures of nest-strand structure. We used murrelet occupancy data collected by the Bureau of Land Management Coos Bay District, and canopy metrics calculated from discrete return airborne LiDAR data, to fit a logistic regression model predicting the probability of occupancy. Our final model for stand-level occupancy included distance to coast, and 5 LiDAR-derived variables describing canopy structure. With an area under the curve value (AUC) of 0.74, this model had acceptable discrimination and fair agreement (Cohen's κ = 0.24), especially considering that all sites in our sample were regarded by managers as potential habitat. The LiDAR model provided better discrimination between occupied and unoccupied sites than did a model using variables derived from Gradient Nearest Neighbor maps that were previously reported as important predictors of murrelet occupancy (AUC = 0.64, κ = 0.12). We also evaluated LiDAR metrics at 11 known murrelet nest sites. Two LiDAR-derived variables accurately discriminated nest sites from random sites (average AUC = 0.91). LiDAR provided a means of quantifying 3-dimensional canopy structure with variables that are ecologically relevant to murrelet nesting habitat, and have not been as accurately quantified by other mensuration methods.
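    The AUC values reported above (0.74 for the stand-level model, 0.91 for nest-site discrimination) can be computed from model scores with the Mann-Whitney formulation: the AUC is the probability that a randomly chosen occupied site outscores a randomly chosen unoccupied one (ties counting half). A generic sketch, not the study's code:

    ```python
    import numpy as np

    def auc_mann_whitney(labels, scores):
        """AUC via pairwise comparison of positive and negative scores."""
        labels = np.asarray(labels)
        scores = np.asarray(scores, dtype=float)
        pos = scores[labels == 1]          # e.g. occupied sites
        neg = scores[labels == 0]          # e.g. unoccupied sites
        diffs = pos[:, None] - neg[None, :]
        wins = np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)
        return float(wins / (len(pos) * len(neg)))
    ```

    An AUC of 0.5 means the scores discriminate no better than chance; 1.0 means perfect separation of occupied from unoccupied sites.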

  4. Estimation in a multiplicative mixed model involving a genetic relationship matrix

    Directory of Open Access Journals (Sweden)

    Eccleston John A

    2009-04-01

    Full Text Available Abstract Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A and a complex structure for the genotype-by-environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrix estimates. Estimation methods for reduced-rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of sparse matrix methodology and the average information algorithm. Here, we present an extension of the standard formulation necessary for estimation with a factor analytic structure across multiple environments.

  5. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
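    Latin Hypercube Sampling, as used in GRAPE to place parameter samples while keeping the model count low, draws exactly one sample from each equal-probability stratum per dimension and randomly pairs the strata across dimensions. A minimal sketch (function name and interface are mine, not from the dissertation):

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """LHS: one point per equal-width stratum in each dimension,
        with strata shuffled independently across dimensions."""
        rng = np.random.default_rng(rng)
        bounds = np.asarray(bounds, dtype=float)   # shape (d, 2): [low, high]
        d = len(bounds)
        # Stratified uniform draws on [0, 1): stratum k contributes one point.
        u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(d):                         # decorrelate the dimensions
            rng.shuffle(u[:, j])
        return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])
    ```

    Unlike a regular grid, adding a parameter dimension leaves the sample count unchanged, which matches the abstract's point that LHS scales well as parameter dimensions grow.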

  6. Applying the Health Belief Model in Explaining the Stages of Exercise Change in Older Adults

    Directory of Open Access Journals (Sweden)

    Sas-Nowosielski Krzysztof

    2016-12-01

    Full Text Available Introduction. The benefits of physical activity (PA) have been so well documented that there is no doubt about the significance of PA for personal and social health. Several theoretical models have been proposed with a view to understanding the phenomenon of PA and other health behaviours. The purpose of this study was to evaluate whether and how the variables suggested in the Health Belief Model (HBM) determine the physical activity stages of change in older adults. Material and methods. A total of 172 students of Universities of the Third Age aged 54 to 75 (mean = 62.89 ± 4.83) years agreed to participate in the study, filling out an anonymous survey measuring their stage of exercise change and the determinants of health behaviours proposed by the HBM, including perceived benefits of physical activity, perceived barriers to physical activity, perceived severity of diseases associated with a sedentary lifestyle, perceived susceptibility to these diseases, and self-efficacy. Results. The results only partially support the hypothesis that the HBM predicts intentions and behaviours related to the physical activity of older adults. Only two variables were moderately-to-strongly related to the stages of exercise change, namely perceived barriers and self-efficacy. Conclusion. Interventions aimed at informing older adults about the benefits of physical activity and the threats associated with a sedentary lifestyle can be expected to have a rather weak influence on their readiness for physical activity.

  7. HMM filtering and parameter estimation of an electricity spot price model

    International Nuclear Information System (INIS)

    Erlwein, Christina; Benth, Fred Espen; Mamon, Rogemar

    2010-01-01

    In this paper we develop a model for electricity spot price dynamics. The spot price is assumed to follow an exponential Ornstein-Uhlenbeck (OU) process with an added compound Poisson process. In this way, the model allows for mean reversion and possible jumps. All parameters are modulated by a hidden Markov chain in discrete time, and are thus able to switch between different economic regimes representing the interaction of various factors. Through the application of the reference probability technique, adaptive filters are derived, which in turn provide optimal estimates for the state of the Markov chain and related quantities of the observation process. The EM algorithm is applied to find optimal estimates of the model parameters in terms of the recursive filters. We implement this self-calibrating model on a deseasonalised series of daily spot electricity prices from the Nordic exchange Nord Pool. On the basis of one-step-ahead forecasts, we find that the model is able to capture the empirical characteristics of Nord Pool spot prices. (author)
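    The price process itself is easy to simulate. The sketch below is an Euler scheme for an exponential OU process with compound Poisson jumps; it omits the paper's hidden-Markov regime switching (all parameters are held fixed), and every parameter value here is an illustrative assumption, not calibrated to Nord Pool data.

    ```python
    import numpy as np

    def simulate_spot(n_steps, dt=1 / 365, x0=np.log(30.0), kappa=5.0,
                      mu=np.log(30.0), sigma=0.5, jump_rate=10.0,
                      jump_scale=0.3, rng=None):
        """Euler scheme for dX = kappa*(mu - X) dt + sigma dW + dJ,
        where S = exp(X) is the spot price and J is a compound Poisson
        process with N(0, jump_scale^2) jump sizes."""
        rng = np.random.default_rng(rng)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for t in range(n_steps):
            n_jumps = rng.poisson(jump_rate * dt)
            dj = rng.normal(0.0, jump_scale, n_jumps).sum() if n_jumps else 0.0
            x[t + 1] = (x[t] + kappa * (mu - x[t]) * dt
                        + sigma * np.sqrt(dt) * rng.normal() + dj)
        return np.exp(x)
    ```

    Mean reversion pulls the log-price back toward mu after each jump, which is the qualitative behaviour the model is designed to capture in electricity markets.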

  8. A neural network model for estimating soil phosphorus using terrain analysis

    Directory of Open Access Journals (Sweden)

    Ali Keshavarzi

    2015-12-01

    Full Text Available An artificial neural network (ANN) model was developed and tested for estimating soil phosphorus (P) in the Kouhin watershed area (1000 ha, Qazvin province, Iran) using terrain analysis. Based on the correlation of soil distribution and the vegetation growth pattern across the topographically heterogeneous landscape, topographic and vegetation attributes were used, in addition to pedologic information, to develop the ANN model for estimating soil phosphorus in the area. In total, 85 samples were collected and tested for phosphorus content, and the corresponding attributes were estimated from the digital elevation model (DEM). To develop the pedo-transfer functions, data linearity was checked and correlations examined; 80% of the collected data were used for modeling and the remaining 20% for testing the ANN. Results indicate that 68% of the variation in soil phosphorus could be explained by elevation and Band 1 data, and a significant correlation was observed between the input variables and phosphorus content. The significant correlation between soil P and terrain attributes can be used to derive a pedo-transfer function for soil P estimation to manage nutrient deficiency. Results showed that P values can be estimated more accurately with the ANN-based pedo-transfer function using the topographic variables together with Band 1 data.

  9. Determining the Uncertainties in Prescribed Burn Emissions Through Comparison of Satellite Estimates to Ground-based Estimates and Air Quality Model Evaluations in Southeastern US

    Science.gov (United States)

    Odman, M. T.; Hu, Y.; Russell, A. G.

    2016-12-01

    Prescribed burning is practiced throughout the US, and most widely in the Southeast, for the purpose of maintaining and improving the ecosystem, and reducing the wildfire risk. However, prescribed burn emissions contribute significantly to the trace gas and particulate matter loads in the atmosphere. In places where air quality is already stressed by other anthropogenic emissions, prescribed burns can lead to major health and environmental problems. Air quality modeling efforts are under way to assess the impacts of prescribed burn emissions. Operational forecasts of the impacts are also emerging for use in dynamic management of air quality as well as the burns. Unfortunately, large uncertainties exist in the process of estimating prescribed burn emissions, and these uncertainties limit the accuracy of the burn impact predictions. Prescribed burn emissions are estimated by using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels, their consumption amounts, and the progression of the fire, ground-based estimates are more accurate. In the absence of such information, satellites remain the only reliable source for emission estimation. To determine the level of uncertainty in prescribed burn emissions, we compared estimates derived from a burn permit database and other ground-based information to the estimates of the Biomass Burning Emissions Product derived from a constellation of NOAA and NASA satellites. Using these emission estimates, we conducted simulations with the Community Multiscale Air Quality (CMAQ) model and predicted trace gas and particulate matter concentrations throughout the Southeast for two consecutive burn seasons (2015 and 2016). In this presentation, we will compare model-predicted concentrations to measurements at monitoring stations and evaluate whether the differences are commensurate with our emission uncertainty estimates. We will also investigate if

  10. Testicular Self-Examination: A Test of the Health Belief Model and the Theory of Planned Behaviour

    Science.gov (United States)

    McClenahan, Carol; Shevlin, Mark; Adamson, Gary; Bennett, Cara; O'Neill, Brenda

    2007-01-01

    The aim of this study was to test the utility and efficiency of the theory of planned behaviour (TPB) and the health belief model (HBM) in predicting testicular self-examination (TSE) behaviour. A questionnaire was administered to an opportunistic sample of 195 undergraduates aged 18-39 years. Structural equation modelling indicated that, on the…

  11. Evaluation of 6 and 10 Year-Old Child Human Body Models in Emergency Events.

    Science.gov (United States)

    Gras, Laure-Lise; Stockman, Isabelle; Brolin, Karin

    2017-01-01

    Emergency events can influence a child's kinematics prior to a car crash, and thus its interaction with the restraint system. Numerical Human Body Models (HBMs) can help in understanding the behaviour of children in emergency events. The kinematic responses of two child HBMs, the MADYMO 6 and 10 year-old models, were evaluated and compared with child volunteer data during emergency events (braking and steering), with a focus on forehead and sternum displacements. The response of the 6 year-old HBM was similar to that of the 10 year-old HBM; however, both models responded differently from the volunteers. The forward and lateral displacements were within the range of the volunteer data up to approximately 0.3 s, but thereafter the HBMs' head and sternum moved significantly downwards, while the volunteers experienced smaller displacements and tended to return to their initial posture. Therefore, these HBMs, originally intended for crash simulations, are not too stiff and could properly reproduce emergency events, for instance with the addition of postural control.

  12. An Online SOC and SOH Estimation Model for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Shyh-Chin Huang

    2017-04-01

    Full Text Available The monitoring and prognosis of cell degradation in lithium-ion (Li-ion) batteries are essential for assuring the reliability and safety of electric and hybrid vehicles. This paper aims to develop a reliable and accurate model for online, simultaneous state-of-charge (SOC) and state-of-health (SOH) estimation of Li-ion batteries. Through the analysis of battery cycle-life test data, the instantaneous discharging voltage (V) and its unit-time voltage drop, V′, are proposed as the model parameters for the SOC equation. The SOH equation is found to have a linear relationship with 1/V′ times a modification factor, which is a function of SOC. Four batteries were tested in the laboratory, and the data were regressed for the model coefficients. The results show that the model built upon the data from one single cell is able to estimate the SOC and SOH of the three other cells within a 5% error bound. The derived model is also shown to be robust. A random sampling test simulating online real-time SOC and SOH estimation proves that the model is accurate and can potentially be used in an electric vehicle battery management system (BMS).

  13. Principles of parametric estimation in modeling language competition.

    Science.gov (United States)

    Zhang, Menghan; Gong, Tao

    2013-06-11

    It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
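    The underlying Lotka-Volterra competition dynamics can be sketched with a simple Euler integration; the coefficients below (growth rates, competition impacts, carrying capacities) are illustrative placeholders, not values estimated from any census or survey data.

    ```python
    def compete(x0, y0, rx, ry, cx, cy, Kx=1.0, Ky=1.0, dt=0.01, steps=5000):
        """Euler integration of a two-language Lotka-Volterra competition model:
            dx/dt = rx * x * (1 - (x + cx*y) / Kx)
            dy/dt = ry * y * (1 - (y + cy*x) / Ky)
        where x, y are speaker fractions, rx, ry growth rates, and cx, cy
        the impact of the competing language."""
        x, y = x0, y0
        for _ in range(steps):
            dx = rx * x * (1 - (x + cx * y) / Kx)
            dy = ry * y * (1 - (y + cy * x) / Ky)
            x, y = x + dx * dt, y + dy * dt
        return x, y
    ```

    With asymmetric impacts (cx < 1 < cy) the model exhibits competitive exclusion: the language with the weaker incoming impact approaches its carrying capacity while the other declines toward zero, which is the kind of dynamics the parameter-estimation principles in the abstract are meant to calibrate against real census data.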

  14. Modeling extracellular electrical stimulation: I. Derivation and interpretation of neurite equations.

    Science.gov (United States)

    Meffin, Hamish; Tahayori, Bahman; Grayden, David B; Burkitt, Anthony N

    2012-12-01

    Neuroprosthetic devices, such as cochlear and retinal implants, work by directly stimulating neurons with extracellular electrodes. This is commonly modeled using the cable equation with an applied extracellular voltage. In this paper a framework for modeling extracellular electrical stimulation is presented. To this end, a cylindrical neurite with a confined extracellular space in the subthreshold regime is modeled in three-dimensional space. Through cylindrical harmonic expansion of Laplace's equation, we derive the spatio-temporal equations governing different modes of stimulation, referred to as longitudinal and transverse modes, under two types of boundary conditions. The longitudinal mode is described by the well-known cable equation, whereas the transverse modes are described by a novel ordinary differential equation. For the longitudinal mode, we find that different electrotonic length constants apply under the two different boundary conditions. Equations connecting current density to voltage boundary conditions are derived and used to calculate the trans-impedance of the neurite plus thin extracellular sheath. A detailed explanation of depolarization mechanisms and the dominant current pathway under different modes of stimulation is provided. The analytic results derived here enable the estimation of a neurite's membrane potential under extracellular stimulation, hence bypassing the heavy computational cost of numerical methods.
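    For reference, the textbook form of the subthreshold cable equation with an extracellular drive (the "activating function" formulation; a standard form, not necessarily the exact equation derived in this paper) is:

    ```latex
    \lambda^2 \frac{\partial^2 V_m}{\partial x^2}
      - \tau_m \frac{\partial V_m}{\partial t} - V_m
      = \lambda^2 \frac{\partial^2 V_e}{\partial x^2},
    \qquad
    \lambda = \sqrt{\frac{r_m}{r_i}}, \quad \tau_m = r_m c_m
    ```

    Here V_m is the membrane potential, V_e the applied extracellular potential, r_m the membrane resistance times unit length, r_i the intracellular axial resistance per unit length, and c_m the membrane capacitance per unit length. The second spatial derivative of V_e on the right-hand side acts as the source term driving depolarization, which is why the electrotonic length constant λ governs the spatial spread of the response.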

  15. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    Science.gov (United States)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of large amounts of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple-maneuver analysis also proved useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  16. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    Science.gov (United States)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  17. Projected metastable Markov processes and their estimation with observable operator models

    International Nuclear Information System (INIS)

    Wu, Hao; Prinz, Jan-Hendrik; Noé, Frank

    2015-01-01

    The determination of the kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem, both in simulation, where the optimal reaction coordinates are generally unknown and difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning

  18. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717), to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717).
Among the three meta-heuristics, differential evolution (DE performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  19. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    Science.gov (United States)

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. 
These results hold for both real and artificial pseudo-experimental data.
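
    The meta-heuristic approach described above can be sketched in miniature. The following is a hedged illustration, not the paper's endocytosis model: differential evolution (DE/rand/1 mutation with greedy selection) fits the single decay-rate parameter of a toy ODE, simulated by forward Euler; the ODE, bounds, and DE settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(k, t):
        # Forward-Euler integration of dx/dt = -k * x, x(0) = 1
        x = np.empty_like(t)
        x[0] = 1.0
        for i in range(1, len(t)):
            x[i] = x[i - 1] + (t[i] - t[i - 1]) * (-k * x[i - 1])
        return x

    t = np.linspace(0.0, 2.0, 41)
    k_true = 1.3
    data = simulate(k_true, t) + rng.normal(0.0, 0.01, t.size)  # noisy observations

    def sse(k):
        return np.sum((simulate(k, t) - data) ** 2)

    # Differential evolution on the single parameter k, searched in [0, 5];
    # binomial crossover is trivial for one parameter and is omitted
    NP, F = 15, 0.7
    pop = rng.uniform(0.0, 5.0, NP)
    cost = np.array([sse(k) for k in pop])
    for _ in range(60):
        for i in range(NP):
            a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            trial = np.clip(pop[a] + F * (pop[b] - pop[c]), 0.0, 5.0)
            tc = sse(trial)
            if tc < cost[i]:           # greedy selection
                pop[i], cost[i] = trial, tc

    k_hat = pop[np.argmin(cost)]
    ```

    With the data generated by the same simulator, the DE population converges tightly around the true rate.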

  20. The estimation of time-varying risks in asset pricing modelling using B-Spline method

    Science.gov (United States)

    Nurjannah; Solimun; Rinaldo, Adji

    2017-12-01

Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature has typically assumed a static risk-return relationship. However, several studies found anomalies in asset pricing modelling which captured the presence of risk instability, and dynamic models have been proposed to offer a better fit. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable, so the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can be used as an alternative for estimating risk; the flexibility of the nonparametric setting avoids the misspecification that can result from selecting a functional form. This paper investigates the estimation of time-varying asset pricing models using B-splines, a nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model, and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in the Carhart 4-factor model.
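
    The idea of a time-varying risk coefficient estimated on a spline basis can be sketched as follows. This is a hedged, synthetic illustration of the general technique, not the authors' estimator: a truncated-power quadratic spline basis stands in for a full B-spline basis, and a single-factor (CAPM-like) model with a known time-varying beta is simulated and recovered by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 400
    t = np.linspace(0.0, 1.0, n)
    f = rng.normal(0.0, 1.0, n)                      # factor (market excess return)
    beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * t)    # time-varying risk
    r = beta_true * f + rng.normal(0.0, 0.05, n)     # asset excess return

    # Quadratic truncated-power spline basis for beta(t)
    knots = np.linspace(0.1, 0.9, 7)
    B = np.column_stack([np.ones(n), t, t**2] +
                        [np.clip(t - k, 0.0, None) ** 2 for k in knots])

    # beta(t) = B @ c  =>  r ~ (B * f) @ c, solved by ordinary least squares
    X = B * f[:, None]
    c, *_ = np.linalg.lstsq(X, r, rcond=None)
    beta_hat = B @ c                                  # estimated time-varying risk
    ```

    The fitted coefficient path tracks the true time-varying beta, illustrating why a static-beta fit would mask the instability.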

  1. Nonparametric estimation of transition probabilities in the non-Markov illness-death model: A comparative study.

    Science.gov (United States)

    de Uña-Álvarez, Jacobo; Meira-Machado, Luís

    2015-06-01

Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have been traditionally estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contains the support of the lifetime distribution, which is not often the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed. © 2015, The International Biometric Society.
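
    The Aalen-Johansen estimator referenced above can be illustrated in its simplest discrete-time, fully observed form (no censoring): the product of empirically estimated one-step transition matrices, which is consistent precisely when the process is Markov. This is a hedged sketch of the baseline estimator the article compares against, not of the authors' new non-Markov estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illness-death chain: 0 = healthy, 1 = ill, 2 = dead (absorbing)
    P = np.array([[0.92, 0.05, 0.03],
                  [0.00, 0.85, 0.15],
                  [0.00, 0.00, 1.00]])
    n_subj, T = 1000, 8
    paths = np.zeros((n_subj, T + 1), dtype=int)
    for s in range(T):
        for i in range(n_subj):
            paths[i, s + 1] = rng.choice(3, p=P[paths[i, s]])

    # Discrete-time Aalen-Johansen: product of empirical one-step transition
    # matrices (rows with no subjects at risk default to the identity)
    AJ = np.eye(3)
    for s in range(T):
        Phat = np.eye(3)
        for j in range(3):
            idx = paths[:, s] == j
            if idx.any():
                Phat[j] = np.bincount(paths[idx, s + 1], minlength=3) / idx.sum()
        AJ = AJ @ Phat

    p02_aj = AJ[0, 2]                                # P(dead at T | healthy at 0)
    p02_true = np.linalg.matrix_power(P, T)[0, 2]
    ```

    Under the Markov assumption the product-of-matrices estimate converges to the true transition probability; the article's point is that this breaks down when that assumption fails.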

  2. INCLUSION RATIO BASED ESTIMATOR FOR THE MEAN LENGTH OF THE BOOLEAN LINE SEGMENT MODEL WITH AN APPLICATION TO NANOCRYSTALLINE CELLULOSE

    Directory of Open Access Journals (Sweden)

    Mikko Niilo-Rämä

    2014-06-01

A novel estimator for estimating the mean length of fibres is proposed for censored data observed in square-shaped windows. Instead of observing the fibre lengths, we observe the ratio between the intensity estimates of minus-sampling and plus-sampling. It is well known that both intensity estimators are biased. In the current work, we derive the ratio of these biases as a function of the mean length, assuming a Boolean line segment model with exponentially distributed lengths and uniformly distributed directions. Given the observed ratio of the intensity estimators, the inverse of the derived function is suggested as a new estimator for the mean length. For this estimator, an approximation of its variance is derived. The accuracies of the approximations are evaluated by means of simulation experiments. The novel method is compared to other methods and applied to real-world industrial data on nanocrystalline cellulose.

  3. Model for estimating air pollutant uptake by forests: calculation of forest absorption of sulfur dioxide from dispersed sources

    International Nuclear Information System (INIS)

    Murphy, C.E. Jr.; Sinclair, T.R.; Knoerr, K.R.

    1975-01-01

    The computer model presented in this paper is designed to estimate the uptake of air pollutants by forests. The model utilizes submodels to describe atmospheric diffusion immediately above and within the canopy, and into the sink areas within or on the trees. The program implementing the model is general and can be used with only minor changes for any gaseous pollutant. To illustrate the utility of the model, estimates are made of the sink strength of forests for sulfur dioxide. The results agree with experimentally derived estimates of sulfur dioxide uptake in crops and forest trees. (auth)
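
    The kind of layered uptake submodel described above is often summarized as resistances in series (a "big-leaf" analogy): atmospheric, boundary-layer, and canopy resistances add, and the deposition flux is the ambient concentration divided by the total. The numbers below are illustrative assumptions, not values from the paper.

    ```python
    # Big-leaf resistance analogy for gaseous pollutant uptake:
    # deposition flux F = C / (Ra + Rb + Rc)
    Ra, Rb, Rc = 30.0, 40.0, 130.0   # aerodynamic, boundary-layer, canopy (s m^-1)
    C = 50.0                          # ambient SO2 concentration (ug m^-3)

    vd = 1.0 / (Ra + Rb + Rc)         # deposition velocity (m s^-1)
    flux = C * vd                     # uptake flux (ug m^-2 s^-1)
    ```

    With these illustrative resistances the deposition velocity is 0.005 m s⁻¹ and the flux 0.25 µg m⁻² s⁻¹; the model in the abstract refines each resistance with diffusion submodels.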

  4. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  5. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

The 137Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many 137Cs measurements are required and hence estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of 137Cs-derived net soil redistribution to apply their often limited resources across scales of variation (field, catchment, region, etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954–2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, show that for the same resources, estimates from many fields could have been provided and would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate 137Cs-derived net soil redistribution across scales of variation. - Highlights: • The 137Cs technique estimates net time-integrated soil redistribution by all processes. • It is time-consuming and dominated by studies of individual fields. • We use limited resources to estimate soil
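
    The interplay of sampling design and a cost model can be sketched with cost-weighted Neyman allocation: for a fixed budget, allocate samples across strata in proportion to stratum size and variability, down-weighted by per-sample cost. All stratum sizes, standard deviations, and costs below are hypothetical, not the study's values.

    ```python
    import numpy as np

    # Cost-weighted Neyman allocation: n_h proportional to N_h * S_h / sqrt(c_h),
    # scaled so that the total cost equals the budget (hypothetical inputs)
    N = np.array([200.0, 500.0, 300.0])   # stratum sizes
    S = np.array([4.0, 2.0, 6.0])         # within-stratum SD of 137Cs inventory
    c = np.array([16.0, 4.0, 25.0])       # cost per sample (fieldwork + lab)
    budget = 600.0

    num = N * S / np.sqrt(c)
    n = budget * num / np.sum(N * S * np.sqrt(c))   # samples per stratum
    total_cost = np.sum(c * n)                       # equals the budget
    ```

    Cheap, variable strata receive more samples; expensive ones fewer, which is the mechanism by which limited resources can be spread across scales of variation.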

  6. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    Science.gov (United States)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared under steady conditions with a fuel balance and a lambda sensor, and differences below 1% were found.
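
    The mechanism of the abstract, a deterministic autoignition integral made probabilistic by exogenous temperature noise, can be sketched by Monte Carlo. This is a hedged toy: the temperature trace, the Arrhenius constants, and the Livengood-Wu-style threshold are illustrative assumptions, not the paper's calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simplified end-gas temperature trace over normalized time to a reference angle
    theta = np.linspace(0.0, 1.0, 200)
    T_nom = 750.0 + 350.0 * np.sin(np.pi * theta)     # K, illustrative

    def knock_integral(T):
        # Livengood-Wu style integral with an illustrative Arrhenius ignition delay
        tau = 2e-3 * np.exp(6000.0 / T)
        f = 1.0 / tau
        return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # trapezoid rule

    # Exogenous Gaussian noise on the temperature estimate makes knock stochastic:
    # knock is declared when the integral reaches 1 before the reference angle
    def knock_probability(offset, sigma=15.0, n=1000):
        hits = sum(knock_integral(T_nom + offset + rng.normal(0.0, sigma)) >= 1.0
                   for _ in range(n))
        return hits / n

    p_nom = knock_probability(0.0)
    p_hot = knock_probability(25.0)    # hotter charge raises knock probability
    ```

    A single scalar noise level on the temperature reproduces a smooth knock-probability response, which is the one-parameter calibration idea the abstract describes.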

  7. Kriging and local polynomial methods for blending satellite-derived and gauge precipitation estimates to support hydrologic early warning systems

    Science.gov (United States)

    Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William

    2016-01-01

    Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and κ-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
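
    One of the two blending methods above, ordinary kriging, is often applied to the gauge-minus-satellite residuals: krige the residual field at the target location, then add it back to the satellite estimate. The following is a hedged synthetic sketch with an assumed exponential covariance and a constant satellite bias, not the CHIRP blending implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Gauge locations and a target grid cell (hypothetical coordinates)
    xy_g = rng.uniform(0.0, 1.0, (12, 2))
    xy_0 = np.array([0.5, 0.5])

    def truth(xy):                      # "true" precipitation field
        return 10.0 + 5.0 * xy[..., 0]

    sat = lambda xy: truth(xy) - 2.0    # satellite underestimates by 2 everywhere
    resid = truth(xy_g) - sat(xy_g)     # gauge - satellite residuals

    def cov(d, sill=1.0, rng_par=0.3):  # exponential covariance model
        return sill * np.exp(-d / rng_par)

    D = np.linalg.norm(xy_g[:, None] - xy_g[None, :], axis=-1)
    d0 = np.linalg.norm(xy_g - xy_0, axis=-1)

    # Ordinary-kriging system with a Lagrange multiplier (weights sum to 1)
    m = len(xy_g)
    K = np.ones((m + 1, m + 1))
    K[:m, :m] = cov(D)
    K[m, m] = 0.0
    rhs = np.append(cov(d0), 1.0)
    w = np.linalg.solve(K, rhs)[:m]

    blended = sat(xy_0) + w @ resid     # satellite estimate + kriged residual
    ```

    Because the unbiasedness constraint forces the weights to sum to one, a constant satellite bias is removed exactly; with spatially varying residuals the correction is approximate but still pulls the estimate toward the gauges.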

  8. The Effect of Health Education based on Health Belief Model on Preventive Actions of Synthetic Drugs Dependence in Male Students of Kerman, Iran

    Directory of Open Access Journals (Sweden)

    Seyed Saeed Mazloomy Mahmoodabad

    2017-06-01

    Conclusion: Findings indicated that by increase of HBM components' average scores, the average score of synthetic drug dependence preventive actions increased too. Therefore, results of the research confirm the effect and efficiency of HBM in making preventive actions of drug dependence. 

  9. Global NOx emission estimates derived from an assimilation of OMI tropospheric NO2 columns

    Directory of Open Access Journals (Sweden)

    K. Sudo

    2012-03-01

A data assimilation system has been developed to estimate global nitrogen oxides (NOx) emissions using OMI tropospheric NO2 columns (DOMINO product) and a global chemical transport model (CTM), the Chemical Atmospheric GCM for Study of Atmospheric Environment and Radiative Forcing (CHASER). The data assimilation system, based on an ensemble Kalman filter approach, was applied to optimize daily NOx emissions with a horizontal resolution of 2.8° during the years 2005 and 2006. The background error covariance estimated from the ensemble CTM forecasts explicitly represents non-direct relationships between the emissions and tropospheric columns caused by atmospheric transport and chemical processes. In comparison to the a priori emissions based on bottom-up inventories, the optimized emissions were higher over eastern China, the eastern United States, southern Africa, and central-western Europe, suggesting that the anthropogenic emissions are mostly underestimated in the inventories. In addition, the seasonality of the estimated emissions differed from that of the a priori emission over several biomass burning regions, with a large increase over Southeast Asia in April and over South America in October. The data assimilation results were validated against independent data: SCIAMACHY tropospheric NO2 columns and vertical NO2 profiles obtained from aircraft and lidar measurements. The emission correction greatly improved the agreement between the simulated and observed NO2 fields; this implies that the data assimilation system efficiently derives NOx emissions from concentration observations. We also demonstrated that biases in the satellite retrieval and model settings used in the data assimilation largely affect the magnitude of estimated emissions. These dependences should be carefully considered for a better understanding of NOx sources from top-down approaches.
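
    The ensemble Kalman filter analysis step at the heart of such a system can be sketched in one dimension: the state is an emission scaling factor, the "model" maps it linearly to an observed column, and the ensemble statistics supply the gain. All numbers are toy assumptions; the real system uses a full CTM and high-dimensional covariances.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy EnKF analysis: state x = NOx emission factor, observation y = H*x + noise
    Ne = 50
    x_f = rng.normal(1.0, 0.3, Ne)        # forecast ensemble of emission factors
    H = 2.0                                # linearized model: column = H * emission
    sigma_o = 0.05                         # observation error SD
    y_obs = 2.6                            # observed NO2 column (truth would be x = 1.3)

    y_f = H * x_f                          # ensemble in observation space
    P_xy = np.cov(x_f, y_f)[0, 1]          # ensemble cross-covariance
    P_yy = np.var(y_f, ddof=1) + sigma_o**2
    K = P_xy / P_yy                        # Kalman gain from ensemble statistics

    # Perturbed-observation update of each member
    x_a = x_f + K * (y_obs + rng.normal(0.0, sigma_o, Ne) - y_f)
    ```

    The analysis ensemble shifts toward the emission factor implied by the observation and its spread contracts, which is exactly how the column observations constrain daily emissions in the full system.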

  10. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
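
    The LASSO mentioned above can be illustrated with a minimal proximal-gradient (ISTA) solver in the p >> n setting. This is a hedged sketch of the generic estimator, with synthetic data and an arbitrarily chosen penalty level.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # LASSO: minimize ||y - X b||^2 / (2n) + lam * ||b||_1, with p >> n
    n, p = 100, 200
    X = rng.normal(0.0, 1.0, (n, p))
    b_true = np.zeros(p)
    b_true[:3] = [3.0, -2.0, 1.5]                  # sparse ground truth
    y = X @ b_true + rng.normal(0.0, 0.1, n)

    lam = 0.1
    L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant of gradient
    b = np.zeros(p)
    for _ in range(500):                           # ISTA iterations
        g = X.T @ (X @ b - y) / n                  # gradient of smooth part
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

    support = np.flatnonzero(np.abs(b) > 0.5)      # recovered sparse support
    ```

    Despite having twice as many variables as samples, the l1 penalty recovers the three active coefficients, the sparse-model behavior the talk focuses on.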

  11. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase. We can transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation to the seismic signal, while the other two parameters are obtained by the Higher-order Statistics (HOS (fourth-order cumulant matching method. In order to derive the estimator of the Higher-order Statistics (HOS, the multivariate scale mixture of Gaussians (MSMG model is applied to formulating the multivariate joint probability density function (PDF of the seismic signal. By this way, we can represent HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method can work well for short time series.

  12. SU-E-J-01: 3D Fluoroscopic Image Estimation From Patient-Specific 4DCBCT-Based Motion Models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Lewis, J; Mishra, P

    2014-01-01

Purpose: 3D motion modeling derived from 4DCT images, taken days or weeks before treatment, cannot reliably represent patient anatomy on the day of treatment. We develop a method to generate motion models based on 4DCBCT acquired at the time of treatment, and apply the model to estimate 3D time-varying images (referred to as 3D fluoroscopic images). Methods: Motion models are derived through deformable registration between each 4DCBCT phase, and principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated based on cone-beam projections simulating kV treatment imaging. PCA coefficients are optimized iteratively through comparison of these cone-beam projections and projections estimated based on the motion model. Digital phantoms reproducing ten patient motion trajectories, and a physical phantom with regular and irregular motion derived from measured patient trajectories, are used to evaluate the method in terms of tumor localization, and the global voxel intensity difference compared to ground truth. Results: Experiments included: 1) assuming no anatomic or positioning changes between 4DCT and treatment time; and 2) simulating positioning and tumor baseline shifts at the time of treatment compared to 4DCT acquisition. 4DCBCT were reconstructed from the anatomy as seen at treatment time. In case 1) the tumor localization errors and the intensity differences in the ten cases were smaller using the 4DCT-based motion model, possibly due to superior image quality. In case 2) the tumor localization error and intensity differences were 2.85 and 0.15, respectively, using 4DCT-based motion models, and 1.17 and 0.10 using 4DCBCT-based models. 4DCBCT performed better due to its ability to reproduce daily anatomical changes. Conclusion: The study showed an advantage of 4DCBCT-based motion models in the context of 3D fluoroscopic image estimation. Positioning and tumor baseline shift uncertainties were mitigated by the 4DCBCT-based motion models.
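
    The PCA motion-model step can be sketched with synthetic displacement vector fields: stack the per-phase fields as rows, mean-center, take the SVD, and keep the leading components as the motion model. This is a hedged toy with one dominant breathing mode, not the registration-derived fields of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic displacement vector fields (DVFs) for 10 breathing phases,
    # flattened to rows: one dominant motion mode plus small noise
    n_phase, n_vox = 10, 300
    phase = np.sin(2 * np.pi * np.arange(n_phase) / n_phase)
    mode = rng.normal(0.0, 1.0, n_vox)
    DVF = np.outer(phase, mode) + 0.01 * rng.normal(0.0, 1.0, (n_phase, n_vox))

    # PCA via SVD of the mean-centered fields; keep the leading k components
    mean = DVF.mean(axis=0)
    U, s, Vt = np.linalg.svd(DVF - mean, full_matrices=False)
    k = 2
    recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k motion model
    err = np.linalg.norm(recon - DVF) / np.linalg.norm(DVF)
    ```

    A handful of PCA coefficients captures almost all of the motion, which is why optimizing only those coefficients against measured cone-beam projections is tractable.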

  13. Estimating the effects of 17α-ethinylestradiol on stochastic population growth rate of fathead minnows: a population synthesis of empirically derived vital rates

    Science.gov (United States)

    Schwindt, Adam R.; Winkelman, Dana L.

    2016-01-01

Urban freshwater streams in arid climates are wastewater effluent dominated ecosystems particularly impacted by bioactive chemicals including steroid estrogens that disrupt vertebrate reproduction. However, more understanding of the population and ecological consequences of exposure to wastewater effluent is needed. We used empirically derived vital rate estimates from a mesocosm study to develop a stochastic stage-structured population model and evaluated the effect of 17α-ethinylestradiol (EE2), the estrogen in human contraceptive pills, on fathead minnow Pimephales promelas stochastic population growth rate. Tested EE2 concentrations ranged from 3.2 to 10.9 ng L⁻¹ and produced stochastic population growth rates (λS) below 1 at the lowest concentration, indicating potential for population decline. Declines in λS compared to controls were evident in treatments that were lethal to adult males despite statistically insignificant effects on egg production and juvenile recruitment. In fact, results indicated that λS was most sensitive to the survival of juveniles and female egg production. More broadly, our results document that population model results may differ even when empirically derived estimates of vital rates are similar among experimental treatments, and demonstrate how population models integrate and project the effects of stressors throughout the life cycle. Thus, stochastic population models can more effectively evaluate the ecological consequences of experimentally derived vital rates.
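
    A stochastic stage-structured model of this kind can be sketched with a two-stage projection matrix whose vital rates are drawn randomly each time step; the stochastic growth rate λS is the exponential of the long-run average log growth. The stage structure and all rate distributions below are hypothetical, not the mesocosm-derived values.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Two-stage (juvenile, adult) projection matrix with stochastic vital rates:
    # fecundity f, juvenile survival sj, adult survival sa (hypothetical values)
    def matrix():
        f = rng.lognormal(np.log(1.2), 0.2)
        sj = np.clip(rng.normal(0.5, 0.05), 0.0, 1.0)
        sa = np.clip(rng.normal(0.6, 0.05), 0.0, 1.0)
        return np.array([[0.0, f],
                         [sj,  sa]])

    T = 2000
    n = np.array([10.0, 10.0])
    logs = []
    for _ in range(T):
        n = matrix() @ n
        tot = n.sum()
        logs.append(np.log(tot))
        n /= tot                      # renormalize to avoid overflow

    lam_s = np.exp(np.mean(logs))     # stochastic population growth rate
    ```

    A stressor is evaluated by perturbing one vital rate (e.g., lowering adult survival to mimic EE2 lethality to males) and observing whether λS drops below 1, the decline threshold used in the abstract.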

  14. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the model error and the SOC estimation error are studied in terms of the standard deviation and normalized RMSE; these quantities exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet SOC precision requirements when using a Kalman filter.

  15. Mathematical modeling of tetrahydroimidazole benzodiazepine-1-one derivatives as an anti HIV agent

    Science.gov (United States)

    Ojha, Lokendra Kumar

    2017-07-01

The goal of the present work is the study of drug receptor interaction via QSAR (Quantitative Structure-Activity Relationship) analysis for a set of 89 TIBO (Tetrahydroimidazole Benzodiazepine-1-one) derivatives. The MLR (Multiple Linear Regression) method is utilized to generate predictive models of quantitative structure-activity relationships between a set of molecular descriptors and biological activity (IC50). The best QSAR model was selected, having a correlation coefficient (r) of 0.9299, a Standard Error of Estimation (SEE) of 0.5022, a Fisher Ratio (F) of 159.822, and a Quality factor (Q) of 1.852. This model is statistically significant and strongly favours substitution of a sulphur atom, via IS, i.e., the indicator parameter for the -Z position of the TIBO derivatives. Two other parameters, logP (octanol-water partition coefficient) and SAG (Surface Area Grid), also played a vital role in the generation of the best QSAR model. All three descriptors show very good stability towards data variation in leave-one-out (LOO) cross-validation.
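
    The MLR-with-LOO workflow can be sketched on synthetic data: fit activity on a few descriptors by least squares, then compute the leave-one-out cross-validated q² from the PRESS statistic. The descriptor distributions and coefficients below are invented stand-ins for logP, SAG, and an indicator variable, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic QSAR table: 89 compounds, three descriptors plus an intercept
    n = 89
    X = np.column_stack([np.ones(n),
                         rng.normal(3.0, 1.0, n),      # logP-like descriptor
                         rng.normal(400.0, 50.0, n),   # SAG-like descriptor
                         rng.integers(0, 2, n)])       # indicator (e.g., S at -Z)
    coef_true = np.array([1.0, 0.5, 0.004, 0.8])
    y = X @ coef_true + rng.normal(0.0, 0.2, n)        # pIC50-like activity

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # MLR fit

    # Leave-one-out cross-validation: PRESS and q^2
    press = 0.0
    for i in range(n):
        m = np.ones(n, dtype=bool)
        m[i] = False
        bi, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
        press += (y[i] - X[i] @ bi) ** 2
    q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
    ```

    A q² close to the fitted r² indicates the coefficients are stable under data variation, the LOO stability claim made for the three descriptors.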

  16. Differences in Gaussian diffusion tensor imaging and non-Gaussian diffusion kurtosis imaging model-based estimates of diffusion tensor invariants in the human brain.

    Science.gov (United States)

    Lanzafame, S; Giannelli, M; Garaci, F; Floris, R; Duggento, A; Guerrisi, M; Toschi, N

    2016-05-01

An increasing number of studies have aimed to compare diffusion tensor imaging (DTI)-related parameters [e.g., mean diffusivity (MD), fractional anisotropy (FA), radial diffusivity (RD), and axial diffusivity (AD)] to complementary new indexes [e.g., mean kurtosis (MK)/radial kurtosis (RK)/axial kurtosis (AK)] derived through diffusion kurtosis imaging (DKI) in terms of their discriminative potential about tissue disease-related microstructural alterations. Given that the DTI and DKI models provide conceptually and quantitatively different estimates of the diffusion tensor, which can also depend on fitting routine, the aim of this study was to investigate model- and algorithm-dependent differences in MD/FA/RD/AD and anisotropy mode (MO) estimates in diffusion-weighted imaging of human brain white matter. The authors employed (a) data collected from 33 healthy subjects (20-59 yr, F: 15, M: 18) within the Human Connectome Project (HCP) on a customized 3 T scanner, and (b) data from 34 healthy subjects (26-61 yr, F: 5, M: 29) acquired on a clinical 3 T scanner. The DTI model was fitted to b-value = 0 and b-value = 1000 s/mm² data while the DKI model was fitted to data comprising b-value = 0, 1000, and 3000/2500 s/mm² [for dataset (a)/(b), respectively] through nonlinear and weighted linear least squares algorithms. In addition to MK/RK/AK maps, MD/FA/MO/RD/AD maps were estimated from both models and both algorithms. Using tract-based spatial statistics, the authors tested the null hypothesis of zero difference between the two MD/FA/MO/RD/AD estimates in brain white matter for both datasets and both algorithms. DKI-derived MD/FA/RD/AD and MO estimates were significantly higher and lower, respectively, than corresponding DTI-derived estimates. All voxelwise differences extended over most of the white matter skeleton.
Fractional differences between the two estimates [(DKI - DTI)/DTI] of most invariants were seen to vary with the invariant value itself as well as with MK
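
    Why the two models disagree can be seen along a single diffusion direction: the signal follows ln S(b) = ln S0 − bD + (1/6)b²D²K, so a mono-exponential DTI fit to the b = 0 and 1000 s/mm² shells absorbs part of the kurtosis term and underestimates D whenever K > 0. The following is a hedged one-direction sketch with assumed D and K values, not the full tensor fitting of the study.

    ```python
    import numpy as np

    # Signal along one direction: ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K
    D_true, K_true, S0 = 1.0e-3, 0.8, 1.0          # D in mm^2/s, K dimensionless
    b = np.array([0.0, 1000.0, 3000.0])            # b-values in s/mm^2
    S = S0 * np.exp(-b * D_true + (1.0 / 6.0) * b**2 * D_true**2 * K_true)

    # DTI: mono-exponential fit using only b = 0 and 1000
    D_dti = (np.log(S[0]) - np.log(S[1])) / 1000.0

    # DKI: solve the quadratic-in-b model exactly from all three shells,
    # with unknowns D and Q = (1/6) D^2 K (linear system in ln S)
    A = np.array([[-1000.0, 1000.0**2],
                  [-3000.0, 3000.0**2]])
    rhs = np.log(S[1:]) - np.log(S[0])
    D_dki, Q = np.linalg.solve(A, rhs)
    K_dki = 6.0 * Q / D_dki**2
    ```

    With positive kurtosis the DKI fit recovers the true diffusivity while the DTI fit returns a lower value, consistent with the direction of the diffusivity differences reported above.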

  17. Improving evapotranspiration in a land surface model using biophysical variables derived from MSG/SEVIRI satellite

    Directory of Open Access Journals (Sweden)

    N. Ghilain

    2012-08-01

Monitoring evapotranspiration over land is highly dependent on the surface state and vegetation dynamics. Data from spaceborne platforms are desirable to complement estimations from land surface models. The success of daily evapotranspiration monitoring at continental scale relies on the availability, quality and continuity of such data. The biophysical variables derived from SEVIRI on board the geostationary satellite Meteosat Second Generation (MSG) and distributed by the Satellite Application Facility on Land surface Analysis (LSA-SAF) are particularly interesting for such applications, as they aim at providing continuous and consistent daily time series in near-real time over Africa, Europe and South America. In this paper, we compare them to monthly vegetation parameters from a database commonly used in numerical weather predictions (ECOCLIMAP-I), showing the benefits of the new daily products in detecting the spatial and temporal (seasonal and inter-annual) variability of the vegetation, especially relevant over Africa. We propose a method to handle Leaf Area Index (LAI) and Fractional Vegetation Cover (FVC) products for evapotranspiration monitoring with a land surface model at 3–5 km spatial resolution. The method is conceived to be applicable for near-real time processes at continental scale and relies on the use of a land cover map. We assess the impact of using LSA-SAF biophysical variables compared to ECOCLIMAP-I on evapotranspiration estimated by the land surface model H-TESSEL. Comparison with in-situ observations in Europe and Africa shows an improved estimation of the evapotranspiration, especially in semi-arid climates. Finally, the impact on the land surface modelled evapotranspiration is compared over a north–south transect with a large gradient of vegetation and climate in Western Africa using LSA-SAF radiation forcing derived from remote sensing. Differences are highlighted. An evaluation against remote sensing derived land

  18. Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions

    Directory of Open Access Journals (Sweden)

    uma srivastava

    2012-01-01

The paper deals with estimating the shift point which occurs in a sequence of independent observations of a Poisson model in statistical process control. This shift point occurs in the sequence when m life data are observed. Bayes estimators of the shift point 'm' and of the before- and after-shift process means are derived for symmetric and asymmetric loss functions under informative and non-informative priors. The sensitivity analysis of the Bayes estimators is carried out by simulation, with numerical comparisons performed in R. The results show the effectiveness of the estimators for detecting a shift in a sequence of Poisson observations.
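
    The Bayesian shift-point machinery can be sketched with conjugate Gamma priors on the two Poisson rates: for each candidate shift point m, the marginal likelihood of each segment has a closed form, and a uniform prior over m yields a discrete posterior. The rates, priors, and loss function (MAP, i.e., 0-1 loss) below are illustrative choices, not the paper's.

    ```python
    import numpy as np
    from math import lgamma, log

    rng = np.random.default_rng(10)

    # Sequence of Poisson counts with a shift at m = 30 (rate 2 -> rate 5)
    n, m_true = 60, 30
    x = np.concatenate([rng.poisson(2.0, m_true), rng.poisson(5.0, n - m_true)])

    a, b = 1.0, 1.0        # Gamma(a, b) prior on each segment's rate

    def log_marginal(seg):
        # log of the Poisson likelihood integrated against the Gamma prior
        s, k = seg.sum(), len(seg)
        return (a * log(b) + lgamma(a + s) - lgamma(a)
                - (a + s) * log(b + k)
                - sum(lgamma(xi + 1) for xi in seg))

    # Uniform prior over the shift point: posterior over m = 1, ..., n-1
    logpost = np.array([log_marginal(x[:m]) + log_marginal(x[m:])
                        for m in range(1, n)])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()
    m_hat = 1 + int(np.argmax(post))       # MAP estimate of the shift point
    ```

    Replacing the argmax with the posterior mean or a posterior quantile gives the estimators for squared-error or asymmetric loss functions, which is the comparison the paper carries out.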

  19. Inflationary models with non-minimally derivative coupling

    International Nuclear Information System (INIS)

    Yang, Nan; Fei, Qin; Gong, Yungui; Gao, Qing

    2016-01-01

    We derive the general formulae for the scalar and tensor spectral tilts to the second order for the inflationary models with non-minimally derivative coupling without taking the high friction limit. The non-minimally kinetic coupling to Einstein tensor brings the energy scale in the inflationary models down to be sub-Planckian. In the high friction limit, the Lyth bound is modified with an extra suppression factor, so that the field excursion of the inflaton is sub-Planckian. The inflationary models with non-minimally derivative coupling are more consistent with observations in the high friction limit. In particular, with the help of the non-minimally derivative coupling, the quartic power law potential is consistent with the observational constraint at 95% CL. (paper)

  20. Analyzing dynamic fault trees derived from model-based system architectures

    International Nuclear Information System (INIS)

    Dehlinger, Josh; Dugan, Joanne Bechta

    2008-01-01

Dependability-critical systems, such as digital instrumentation and control systems in nuclear power plants, necessitate engineering techniques and tools to provide assurances of their safety and reliability. Determining system reliability at the architectural design phase is important since it may guide design decisions and provide crucial information for trade-off analysis and estimating system cost. Despite this, reliability and system engineering remain separate disciplines with separate engineering processes, so the dependability analysis results may not represent the designed system. In this article we provide an overview and application of our approach to build architecture-based, dynamic system models for dependability-critical systems and then automatically generate Dynamic Fault Trees (DFT) for comprehensive, tool-supported reliability analysis. Specifically, we use the Architectural Analysis and Design Language (AADL) to model the structural, behavioral and failure aspects of the system in a composite architecture model. From the AADL model, we seek to derive the DFT(s) and use Galileo's automated reliability analyses to estimate system reliability. This approach alleviates the knowledge gap between dependability engineering and systems engineering, integrates the dependability and system engineering design and development processes, and enables a more formal, automated and consistent DFT construction. We illustrate this work using an example based on a dynamic digital feed-water control system for a nuclear reactor.
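
    Once a fault tree is derived, its quantitative evaluation is a probability computation over the gate structure. The sketch below evaluates a small static tree with independent basic events; dynamic gates (PAND, SPARE, etc.), which are the point of DFTs and of tools like Galileo, require Markov-chain or simulation analysis instead. The tree shape and event probabilities are hypothetical.

    ```python
    # Static fault-tree evaluation with independent basic events:
    # top = AND(OR(A, B), OR(C, D))  (hypothetical structure and probabilities)
    p = {"A": 0.01, "B": 0.02, "C": 0.005, "D": 0.01}

    def OR(*qs):
        # P(at least one event) for independent events
        out = 1.0
        for q in qs:
            out *= (1.0 - q)
        return 1.0 - out

    def AND(*qs):
        # P(all events) for independent events
        out = 1.0
        for q in qs:
            out *= q
        return out

    p_top = AND(OR(p["A"], p["B"]), OR(p["C"], p["D"]))
    ```

    For the values above the top-event probability works out to about 4.46e-4; sequence-dependent failure logic is exactly what this static calculation cannot express, motivating the dynamic gates.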

  1. Flood extent and water level estimation from SAR using data-model integration

    Science.gov (United States)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large-area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information is combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting flood response and improving emergency relief efforts.

  2. A procedure for estimating site specific derived limits for the discharge of radioactive material to the atmosphere

    CERN Document Server

    Hallam, J; Jones, J A

    1983-01-01

    Generalised Derived Limits (GDLs) for the discharge of radioactive material to the atmosphere are evaluated using parameter values to ensure that the exposure of the critical group is unlikely to be underestimated significantly. Where the discharge is greater than about 5% of the GDL, a more rigorous estimate of the derived limit may be warranted. This report describes a procedure for estimating site specific derived limits for discharges of radioactivity to the atmosphere taking into account the conditions of the release and the location and habits of the exposed population. A worksheet is provided to assist in carrying out the required calculations.
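The worksheet described above ultimately reduces to dividing a dose criterion by a site-specific dose per unit discharge for the critical group. A minimal sketch of that arithmetic, with purely illustrative numbers (the report's actual dose criteria and pathway factors are not reproduced here):

```python
def site_specific_derived_limit(annual_dose_limit_sv, dose_per_unit_discharge_sv_per_bq):
    """Discharge rate (Bq/y) at which the critical group would receive the
    annual dose limit. Both inputs are hypothetical placeholders."""
    return annual_dose_limit_sv / dose_per_unit_discharge_sv_per_bq

# e.g. a 1 mSv/y criterion and 2e-12 Sv per Bq discharged (invented site factor)
dl = site_specific_derived_limit(1e-3, 2e-12)
print(dl)  # 5e8 Bq/y
```

The site-specific refinement in the report amounts to replacing conservative generic parameters in the denominator with release conditions and habit data for the actual exposed population.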

  3. Cross-property relations and permeability estimation in model porous media

    International Nuclear Information System (INIS)

    Schwartz, L.M.; Martys, N.; Bentz, D.P.; Garboczi, E.J.; Torquato, S.

    1993-01-01

Results from a numerical study examining cross-property relations linking fluid permeability to diffusive and electrical properties are presented. Numerical solutions of the Stokes equations in three-dimensional consolidated granular packings are employed to provide a basis of comparison between different permeability estimates. Estimates based on the Λ parameter (a length derived from electrical conduction) and on d_c (a length derived from immiscible displacement) are found to be considerably more reliable than estimates based on rigorous permeability bounds related to pore space diffusion. We propose two hybrid relations based on diffusion which provide more accurate estimates than either of the rigorous permeability bounds.
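The Λ-based and d_c-based estimates mentioned above are commonly written as cross-property relations involving the electrical formation factor F. A sketch using the widely quoted Johnson-Koplik-Schwartz and Katz-Thompson forms (the prefactors below are the standard literature values, not necessarily those used in this paper):

```python
def k_lambda(lam, F):
    """Johnson-Koplik-Schwartz-type estimate: k ~ Lambda^2 / (8 F),
    with Lambda an electrically weighted pore length and F the formation factor."""
    return lam**2 / (8.0 * F)

def k_katz_thompson(d_c, F, c=1.0 / 226.0):
    """Katz-Thompson-type estimate: k ~ c * d_c^2 / F,
    with d_c the critical pore diameter from immiscible displacement."""
    return c * d_c**2 / F

# Illustrative pore-scale numbers: Lambda = 10 um, d_c = 30 um, F = 20
lam, dc, F = 10e-6, 30e-6, 20.0
print(k_lambda(lam, F), k_katz_thompson(dc, F))  # permeabilities in m^2
```

Both estimates use lengths that emphasize the dynamically connected part of the pore space, which is why they outperform bounds based on bulk pore-space diffusion.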

  4. Parameter estimations in predictive microbiology: Statistically sound modelling of the microbial growth rate.

    Science.gov (United States)

    Akkermans, Simen; Logist, Filip; Van Impe, Jan F

    2018-04-01

When building models to describe the effect of environmental conditions on the microbial growth rate, parameter estimations can be performed either with a one-step method, i.e., directly on the cell density measurements, or in a two-step method, i.e., via the estimated growth rates. The two-step method is often preferred due to its simplicity. The current research demonstrates that the two-step method is, however, only valid if the correct data transformation is applied and a strict experimental protocol is followed for all experiments. Based on a simulation study and a mathematical derivation, it was demonstrated that the logarithm of the growth rate should be used as a variance stabilizing transformation. Moreover, the one-step method leads to a more accurate estimation of the model parameters and a better approximation of the confidence intervals on the estimated parameters. Therefore, the one-step method is preferred and the two-step method should be avoided.
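The variance-stabilizing role of the log transform can be seen with a small simulation. Assuming (as a toy stand-in for the paper's setup) a square-root-type secondary model and multiplicative measurement error on the growth rate, the spread of raw growth rates grows with the mean, while the spread on the log scale is constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy secondary model (square-root / Ratkowsky type): mu(T) = (b*(T - Tmin))^2
b, Tmin = 0.04, 5.0
def mu(T):
    return (b * (T - Tmin)) ** 2

# Simulated growth-rate "measurements" with multiplicative (lognormal) error,
# the error structure under which ln(mu) is the variance-stabilizing transform.
n = 20000
T_lo, T_hi = 15.0, 35.0
mu_lo = mu(T_lo) * rng.lognormal(0.0, 0.2, n)
mu_hi = mu(T_hi) * rng.lognormal(0.0, 0.2, n)

# Raw scale: spread scales with the mean -> heteroscedastic residuals.
# Log scale: spread is the same at both temperatures -> homoscedastic residuals.
raw_ratio = mu_hi.std() / mu_lo.std()          # ~9 here
log_ratio = np.log(mu_hi).std() / np.log(mu_lo).std()  # ~1
print(raw_ratio, log_ratio)
```

Fitting the secondary model by ordinary least squares is only statistically sound on the scale where the residual variance is constant, which is the paper's argument for using ln(mu) in the two-step method.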

  5. Societal and ethical issues in human biomonitoring – a view from science studies

    Directory of Open Access Journals (Sweden)

    Bauer Susanne

    2008-01-01

Background: Human biomonitoring (HBM) has rapidly gained importance. In some epidemiological studies, the measurement and use of biomarkers of exposure, susceptibility and disease have replaced traditional environmental indicators. While ethical issues in HBM have mostly been addressed in terms of informed consent and confidentiality, this paper maps out a larger array of societal issues from an epistemological perspective, i.e. bringing into focus the conditions of how and what is known in environmental health science. Methods: In order to analyse the effects of HBM and the shift towards biomarker research in the assessment of environmental pollution in a broader societal context, selected analytical frameworks of science studies are introduced. To develop the epistemological perspective, concepts from "biomedical platform sociology" and the notions of "epistemic cultures" and "thought styles" are applied to the research infrastructures of HBM. Further, concepts of "biocitizenship" and "civic epistemologies" are drawn upon as analytical tools to discuss the visions and promises of HBM as well as related ethical problematisations. Results: In human biomonitoring, two different epistemological cultures meet: environmental science, with, for instance, pollution surveys and toxicological assessments, on the one hand, and analytical epidemiology, investigating the association between exposure and disease in probabilistic risk estimation, on the other. The surveillance of exposure and dose via biomarkers as envisioned in HBM is shifting the site of exposure monitoring to the human body. Establishing an HBM platform faces not only the need to consider individual decision autonomy as an ethics issue, but also larger epistemological and societal questions, such as the mode of evidence demanded in science, policy and regulation. Conclusion: The shift of exposure monitoring towards the biosurveillance of human populations involves fundamental

  6. A framework for estimating health state utility values within a discrete choice experiment: modeling risky choices.

    Science.gov (United States)

    Robinson, Angela; Spencer, Anne; Moffatt, Peter

    2015-04-01

There has been recent interest in using the discrete choice experiment (DCE) method to derive health state utilities for use in quality-adjusted life year (QALY) calculations, but challenges remain. We set out to develop a risk-based DCE approach to derive utility values for health states that allowed 1) utility values to be anchored directly to normal health and death and 2) worse than dead health states to be assessed in the same manner as better than dead states. Furthermore, we set out to estimate alternative models of risky choice within a DCE model. A survey was designed that incorporated a risk-based DCE and a "modified" standard gamble (SG). Health state utility values were elicited for 3 EQ-5D health states assuming "standard" expected utility (EU) preferences. The DCE model was then generalized to allow for rank-dependent expected utility (RDU) preferences, thereby allowing for probability weighting. A convenience sample of 60 students was recruited and data collected in small groups. Under the assumption of "standard" EU preferences, the utility values derived within the DCE corresponded fairly closely to the mean results from the modified SG. Under the assumption of RDU preferences, the utility values estimated are somewhat lower than under the assumption of standard EU, suggesting that the latter may be biased upward. Applying the correct model of risky choice is important whether a modified SG or a risk-based DCE is deployed. It is, however, possible to estimate a probability weighting function within a DCE and estimate "unbiased" utility values directly, which is not possible within a modified SG. We conclude by setting out the relative strengths and weaknesses of the 2 approaches in this context.
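The EU-versus-RDU distinction above comes down to whether stated probabilities enter the utility calculation directly or through a weighting function. A sketch using the familiar Tversky-Kahneman one-parameter weighting form (the paper does not specify this exact functional form; it is used here only to illustrate why RDU-corrected utilities come out lower):

```python
import math

def w_tk(p, gamma):
    """Tversky-Kahneman probability weighting; gamma = 1 recovers w(p) = p,
    i.e., standard expected utility."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def utility_from_gamble(p, gamma=1.0):
    """Utility of a health state judged indifferent to a gamble with probability p
    of full health (u = 1) and 1 - p of death (u = 0).
    Under EU the utility is p; under RDU it is w(p)."""
    return w_tk(p, gamma)

print(utility_from_gamble(0.9))        # EU reading: 0.9
print(utility_from_gamble(0.9, 0.61))  # RDU with an inverse-S weighting: lower
```

With an inverse-S weighting, large probabilities are underweighted, so the same observed indifference implies a lower "unbiased" utility than the EU reading — consistent with the direction of the bias the authors report.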

  7. Application of the health belief model and social cognitive theory for osteoporosis preventive nutritional behaviors in a sample of Iranian women.

    Science.gov (United States)

    Jeihooni, Ali Khani; Hidarnia, Alireza; Kaveh, Mohammad Hossein; Hajizadeh, Ebrahim; Askari, Alireza

    2016-01-01

Osteoporosis is the most common metabolic bone disease. The purpose of this study is to investigate the health belief model (HBM) and social cognitive theory (SCT) for osteoporosis preventive nutritional behaviors in women. In this quasi-experimental study, 120 women registered at health centers in Fasa City, Fars Province, Iran were selected. A questionnaire consisting of HBM constructs and the constructs of self-regulation and social support from SCT was used to measure nutrition performance. Bone mineral density was recorded at the lumbar spine and femur. The intervention for the experimental group included 10 educational sessions of 55-60 min comprising lectures, group discussion, and questions and answers, as well as posters, educational pamphlets, film screenings, and PowerPoint presentations. Data were analyzed using SPSS 19 via the Chi-square test, independent t-test, and repeated measures analysis of variance (ANOVA) at a significance level of 0.05. After the intervention, the experimental group showed a significant increase in the HBM constructs, self-regulation, social support, and nutrition performance, compared to the control group. Six months after the intervention, the lumbar spine bone mineral density (BMD) T-score increased to 0.127 in the experimental group, while it decreased to -0.043 in the control group. The hip BMD T-score increased to 0.125 in the intervention group, but decreased to -0.028 in the control group. This study showed the effectiveness of the HBM constructs plus the self-regulation and social support constructs on the adoption of nutrition behaviors and the increase in bone density to prevent osteoporosis.

  8. Use of modeled and satellite soil moisture to estimate soil erosion in central and southern Italy.

    Science.gov (United States)

    Termite, Loris Francesco; Massari, Christian; Todisco, Francesca; Brocca, Luca; Ferro, Vito; Bagarello, Vincenzo; Pampalone, Vincenzo; Wagner, Wolfgang

    2016-04-01

This study presents a comparison between two different approaches aimed at enhancing the accuracy of the Universal Soil Loss Equation (USLE) in estimating soil loss at the single-event time scale. It is well known that including the observed event runoff in the USLE improves its soil loss estimation ability at the event scale. In particular, the USLE-M and USLE-MM models use the observed runoff coefficient to correct the rainfall erosivity factor. In the first case, the soil loss is linearly dependent on rainfall erosivity; in the second case, soil loss and erosivity are related by a power law. However, the measurement of the event runoff is not always straightforward or even possible. For this reason, the first approach used in this study is the Soil Moisture For Erosion (SM4E) model, a recent USLE-derived model in which the event runoff is replaced by the antecedent soil moisture. Three kinds of soil moisture datasets have been used separately: the ERA-Interim/Land reanalysis data of the European Centre for Medium-range Weather Forecasts (ECMWF); satellite retrievals from the European Space Agency - Climate Change Initiative (ESA-CCI); and modeled data using a Soil Water Balance Model (SWBM). The second approach is the use of an estimated runoff rather than the observed one. Specifically, the Simplified Continuous Rainfall-Runoff Model (SCRRM) is used to derive the runoff estimates. SCRRM requires soil moisture data as input, and to this aim the same three soil moisture datasets used for SM4E have been used separately. All the examined models have been calibrated and tested at the plot scale, using data from the experimental stations for the monitoring of erosive processes at "Masse" (Central Italy) and "Sparacia" (Southern Italy). Climatic data and runoff and soil loss measurements at the event time scale are available for the period 2008-2013 at Masse and for the period 2002-2013 at Sparacia. The results show that both approaches can provide
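The difference between the USLE variants named above lies entirely in how the event erosivity term is formed. A sketch of the three forms, with the soil/topography/cover factors folded into illustrative coefficients a and b (the calibrated values for Masse and Sparacia are not reproduced here):

```python
def event_soil_loss(EI30, Qr, model="USLE", a=1.0, b=1.4):
    """Event soil loss per unit of the K*L*S*C*P factors (coefficients illustrative).
    USLE    : A ~ a * EI30
    USLE-M  : A ~ a * Qr * EI30          (runoff coefficient rescales erosivity)
    USLE-MM : A ~ a * (Qr * EI30)**b     (power-law form, b > 1)
    """
    if model == "USLE":
        return a * EI30
    if model == "USLE-M":
        return a * Qr * EI30
    if model == "USLE-MM":
        return a * (Qr * EI30) ** b
    raise ValueError(f"unknown model: {model}")

# A storm with erosivity EI30 = 100 and an observed runoff coefficient of 0.3
print(event_soil_loss(100.0, 0.3, "USLE"))
print(event_soil_loss(100.0, 0.3, "USLE-M"))
print(event_soil_loss(100.0, 0.3, "USLE-MM"))
```

SM4E and SCRRM then replace the observed Qr with quantities derivable from antecedent soil moisture, which is what makes the schemes usable when runoff is not measured.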

  9. Uncertainty Estimate of Surface Irradiances Computed with MODIS-, CALIPSO-, and CloudSat-Derived Cloud and Aerosol Properties

    Science.gov (United States)

    Kato, Seiji; Loeb, Norman G.; Rutan, David A.; Rose, Fred G.; Sun-Mack, Sunny; Miller, Walter F.; Chen, Yan

    2012-07-01

Differences in modeled surface upward and downward longwave and shortwave irradiances are calculated between irradiances computed with active-sensor-derived and passive-sensor-derived cloud and aerosol properties. The irradiance differences are calculated for various temporal and spatial scales: monthly gridded, monthly zonal, monthly global, and annual global. Using the irradiance differences, the uncertainty of surface irradiances is estimated. The uncertainty (1σ) of the annual global surface downward longwave and shortwave is, respectively, 7 W m-2 (out of 345 W m-2) and 4 W m-2 (out of 192 W m-2), after known bias errors are removed. Similarly, the uncertainty of the annual global surface upward longwave and shortwave is, respectively, 3 W m-2 (out of 398 W m-2) and 3 W m-2 (out of 23 W m-2). The uncertainty is for modeled irradiances computed using cloud properties derived from imagers on a sun-synchronous orbit that covers the globe every day (e.g., the Moderate Resolution Imaging Spectroradiometer, MODIS) or for nadir-view-only active sensors on a sun-synchronous orbit such as Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) and CloudSat. If we assume that the longwave and shortwave uncertainties are independent of each other, but the up- and downward components are correlated with each other, the uncertainty in the global annual mean net surface irradiance is 12 W m-2. One-sigma uncertainty bounds of the satellite-based net surface irradiance are 106 W m-2 and 130 W m-2.
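The 12 W m-2 net figure follows directly from the stated correlation assumptions: correlated up/down components add linearly within each band, and the independent longwave and shortwave totals then combine in quadrature. The arithmetic:

```python
import math

# 1-sigma uncertainties quoted in the abstract (W m^-2)
lw_down, lw_up = 7.0, 3.0
sw_down, sw_up = 4.0, 3.0

# Up- and downward components correlated -> add linearly within each band
sigma_lw = lw_down + lw_up   # 10 W m^-2
sigma_sw = sw_down + sw_up   # 7 W m^-2

# Longwave and shortwave independent -> combine in quadrature
sigma_net = math.hypot(sigma_lw, sigma_sw)  # sqrt(10^2 + 7^2) ~ 12.2
print(round(sigma_net))  # 12
```

The same numbers also reproduce the quoted one-sigma bounds: a net irradiance near 118 W m-2 plus or minus 12 gives roughly 106 and 130 W m-2.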

  10. Mean atmospheric temperature model estimation for GNSS meteorology using AIRS and AMSU data

    Directory of Open Access Journals (Sweden)

    Rata Suwantong

    2017-03-01

In this paper, the problem of modeling the relationship between the mean atmospheric and air surface temperatures is addressed. Particularly, the major goal is to estimate the model parameters at a regional scale in Thailand. To formulate the relationship between the mean atmospheric and air surface temperatures, a triply modulated cosine function was adopted to model the surface temperature as a periodic function. The surface temperature was then converted to mean atmospheric temperature using a linear function. The parameters of the model were estimated using an extended Kalman filter. Traditionally, radiosonde data are used. In this paper, satellite data from the Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) sensors were used because they are openly available and have global coverage with high temporal resolution. The performance of the proposed model was tested against that of a global model via an accuracy assessment of the computed GNSS-derived PWV.
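The two model stages described above can be sketched as a cosine surface-temperature model feeding a linear Ts-to-Tm map. The defaults below use the well-known global Bevis coefficients (Tm ~ 70.2 + 0.72 Ts) purely as a reference point; the paper's contribution is re-estimating such coefficients regionally, with the cosine's mean, amplitude and phase treated as slowly varying states of an extended Kalman filter:

```python
import math

def surface_temperature(t_days, mean=300.0, amp=5.0, phase=0.0):
    """Annual-cycle surface temperature (K). In the paper the mean, amplitude and
    phase are themselves time-varying states ("triply modulated cosine") estimated
    by an EKF; here they are fixed, illustrative values."""
    return mean + amp * math.cos(2.0 * math.pi * t_days / 365.25 + phase)

def mean_atmospheric_temperature(ts, a=70.2, b=0.72):
    """Linear Ts -> Tm map; defaults are the global Bevis coefficients,
    which the paper replaces with regionally estimated values for Thailand."""
    return a + b * ts

ts = surface_temperature(0.0)            # 305.0 K with these toy values
print(mean_atmospheric_temperature(ts))  # 70.2 + 0.72 * 305 = 289.8 K
```

Tm then converts GNSS zenith wet delays to precipitable water vapour, which is why the accuracy assessment in the abstract is phrased in terms of GNSS-derived PWV.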

  11. Assimilation of SMOS-derived soil moisture in a fully integrated hydrological and soil-vegetation-atmosphere transfer model in Western Denmark

    DEFF Research Database (Denmark)

    Ridler, Marc-Etienne Francois; Madsen, Henrik; Stisen, Simon

    2014-01-01

    -derived soil moisture assimilation in a catchment scale model is typically restricted by two challenges: (1) passive microwave is too coarse for direct assimilation and (2) the data tend to be biased. The solution proposed in this study is to disaggregate the SMOS bias using a higher resolution land cover...... classification map that was derived from Landsat thermal images. Using known correlations between SMOS bias and vegetation type, the assimilation filter is adapted to calculate biases online, using an initial bias estimate. Real SMOS-derived soil moisture is assimilated in a precalibrated catchment model...

  12. Cost function approach for estimating derived demand for composite wood products

    Science.gov (United States)

    T. C. Marcin

    1991-01-01

A cost function approach was examined, exploiting the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.

  13. Development and prospective validation of a model estimating risk of readmission in cancer patients.

    Science.gov (United States)

    Schmidt, Carl R; Hefner, Jennifer; McAlearney, Ann S; Graham, Lisa; Johnson, Kristen; Moffatt-Bruce, Susan; Huerta, Timothy; Pawlik, Timothy M; White, Susan

    2018-02-26

Hospital readmissions among cancer patients are common. While several models estimating readmission risk exist, models specific for cancer patients are lacking. A logistic regression model estimating risk of unplanned 30-day readmission was developed using inpatient admission data from a 2-year period (n = 18 782) at a tertiary cancer hospital. Readmission risk estimates derived from the model were then calculated prospectively over a 10-month period (n = 8616 admissions) and compared with actual incidence of readmission. There were 2478 (13.2%) unplanned readmissions. Model factors associated with readmission included: emergency department visit within 30 days, >1 admission within 60 days, non-surgical admission, solid malignancy, gastrointestinal cancer, emergency admission, length of stay >5 days, abnormal sodium, hemoglobin, or white blood cell count. The c-statistic for the model was 0.70. During the 10-month prospective evaluation, estimates of readmission from the model were associated with higher actual readmission incidence from 20.7% for the highest risk category to 9.6% for the lowest. An unplanned readmission risk model developed specifically for cancer patients performs well when validated prospectively. The specificity of the model for cancer patients, EMR incorporation, and prospective validation justify use of the model in future studies designed to reduce and prevent readmissions.
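The model structure above is a standard logistic risk score, and the reported c-statistic is the probability that a readmitted patient receives a higher predicted risk than a non-readmitted one. A self-contained sketch on simulated data (the coefficient values are invented; only the factor names mirror the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear risk score over binary admission factors
# (names mirror the abstract's factors; coefficients are illustrative).
coefs = {"ed_visit_30d": 0.9, "prior_adm_60d": 0.8, "non_surgical": 0.5,
         "solid_tumor": 0.3, "gi_cancer": 0.4, "emergency_adm": 0.6,
         "los_gt_5d": 0.5, "abnormal_labs": 0.4}
intercept = -2.8

n = 5000
X = rng.integers(0, 2, (n, len(coefs)))
logit = intercept + X @ np.array(list(coefs.values()))
p = 1.0 / (1.0 + np.exp(-logit))
y = rng.random(n) < p  # simulated readmission outcomes

def c_statistic(score, outcome):
    """AUC: probability a readmitted patient outranks a non-readmitted one."""
    pos, neg = score[outcome], score[~outcome]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

print(round(c_statistic(p, y), 2))
```

The prospective validation in the study amounts to checking that observed readmission incidence rises monotonically across the model's predicted-risk categories, which the 9.6%-to-20.7% gradient confirms.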

  14. Model parameters estimation and sensitivity by genetic algorithms

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Podofillini, Luca

    2003-01-01

In this paper we illustrate the possibility of extracting qualitative information on the importance of the parameters of a model in the course of a Genetic Algorithm (GA) optimization procedure for the estimation of such parameters. The Genetic Algorithm's search for the optimal solution is performed according to procedures that resemble those of natural selection and genetics: an initial population of alternative solutions evolves within the search space through the four fundamental operations of parent selection, crossover, replacement, and mutation. During the search, the algorithm examines a large number of solution points which possibly carry relevant information on the underlying model characteristics. A possible utilization of this information amounts to creating and updating an archive with the set of best solutions found at each generation and then analyzing the evolution of the statistics of the archive along the successive generations. From this analysis one can retrieve information regarding the speed of convergence and stabilization of the different control (decision) variables of the optimization problem. In this work we analyze the evolution strategy followed by a GA in its search for the optimal solution with the aim of extracting information on the importance of the control (decision) variables of the optimization with respect to the sensitivity of the objective function. The study refers to a GA search for optimal estimates of the effective parameters in a lumped nuclear reactor model from the literature. The supporting observation is that, as most optimization procedures do, the GA search evolves towards convergence in such a way as to stabilize first the most important parameters of the model and later those which influence the model outputs little. In this sense, besides estimating the parameter values efficiently, the optimization approach also allows us to provide a qualitative ranking of their importance in contributing to the model output.

  15. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    Science.gov (United States)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction (NWP) models have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually assumed in these models, leading to considerable errors in Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWPs. Particularly, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated for a one-month set of three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of the RMSE) for the DNI during summer is about 20%. A similar value is obtained for the GHI during winter.
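The post-processing idea above — model the forecast residuals with an autoregressive term plus exogenous cloud and aerosol predictors, then subtract the predicted residual — can be sketched with a simplified ARX(1) fit by least squares (the paper uses full ARMAX models; the data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for day-ahead NWP irradiance forecast residuals (W m^-2),
# driven by an AR(1) term plus yesterday's cloud fraction and aerosol load.
n = 400
cloud = rng.random(n)
aod = 0.1 + 0.3 * rng.random(n)
resid = np.zeros(n)
for t in range(1, n):
    resid[t] = 0.6 * resid[t - 1] + 40.0 * cloud[t] + 80.0 * aod[t] + rng.normal(0, 5)

# Fit the ARX(1) correction model by ordinary least squares.
A = np.column_stack([resid[:-1], cloud[1:], aod[1:], np.ones(n - 1)])
coef, *_ = np.linalg.lstsq(A, resid[1:], rcond=None)
pred = A @ coef

rmse_raw = np.sqrt(np.mean(resid[1:] ** 2))                # uncorrected forecast error
rmse_corrected = np.sqrt(np.mean((resid[1:] - pred) ** 2)) # after subtracting prediction
print(rmse_corrected < rmse_raw)
```

The correction removes the predictable part of the error, which is exactly the mechanism behind the reported ~20% RMSE improvement; the MA terms of a full ARMAX model would additionally capture short-memory noise structure.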

  16. CREDIT SCORING MODELS IN ESTIMATING THE CREDITWORTHINESS OF SMALL AND MEDIUM AND BIG ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Robert Zenzerović

    2011-02-01

This paper is focused on estimating credit scoring models for companies operating in the Republic of Croatia. Given the level of economic and legal development, especially in the area of bankruptcy regulation and business ethics in the Republic of Croatia, the derived models can be applied in the wider region, particularly in South-eastern European countries that twenty years ago moved from state-directed to free-market economies. The purpose of this paper is to emphasize the relevance and possibilities of particular financial ratios in estimating the creditworthiness of business entities, which was realised through research on 110 companies. Alongside the commonly used research methods of description, analysis and synthesis, induction, deduction and surveys, the mathematical-statistical method of logistic regression took the central part in this research. The designed sample of 110 business entities represented the structure of firms operating in the Republic of Croatia according to their activities as well as their size. The sample was divided into two sub-samples, the first consisting of small and medium enterprises (SMEs) and the second of big business entities. In the next phase, the logistic regression method was applied to the 50 independent variables – financial ratios calculated for each sample unit – in order to find the ones that best discriminate financially stable from unstable companies. As the result of the logistic regression analysis, two credit scoring models were derived. The first model includes liquidity, solvency and profitability ratios and is applicable to SMEs. With a classification accuracy of 97%, the model has high predictive ability and can be used as an effective decision support tool. The second model is applicable to big companies and includes only two independent variables – liquidity and solvency ratios. The classification accuracy of this model is 92.5% and, according to criteria of

  17. Forensic Entomology: Evaluating Uncertainty Associated With Postmortem Interval (PMI) Estimates With Ecological Models.

    Science.gov (United States)

    Faris, A M; Wang, H-H; Tarone, A M; Grant, W E

    2016-05-31

Estimates of insect age can be informative in death investigations and, when certain assumptions are met, can be useful for estimating the postmortem interval (PMI). Currently, the accuracy and precision of PMI estimates are unknown, as error can arise from sources of variation such as measurement error, environmental variation, or genetic variation. Ecological models are an abstract, mathematical representation of an ecological system that can make predictions about the dynamics of the real system. To quantify the variation associated with the pre-appearance interval (PAI), we developed an ecological model that simulates the colonization of vertebrate remains by Cochliomyia macellaria (Fabricius) (Diptera: Calliphoridae), a primary colonizer in the southern United States. The model is based on a development data set derived from a local population and represents the uncertainty in local temperature variability to address PMI estimates at local sites. After a PMI estimate is calculated for each individual, the model calculates the maximum, minimum, and mean PMI, as well as the range and standard deviation for the stadia collected. The model framework presented here is one manner by which errors in PMI estimates can be addressed in court when no empirical data are available for the parameter of interest. We show that PAI is a potentially important source of error and that an ecological model is one way to evaluate its impact. Such models can be re-parameterized with any development data set, PAI function, temperature regime, assumption of interest, etc., to estimate PMI and quantify uncertainty that arises from specific prediction systems.
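The summary statistics the model reports (mean, minimum, maximum, standard deviation of PMI) are the natural output of a Monte Carlo simulation over uncertain temperature. A toy sketch using a generic degree-hour development model — the thresholds, degree-hour requirement, and temperature regime below are invented, not the paper's C. macellaria data set:

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic degree-hour development model: the collected stage is reached when
# accumulated degree-hours above a base temperature hit K. Values illustrative.
BASE_T = 10.0   # developmental threshold (deg C)
K = 1500.0      # degree-hours required to reach the collected stadium

def simulate_pmi(n_runs=2000, mean_temp=25.0, daily_amp=5.0, temp_sd=2.0):
    """Sample PMIs (hours) under uncertain site temperature (scene-to-station bias)."""
    pmis = []
    hours = np.arange(24 * 30)  # simulate up to 30 days
    for _ in range(n_runs):
        offset = rng.normal(0.0, temp_sd)  # uncertain temperature bias for this run
        temps = mean_temp + offset + daily_amp * np.sin(2 * np.pi * hours / 24)
        dd = np.cumsum(np.clip(temps - BASE_T, 0.0, None))
        pmis.append(float(np.argmax(dd >= K)) + 1.0)  # first hour reaching K
    p = np.array(pmis)
    return p.mean(), p.min(), p.max(), p.std()

mean_h, min_h, max_h, sd_h = simulate_pmi()
print(mean_h, min_h, max_h, sd_h)
```

The spread between min_h and max_h is the kind of PAI/temperature-driven uncertainty the authors argue should accompany a point PMI estimate in court.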

  18. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    Science.gov (United States)

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework, considering a CO2 industrial point source located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, notably when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
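The mass balance method referenced above integrates the concentration enhancement times the wind component normal to a flight cross-section downwind of the source. A minimal numerical sketch with an invented plume and uniform wind (real applications use measured wind profiles and air-density corrections rather than the fixed STP conversion used here):

```python
import numpy as np

# Mass-balance sketch: Q = sum over the crosswind plane of (c - c_bg) * u_perp * dA.
dy, dz = 100.0, 50.0   # cell size of the flight cross-section (m)
u_perp = 4.0           # wind speed normal to the plane (m/s)
c_bg = 410.0           # background CO2 mole fraction (ppm)

# 4 x 5 cross-section of "measured" CO2 (ppm); a 20 ppm plume in the middle
c = np.full((4, 5), c_bg)
c[1:3, 1:4] += 20.0

# Convert ppm excess to kg CO2 per m^3 (44.01 g/mol, 22.4 L/mol at STP)
ppm_to_kg_m3 = 0.04401 / 0.0224
excess = (c - c_bg) * 1e-6 * ppm_to_kg_m3          # kg CO2 / m^3

emission = float((excess * u_perp * dy * dz).sum())  # kg CO2 / s
print(round(emission, 2))
```

The main practical error sources — the ones behind the quoted 16% bias — are the background choice, incomplete plume sampling, and wind variability across the plane.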

  19. Modelling ocean-colour-derived chlorophyll a

    Directory of Open Access Journals (Sweden)

    S. Dutkiewicz

    2018-01-01

This article provides a proof of concept for using a biogeochemical/ecosystem/optical model with a radiative transfer component as a laboratory to explore aspects of ocean colour. We focus here on the satellite ocean colour chlorophyll a (Chl a) product provided by the often-used blue/green reflectance ratio algorithm. The model produces output that can be compared directly to the real-world ocean colour remotely sensed reflectance. This model output can then be used to produce an ocean colour satellite-like Chl a product using an algorithm linking blue versus green reflectance similar to that used for the real world. Given that the model includes complete knowledge of the (model) water constituents, optics and reflectance, we can explore uncertainties and their causes in this proxy for Chl a (called derived Chl a in this paper). We compare the derived Chl a to the actual model Chl a field. In the model we find that the mean absolute bias due to the algorithm is 22 % between derived and actual Chl a. The real-world algorithm is found using concurrent in situ measurements of Chl a and radiometry. We ask whether increased in situ measurements to train the algorithm would improve it, and find a mixed result. There is a global overall improvement, but at the expense of some regions, especially in lower latitudes where the biases increase. Not surprisingly, we find that region-specific algorithms provide a significant improvement, at least in the annual mean. However, in the model, we find that no matter how the algorithm coefficients are found there can be a temporal mismatch between the derived Chl a and the actual Chl a. These mismatches stem from temporal decoupling between Chl a and other optically important water constituents (such as coloured dissolved organic matter and detrital matter). The degree of decoupling differs regionally and over time. For example, in many highly seasonal regions, the timing of initiation
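The blue/green ratio algorithm discussed above is, in its operational OCx form, a polynomial in the log of a blue-to-green reflectance ratio. A sketch with illustrative coefficients (the values below are placeholders of the right general shape, not an operational NASA coefficient set, and not necessarily the coefficients fit in this paper):

```python
import math

def chl_from_ratio(rrs_blue, rrs_green, coeffs=(0.3, -2.9, 1.7, -0.6, -1.0)):
    """OCx-style band-ratio algorithm:
    log10(chl) = a0 + sum_i a_i * X**i,  X = log10(Rrs_blue / Rrs_green).
    Coefficients are illustrative only."""
    x = math.log10(rrs_blue / rrs_green)
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl  # mg m^-3

# A higher blue/green ratio (clearer water) maps to lower retrieved chlorophyll
print(chl_from_ratio(0.008, 0.002), chl_from_ratio(0.012, 0.002))
```

The paper's "derived Chl a" applies exactly this kind of mapping to the model's own reflectances, so any constituent (CDOM, detritus) that shifts the blue/green ratio without a matching Chl a change shows up as algorithm bias.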

  20. On state estimation in electric drives

    International Nuclear Information System (INIS)

    Leon, A.E.; Solsona, J.A.

    2010-01-01

    This paper deals with state estimation in electric drives. On one hand, a nonlinear observer is designed; on the other, the speed is estimated by applying the dirty derivative to the measured position. The dirty derivative is an approximate version of the perfect derivative and introduces an estimation error that has seldom been analyzed in drive applications. For this reason, this work illustrates several aspects of the performance of the dirty derivator in the presence of both model uncertainties and noisy measurements. To this end, a case study is introduced: rotor speed estimation in a permanent magnet stepper motor, assuming that the rotor position and the electrical variables are measured. In addition, the paper comments on the connection between dirty derivators and observers, and notes the advantages and disadvantages of both techniques.
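    The dirty derivative is the first-order filtered differentiator s/(τs + 1). A minimal discrete-time sketch, using a backward-Euler discretization on an illustrative ramp "position" signal (the signal and time constant are assumptions, not values from the record):

```python
import numpy as np

def dirty_derivative(x, dt, tau):
    """Filtered ("dirty") derivative s/(tau*s + 1), discretized with
    backward Euler. tau trades noise rejection against phase lag."""
    y = np.zeros_like(x, dtype=float)
    for k in range(1, len(x)):
        y[k] = (tau * y[k - 1] + (x[k] - x[k - 1])) / (tau + dt)
    return y

# Hypothetical demo: recover a constant speed from a noiseless ramp position.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
pos = 3.0 * t                       # true speed = 3.0
speed = dirty_derivative(pos, dt, tau=0.02)
```

    A larger τ smooths measurement noise more aggressively but increases the lag of the speed estimate, which is exactly the trade-off the record analyzes.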

  1. Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models

    Science.gov (United States)

    Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.

    2011-01-01

    Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.

  2. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    From Crofton's formula for Minkowski tensors we derive stereological estimators of translation-invariant surface tensors of convex bodies in n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design-based setting we suggest three types of estimators, based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model-based setting.

  3. Surface tensor estimation from linear sections

    DEFF Research Database (Denmark)

    Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel

    2015-01-01

    From Crofton's formula for Minkowski tensors we derive stereological estimators of translation-invariant surface tensors of convex bodies in n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design-based setting we suggest three types of estimators, based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model-based setting.

  4. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and for policymaking, but it poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, has the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data can mirror aspects of well-being and other socioeconomic phenomena.

  5. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  6. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN in order for EDEN to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  7. A Physically-Based Geometry Model for Transport Distance Estimation of Rainfall-Eroded Soil Sediment

    Directory of Open Access Journals (Sweden)

    Qian-Gui Zhang

    2016-01-01

    Estimations of rainfall-induced soil erosion are mostly derived from the weight of sediment measured in natural runoff. The transport distance of eroded soil is important for evaluating landscape evolution but is difficult to estimate, mainly because it cannot be linked directly to the eroded sediment weight. The volume of eroded soil is easier to calculate visually using popular imaging tools, which can aid in estimating the transport distance of eroded soil through geometry relationships. In this study, we present a straightforward geometry model to predict the maximum sediment transport distance incurred by rainfall events of various intensities and durations. In order to verify our geometry prediction model, a series of experiments is reported in the form of sediment volumes. The results show that cumulative rainfall has a linear relationship with the total volume of eroded soil. The geometry model can accurately estimate the maximum transport distance of eroded soil from cumulative rainfall, with a low root-mean-square error (4.7–4.8) and a strong linear correlation (0.74–0.86).
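    The linear-relationship check in this record amounts to a least-squares fit scored by RMSE and a correlation coefficient. A sketch on hypothetical data points (the rainfall and volume values below are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical cumulative rainfall (mm) vs. eroded-soil volume (cm^3).
rainfall = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
volume = np.array([1.1, 2.3, 4.0, 6.2, 7.9])

# Fit volume as a linear function of cumulative rainfall, then score the fit.
slope, intercept = np.polyfit(rainfall, volume, 1)
predicted = slope * rainfall + intercept
rmse = np.sqrt(np.mean((volume - predicted) ** 2))
r = np.corrcoef(rainfall, volume)[0, 1]
```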

  8. Nutritional Preventive Behavior of Osteoporosis in Female Students: Applying the Health Belief Model (HBM)

    Directory of Open Access Journals (Sweden)

    Zahra Hosseini

    2017-01-01

    Background: Osteoporosis is one of the most important health problems, and it is of great importance to prevent this disease. This study aimed to evaluate the nutritional preventive behavior of osteoporosis using the health belief model in female students in Qom city, Iran. Materials and Methods: This cross-sectional descriptive-analytical study was conducted on 265 tenth- to twelfth-grade female students in Qom city. The subjects were selected via a multistage sampling method. To collect data, we used a standard questionnaire based on the health belief model. Data were analyzed by SPSS version 20.0 using the independent t-test, Pearson's correlation coefficient, and ANOVA. Results: Knowledge and perceived self-efficacy had a positive and significant relationship with nutritional preventive behavior of osteoporosis (P=0.04, r=0.12 and P=0.004, r=0.18, respectively). However, perceived susceptibility and perceived barriers had a negative and significant relationship with nutritional preventive behavior of osteoporosis (P=0.02, r=-0.14 and P …

  9. Effect of alcohol on skin permeation and metabolism of an ester-type prodrug in Yucatan micropig skin.

    Science.gov (United States)

    Fujii, Makiko; Ohara, Rieko; Matsumi, Azusa; Ohura, Kayoko; Koizumi, Naoya; Imai, Teruko; Watanabe, Yoshiteru

    2017-11-15

    We studied the effect that three alcohols, ethanol (EA), propanol (PA), and isopropanol (IPA), have on the skin permeation of p-hydroxybenzoic acid methyl ester (HBM), a model ester-type prodrug. HBM was applied to Yucatan micropig skin in a saturated phosphate-buffered solution with or without 10% alcohol, and HBM and related materials in the receptor fluid and skin were determined by HPLC. In the absence of alcohol, p-hydroxybenzoic acid (HBA), a metabolite of HBM, permeated the skin the most. The three alcohols enhanced the penetration of HBM to almost the same extent. The addition of 10% EA or PA to the HBM solution led to trans-esterification into the ethyl or propyl ester of HBA, and these esters permeated the skin as well as HBA and HBM did. In contrast, the addition of 10% IPA promoted very little trans-esterification. Both hydrolysis and trans-esterification in the skin S9 fraction were inhibited by BNPP, an inhibitor of carboxylesterase (CES). Western blot and native PAGE showed abundant expression of CES in micropig skin. Hydrolysis and trans-esterification were catalyzed simultaneously by CES during skin permeation. Our data indicate that the alcohol used in dermal drug preparations should be selected not only for its ability to enhance the solubility and permeation of the drug, but also for its effect on the metabolism of the drug in the skin. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    Science.gov (United States)

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  11. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    Science.gov (United States)

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model that combines several segmentation results, obtained with simpler clustering models, in order to achieve a more reliable and accurate segmentation. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows the definition of a penalized maximum probabilistic Rand estimator, with which the fusion of simple, quickly estimated segmentation results becomes an interesting alternative to the complex segmentation models in the literature. This fusion framework has been applied successfully to the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best state-of-the-art segmentation methods recently proposed in the literature.
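    The (unadjusted) Rand index that underlies the probabilistic Rand measure counts the pixel pairs on which two label fields agree about being in the same cluster or in different clusters. A brute-force sketch, fine for small label fields:

```python
import numpy as np
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two label fields: the fraction of pixel pairs on
    which the two segmentations agree (same-cluster vs. different-cluster).
    Invariant to label permutations, since only pairwise co-labeling counts."""
    a = np.ravel(labels_a)
    b = np.ravel(labels_b)
    pairs = list(combinations(range(a.size), 2))
    agree = sum(1 for i, j in pairs if (a[i] == a[j]) == (b[i] == b[j]))
    return agree / len(pairs)
```

    This O(n²) loop is only a sketch; practical implementations use contingency-table counts instead of enumerating pairs.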

  12. Local SAR in High Pass Birdcage and TEM Body Coils for Multiple Human Body Models in Clinical Landmark Positions at 3T

    Science.gov (United States)

    Yeo, Desmond TB; Wang, Zhangwei; Loew, Wolfgang; Vogel, Mika W; Hancu, Ileana

    2011-01-01

    Purpose To use EM simulations to study the effects of body type, landmark position, and RF body coil type on peak local SAR in 3T MRI. Materials and Methods Numerically computed peak local SAR for four human body models (HBMs) in three landmark positions (head, heart, pelvic) were compared for a high-pass birdcage and a transverse electromagnetic 3T body coil. Local SAR values were normalized to the IEC whole-body average SAR limit of 2.0 W/kg for normal scan mode. Results Local SAR distributions were highly variable. Consistent with previous reports, the peak local SAR values generally occurred in the neck-shoulder area, near rungs, or between tissues of greatly differing electrical properties. The HBM type significantly influenced the peak local SAR, with stockier HBMs, extending extremities towards rungs, displaying the highest SAR. There was also a trend for higher peak SAR in the head-centric and heart-centric positions. The impact of the coil-types studied was not statistically significant. Conclusion The large variability in peak local SAR indicates the need to include more than one HBM or landmark position when evaluating safety of body coils. It is recommended that a HBM with arms near the rungs be included, to create physically realizable high-SAR scenarios. PMID:21509880

  13. Lagrangian speckle model and tissue-motion estimation--theory.

    Science.gov (United States)

    Maurice, R L; Bertrand, M

    1999-07-01

    It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., changes in speckle patterns that result act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other, the alterations of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. This method leads to an analytical description of the decorrelation in a way which enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for linear geometrical transformation of the scattering function (LT), i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested, using simulated radio-frequency (RF) images of tissue undergoing axial shear.

  14. Low Back Pain Preventive Behaviors Among Nurses Based on the Health Belief Model Constructs

    Directory of Open Access Journals (Sweden)

    Naser Sharafkhani

    2014-12-01

    The nursing profession is physically demanding: it ranks second in physical workload, following industrial occupations, and is associated with a high rate of musculoskeletal disorders, specifically low back pain. This article evaluated nurses' educational needs based on the Health Belief Model (HBM), with a focus on low back pain and the adoption of preventive behaviors. This analytical cross-sectional study was conducted on 133 nurses selected randomly from three public educational hospitals affiliated with Arak University of Medical Sciences. Data were collected with a questionnaire that included demographic characteristics, questions on the HBM constructs, and a checklist documenting performance. The collected data were analyzed using descriptive and analytical tests and Pearson's correlation coefficient. Among the HBM constructs, cues to action and perceived barriers were the main predictors of optimal performance among the sample subjects (B = 0.09, p < .01). Moreover, there was a significant relationship between the nurses' performance in adopting preventive behaviors and the scores for perceived barriers, self-efficacy, and cues to action (p < .05). However, no significant relationship was observed between the nurses' performance and perceived susceptibility, severity, or benefits. As for behavioral barriers, the nurses reported unfamiliarity with workplace ergonomics and conditions that did not follow ergonomic principles, which calls for educational planning aimed at overcoming perceived barriers, improving managerial activities, and enhancing workplace conditions.

  15. Offline estimation of decay time for an optical cavity with a low pass filter cavity model.

    Science.gov (United States)

    Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C

    2012-08-01

    This Letter presents offline estimation results for the decay-time constant for an experimental Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The cavity dynamics are modeled in terms of a low pass filter (LPF) with unity DC gain. This model is used by an extended Kalman filter (EKF) along with the recorded light intensity at the output of the cavity in order to estimate the decay-time constant. The estimation results using the LPF cavity model are compared to those obtained using the quadrature model for the cavity presented in previous work by Kallapur et al. The estimation process derived using the LPF model comprises two states as opposed to three states in the quadrature model. When considering the EKF, this means propagating two states and a (2×2) covariance matrix using the LPF model, as opposed to propagating three states and a (3×3) covariance matrix using the quadrature model. This gives the former model a computational advantage over the latter and leads to faster execution times for the corresponding EKF. It is shown in this Letter that the LPF model for the cavity with two filter states is computationally more efficient, converges faster, and is hence a more suitable method than the three-state quadrature model presented in previous work for real-time estimation of the decay-time constant for the cavity.
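    As a non-recursive point of comparison for the decay-time estimate, one can fit a straight line to the log of a ring-down trace, since I(t) = I0·exp(-t/τ) gives ln I = ln I0 - t/τ. This is an ordinary least-squares sketch on simulated data with assumed values, not the EKF of the record:

```python
import numpy as np

# Hypothetical noiseless ring-down trace: I(t) = I0 * exp(-t / tau).
tau_true = 5e-6                        # assumed 5 us decay-time constant
t = np.linspace(0.0, 30e-6, 300)
intensity = 2.0 * np.exp(-t / tau_true)

# Log-linear least-squares fit: slope of ln(I) vs. t is -1/tau.
slope, intercept = np.polyfit(t, np.log(intensity), 1)
tau_est = -1.0 / slope
```

    Unlike this batch fit, the EKF in the record updates the estimate sample by sample, which is what makes the two-state LPF model's lower per-step cost matter for real-time use.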

  16. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
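    The BIC-weighted averaging step can be sketched as follows. The three model predictions, within-model variances, and BIC values below are hypothetical, standing in for the TS-FL, ANN, and NF outputs:

```python
import numpy as np

def bic_weights(bic):
    """Model weights from BIC values (smaller BIC -> larger weight)."""
    bic = np.asarray(bic, dtype=float)
    delta = bic - bic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def bma_combine(preds, variances, bic):
    """BMA mean and total variance (within- plus between-model spread)."""
    w = bic_weights(bic)
    preds = np.asarray(preds, dtype=float)
    mean = np.dot(w, preds)
    within = np.dot(w, np.asarray(variances, dtype=float))
    between = np.dot(w, (preds - mean) ** 2)
    return mean, within + between

# Hypothetical hydraulic-conductivity estimates (m/day) from three AI models.
mean, var = bma_combine(preds=[4.2, 5.1, 4.6],
                        variances=[0.30, 0.25, 0.40],
                        bic=[102.0, 103.5, 110.0])
```

    The exponential down-weighting of larger BIC values is how the parsimony principle can nearly discard a model, as the record reports for the NF model.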

  17. Breast cancer literacy and health beliefs related to breast cancer screening among American Indian women.

    Science.gov (United States)

    Roh, Soonhee; Burnette, Catherine E; Lee, Yeon-Shim; Jun, Jung Sim; Lee, Hee Yun; Lee, Kyoung Hag

    2018-08-01

    The purpose of this article is to examine the health beliefs and literacy about breast cancer and their relationship with breast cancer screening among American Indian (AI) women. Using the Health Belief Model (HBM) and hierarchical logistic regression with data from a sample of 286 AI female adults residing in the Northern Plains, we found that greater awareness of breast cancer screening was linked to breast cancer screening practices. However, perceived barriers, one of the HBM constructs, prevented such screening practices. This study suggested that culturally relevant HBM factors should be targeted when developing culturally sensitive breast cancer prevention efforts.

  18. A spatial structural derivative model for ultraslow diffusion

    Directory of Open Access Journals (Sweden)

    Xu Wei

    2017-01-01

    This study investigates ultraslow diffusion via a spatial structural derivative, in which the exponential function e^x is selected as the structural function to construct a local structural-derivative diffusion equation model. The analytical solution of the diffusion equation has the form of a bi-exponential distribution. Its corresponding mean squared displacement is computed numerically and increases more slowly than the logarithm of time. The local structural-derivative diffusion equation with the structural function e^x in space is an alternative physical and mathematical model for characterizing a kind of ultraslow diffusion.

  19. Reduced density gradient as a novel approach for estimating QSAR descriptors, and its application to 1, 4-dihydropyridine derivatives with potential antihypertensive effects.

    Science.gov (United States)

    Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G

    2016-12-01

    The relationship between chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (E_HOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded R^2 = 79.57 and Q^2 = 69.67, validated with four internal validations (DK = 0.076, DQ = -0.006, R_P = 0.056, R_N = 0.000) and the external validation Q^2_boot = 64.26. The QSAR model can be used to estimate biological activity with high reliability for new compounds based on a DHP series. Graphical abstract: the good correlation between log IC50 and the NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.
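    The regression step of such a QSAR workflow can be sketched with ordinary least squares on synthetic descriptor data. All values here are hypothetical placeholders for descriptors such as E_HOMO, volume, and log P:

```python
import numpy as np

# 40 hypothetical compounds with 3 synthetic descriptors each.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
true_beta = np.array([0.8, -0.5, 0.3])
y = X @ true_beta + 0.1 * rng.normal(size=40)    # synthetic log IC50

# Ordinary least squares with an intercept column, scored by R^2.
X1 = np.column_stack([np.ones(40), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ beta
r2 = 1.0 - resid.var() / y.var()
```

    Internal validations such as the DK/DQ checks cited in the record then test whether this in-sample R^2 survives resampling.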

  20. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for estimating the equivalent-model parameters of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty of deriving optimal control strategies for pulsed-power converters is directly linked to the accuracy of the equivalent circuit parameters. These components require models that take into account electric field energies, represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  1. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  2. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  3. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    Science.gov (United States)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and execution challenges, underground excavation in urban areas is always followed by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical, and numerical methods exist for estimating it. Since geotechnical models carry considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos), and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02, and 1.52 cm, respectively. Comparing these predictions with the actual instrumentation data quantified the uncertainty of each model. The numerical model, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest model uncertainty.
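    The model-vs-instrumentation comparison reduces to a relative-error computation. The predicted settlements below are the ones quoted in the record; the observed value is a hypothetical figure used only so the arithmetic mirrors the quoted error levels:

```python
def relative_error(predicted, observed):
    """Unsigned relative error of a model prediction vs. a measured value."""
    return abs(predicted - observed) / observed

# Predicted maximum settlements (cm) from the record.
predictions = {"empirical (Peck)": 1.86, "analytical": 2.02, "numerical": 1.52}
observed = 1.58  # hypothetical instrumented settlement, cm
errors = {name: relative_error(p, observed) for name, p in predictions.items()}
```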

  4. Applying the Health Belief Model and an Integrated Behavioral Model to Promote Breast Tissue Donation Among Asian Americans.

    Science.gov (United States)

    Shafer, Autumn; Kaufhold, Kelly; Luo, Yunjuan

    2018-07-01

    An important part in the effort to prevent, treat, and cure breast cancer is research done with healthy breast tissue. The Susan G. Komen for the Cure Tissue Bank at Indiana University Simon Cancer Center (KTB) encourages women to donate a small amount of healthy breast tissue and then provides that tissue to researchers studying breast cancer. Although KTB has a large donor base, the volume of tissue samples from Asian women is low despite prior marketing efforts to encourage donation among this population. This study builds on prior work promoting breast cancer screenings among Asian women by applying constructs from the Health Belief Model (HBM) and the Integrated Behavioral Model (IBM) to investigate why Asian-American women are less inclined to donate their healthy breast tissue than non-Asian women and how this population may be motivated to donate in the future. A national online survey (N = 1,317) found Asian women had significantly lower perceived severity, some lower perceived benefits, and higher perceived barriers to tissue donation than non-Asian women under HBM and significantly lower injunctive norms supporting breast tissue donation, lower perceived behavioral control, and lower intentions to donate under IBM. This study also compares and discusses similarities and differences among East, Southeast, and South Asian women on these same constructs.

  5. Prediction of safe driving Behaviours based on health belief model: the case of taxi drivers in Bandar Abbas, Iran.

    Science.gov (United States)

    Razmara, Asghar; Aghamolaei, Teamur; Madani, Abdoulhossain; Hosseini, Zahra; Zare, Shahram

    2018-03-20

    Road accidents are among the main causes of mortality. As safe and secure driving is a key strategy to reduce car injuries and offences, the present research aimed to explore safe driving behaviours among taxi drivers based on the Health Belief Model (HBM). This study was conducted on 184 taxi drivers in Bandar Abbas who were selected via a multistage stratified sampling method. Data were collected with a questionnaire comprising a demographic section along with the constructs of the HBM, and analysed in SPSS ver. 19 via Pearson's correlation coefficient and multiple regression. The mean age of the participants was 45.1 years (SD = 11.1), and they had, on average, 10.3 (SD = 7.5) years of taxi driving experience. Among the HBM components, cues to action and perceived benefits were positively correlated with safe driving behaviours, while perceived barriers were negatively correlated. Cues to action, perceived barriers, and perceived benefits were the strongest predictors of safe driving behaviour. When designing health promotion programmes to improve safe driving behaviours among taxi drivers, cues to action, perceived benefits, and perceived barriers are therefore important; advertising, information campaigns, emphasis on the benefits of safe driving behaviours, and the modification of barriers are recommended.

  6. Semi-Nonparametric Estimation and Misspecification Testing of Diffusion Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis

    of the estimators and tests under the null are derived, and the power properties are analyzed by considering contiguous alternatives. Tests directly comparing the drift and diffusion estimators under the relevant null and alternative are also analyzed. Markov Bootstrap versions of the test statistics are proposed...... to improve on the finite-sample approximations. The finite sample properties of the estimators are examined in a simulation study....

  7. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in the photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series Station ALOHA. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function used to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time-dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on water-column production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized water-column production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
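
    The two photosynthesis parameters named above can be recovered by fitting a production-irradiance curve. The sketch below does this for one common saturating photosynthesis-irradiance function with simulated data and illustrative parameter values; the paper uses a suite of such functions and real profiles:

```python
import numpy as np

def platt(I, alpha, Ps):
    """One common saturating photosynthesis-irradiance function."""
    return Ps * (1.0 - np.exp(-alpha * I / Ps))

rng = np.random.default_rng(1)
I = np.linspace(5, 800, 40)                    # irradiance (arbitrary units)
P_obs = platt(I, alpha=0.05, Ps=12.0) + rng.normal(0, 0.1, I.size)

# Coarse grid search for (initial slope, assimilation number) minimising
# the sum of squared errors against the observed production values.
alphas = np.linspace(0.01, 0.1, 46)
Pss = np.linspace(5, 20, 76)
_, alpha_hat, Ps_hat = min(
    (np.sum((P_obs - platt(I, a, p)) ** 2), a, p) for a in alphas for p in Pss
)
print(round(alpha_hat, 3), round(Ps_hat, 1))
```

    A grid search is used only to keep the sketch dependency-free; any nonlinear least-squares routine would do.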

  8. Derivative Spectrophotometric Method for Estimation of Antiretroviral Drugs in Fixed Dose Combinations

    Science.gov (United States)

    P.B., Mohite; R.B., Pandhare; S.G., Khanage

    2012-01-01

    Purpose: Lamivudine is a cytidine analogue and zidovudine is a thymidine analogue; both are used as antiretroviral agents. Both drugs are available in tablet dosage forms with doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of lamivudine and zidovudine, respectively, by taking the first-order derivative spectra. The concentration of both drugs was determined by the proposed method. The results of the analysis have been validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer's law in the concentration range 10-50 μg mL-1; the regression coefficients were 0.9998 and 0.9999, the intercepts -0.0677 and -0.0043, and the slopes 0.0457 and 0.0391 for LAM and ZID, respectively. The accuracy and reproducibility results are close to 100% with 2% RSD. Conclusion: A simple, accurate, precise, sensitive and economical procedure for the simultaneous estimation of lamivudine and zidovudine in tablet dosage form has been developed. PMID:24312779
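
    A minimal sketch of first-order derivative spectrophotometry with a Beer's-law calibration, using a hypothetical Gaussian absorption band rather than the published LAM/ZID spectra:

```python
import numpy as np

# Simulated absorbance spectrum of one component: a Gaussian band with a
# hypothetical peak position and width (not the published LAM/ZID spectra).
wl = np.linspace(250, 350, 501)                    # wavelength grid, nm

def spectrum(conc, center=290.0, width=15.0):
    return conc * 0.04 * np.exp(-(((wl - center) / width) ** 2))

# First-order derivative spectra for a calibration series. Under Beer's
# law the derivative amplitude at a fixed wavelength is linear in conc.
concs = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # ug/mL
idx = np.argmin(np.abs(wl - 300.0))                # read-out wavelength
signal = np.array([np.gradient(spectrum(c), wl)[idx] for c in concs])

# Linear calibration: signal = slope * conc + intercept
slope, intercept = np.polyfit(concs, signal, 1)
r = np.corrcoef(concs, signal)[0, 1]
print(round(abs(r), 4))
```

    Reading the derivative at a fixed wavelength preserves the linearity in concentration while suppressing baseline shifts, which is the practical appeal of the technique.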

  9. Efficient Semiparametric Marginal Estimation for the Partially Linear Additive Model for Longitudinal/Clustered Data

    KAUST Repository

    Carroll, Raymond; Maity, Arnab; Mammen, Enno; Yu, Kyusang

    2009-01-01

    We consider the efficient estimation of a regression parameter in a partially linear additive nonparametric regression model from repeated measures data when the covariates are multivariate. To date, while there is some literature in the scalar covariate case, the problem has not been addressed in the multivariate additive model case. Ours represents a first contribution in this direction. As part of this work, we first describe the behavior of nonparametric estimators for additive models with repeated measures when the underlying model is not additive. These results are critical when one considers variants of the basic additive model. We apply them to the partially linear additive repeated-measures model, deriving an explicit consistent estimator of the parametric component; if the errors are in addition Gaussian, the estimator is semiparametric efficient. We also apply our basic methods to a unique testing problem that arises in genetic epidemiology; in combination with a projection argument we develop an efficient and easily computed testing scheme. Simulations and an empirical example from nutritional epidemiology illustrate our methods.

  11. Evapotranspiration estimation using a parameter-parsimonious energy partition model over Amazon basin

    Science.gov (United States)

    Xu, D.; Agee, E.; Wang, J.; Ivanov, V. Y.

    2017-12-01

    The increased frequency and severity of droughts in the Amazon region have emphasized the potential vulnerability of the rainforests to heat- and drought-induced stresses, highlighting the need to reduce the uncertainty in estimates of regional evapotranspiration (ET) and quantify the resilience of the forest. Ground-based observations for estimating ET are resource intensive, making methods based on remotely sensed observations an attractive alternative. Several methodologies have been developed to estimate ET from satellite data, but challenges remain in model parameterization, and limited satellite coverage reduces their utility for monitoring biodiverse regions. In this work, we apply a novel surface energy partition method (Maximum Entropy Production; MEP) based on Bayesian probability theory and nonequilibrium thermodynamics to derive ET time series from satellite data for the Amazon basin. For a large, sparsely monitored region such as the Amazon, this approach has the advantage of using only single-level measurements of net radiation, temperature, and specific humidity. Furthermore, it is not sensitive to the uncertainty of the input data and model parameters. In this first application of MEP theory to a tropical forest biome, we assess its performance at various spatiotemporal scales against diverse field data sets. Specifically, the objective of this work is to test the method using eddy flux data for several locations across Amazonia at sub-daily, monthly, and annual scales and compare the new estimates with those from traditional methods. Analyses of the derived ET time series will contribute to reducing the current knowledge gap surrounding the much-debated response of the Amazon basin to droughts and offer a template for monitoring long-term changes in the global hydrologic cycle due to anthropogenic and natural causes.

  12. Structure activity relationships of quinoxalin-2-one derivatives as platelet-derived growth factor-beta receptor (PDGFbeta R) inhibitors, derived from molecular modeling.

    Science.gov (United States)

    Mori, Yoshikazu; Hirokawa, Takatsugu; Aoki, Katsuyuki; Satomi, Hisanori; Takeda, Shuichi; Aburada, Masaki; Miyamoto, Ken-ichi

    2008-05-01

    We previously reported a quinoxalin-2-one compound (Compound 1) that had inhibitory activity equivalent to existing platelet-derived growth factor-beta receptor (PDGFbeta R) inhibitors. Lead optimization of Compound 1 to increase its activity and selectivity, using structural information regarding PDGFbeta R-ligand interactions, is urgently needed. Here we present models of the PDGFbeta R kinase domain complexed with quinoxalin-2-one derivatives. The models were constructed using comparative modeling, molecular dynamics (MD) and ligand docking. In particular, conformations derived from MD, and ligand binding site information presented by alpha-spheres in the pre-docking processing, allowed us to identify optimal protein structures for docking of target ligands. By carrying out molecular modeling and MD of PDGFbeta R in its inactive state, we obtained two structural models having good Compound 1 binding potentials. To distinguish the optimal candidate, we evaluated the structure-activity relationships (SAR) between the ligand-binding free energies and inhibitory activity values (IC50 values) for available quinoxalin-2-one derivatives. Consequently, a final model with a high SAR correlation was identified. This model included a molecular interaction between the hydrophobic pocket behind the ATP binding site and the substitution region of the quinoxalin-2-one derivatives. These findings should prove useful in lead optimization of quinoxalin-2-one derivatives as PDGFbeta R inhibitors.

  13. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the order of the fractal derivative depends on the temporal moment or spatial position, to characterize such anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, is also offered.

  14. Data assimilation within the Advanced Circulation (ADCIRC) modeling framework for the estimation of Manning's friction coefficient

    KAUST Repository

    Mayo, Talea; Butler, Troy; Dawson, Clint N.; Hoteit, Ibrahim

    2014-01-01

    Coastal ocean models play a major role in forecasting coastal inundation due to extreme events such as hurricanes and tsunamis. Additionally, they are used to model tides and currents under more moderate conditions. The models numerically solve the shallow water equations, which describe conservation of mass and momentum for processes with large horizontal length scales relative to the vertical length scales. The bottom stress terms that arise in the momentum equations can be defined through the Manning's n formulation, utilizing the Manning's n coefficient. The Manning's n coefficient is an empirically derived, spatially varying parameter that depends on many factors, such as the bottom surface roughness. It is critical to the accuracy of coastal ocean models; however, the coefficient is often unknown or highly uncertain. In this work we reformulate a statistical data assimilation method generally used in the estimation of model state variables to estimate this model parameter. We show that low-dimensional representations of Manning's n coefficients can be recovered by assimilating water elevation data. This is a promising approach to parameter estimation in coastal ocean modeling. © 2014 Elsevier Ltd.
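
    A toy sketch of the parameter-estimation idea: an ensemble assimilation update that corrects Manning's n from a water level observation. The forward model here is a simple steady uniform-flow relation standing in for ADCIRC's shallow water solver, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: Manning's equation for a wide rectangular channel,
# h = (n * q_unit / sqrt(S))**(3/5), standing in for the shallow water
# solver. All numbers are illustrative.
q, S, width = 5.0, 1e-4, 50.0

def depth(n):
    return (n * (q / width) / np.sqrt(S)) ** 0.6

n_true = 0.03
obs = depth(n_true) + rng.normal(0, 0.005)     # one noisy water level obs

# Ensemble Kalman-style parameter update: correlate the parameter
# ensemble with predicted observations, then nudge toward the data.
ens = rng.uniform(0.02, 0.08, size=500)        # prior ensemble of n
pred = depth(ens)
gain = np.cov(ens, pred)[0, 1] / (np.var(pred) + 0.005 ** 2)
ens_post = ens + gain * (obs - pred)
print(round(ens_post.mean(), 3))
```

    The posterior ensemble mean moves from the prior mean toward the true coefficient, which is the essence of treating a model parameter as an assimilated state variable.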

  15. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.
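
    A minimal sketch of a signal generation model of this kind: an AR(1) process whose coefficients are chosen to match a target mean and variance, with the autoregressive coefficient shaping the power spectral density. The target statistics below are invented, not the CAST estimation-error values:

```python
import numpy as np

rng = np.random.default_rng(3)

def generate_error(n, mean, var, phi=0.9):
    """AR(1) signal-generation model: matches a target mean and variance,
    with phi shaping the power spectral density (low-pass as phi -> 1)."""
    innov = rng.normal(0.0, np.sqrt(var * (1.0 - phi ** 2)), n)
    e = np.empty(n)
    e[0] = innov[0]
    for t in range(1, n):
        e[t] = phi * e[t - 1] + innov[t]
    return mean + e

# Target statistics are invented, not the CAST estimation-error values.
err = generate_error(200_000, mean=0.02, var=1e-4)
print(round(err.mean(), 3), round(err.var(), 6))
```

    The innovation variance is scaled by (1 - phi**2) so the stationary variance of the process equals the requested value regardless of phi.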

  16. Evaluation of three energy balance-based evaporation models for estimating monthly evaporation for five lakes using derived heat storage changes from a hysteresis model

    Science.gov (United States)

    Duan, Zheng; Bastiaanssen, W. G. M.

    2017-02-01

    The heat storage change (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if energy balance-based evaporation models are used. However, Qt has often been neglected in many studies due to the lack of required water temperature data. A simple hysteresis model (Qt = a*Rn + b + c*dRn/dt) has been demonstrated to reasonably estimate Qt from the readily available net all-wave radiation (Rn) and three locally calibrated coefficients (a-c) for lakes and reservoirs. As a follow-up study, we evaluated whether this hysteresis model could enable energy balance-based evaporation models to yield good evaporation estimates. Representative monthly evaporation data were compiled from published literature and used as ground truth to evaluate three energy balance-based evaporation models for five lakes. The three models, of differing complexity, are De Bruin-Keijman (DK), Penman, and a new model referred to as Duan-Bastiaanssen (DB). All three models require Qt as input. Each model was run in three scenarios differing in the input Qt (S1: measured Qt; S2: modelled Qt from the hysteresis model; S3: neglecting Qt) to evaluate the impact of Qt on the modelled evaporation. Evaluation showed that the modelled Qt agreed well with the measured counterparts for all five lakes. It was confirmed that the hysteresis model with locally calibrated coefficients can predict Qt with good accuracy for the same lake. Using modelled Qt as input, all three evaporation models yielded monthly evaporation comparable to that obtained using measured Qt as input, and significantly better than that obtained neglecting Qt, for the five lakes. The DK model, requiring minimum data, generally performed the best, followed by the Penman and DB models. This study demonstrated that once the three coefficients are locally calibrated using historical data the simple hysteresis model can offer
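
    Since the hysteresis model is linear in its coefficients, the local calibration of (a, b, c) reduces to linear least squares. A sketch with synthetic radiation data and invented coefficient values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly net radiation cycle (W m-2) and invented "true"
# coefficients -- not calibrated to any real lake.
t = np.arange(48.0)                            # months
Rn = 150 + 80 * np.sin(2 * np.pi * t / 12)
dRn_dt = np.gradient(Rn, t)                    # finite-difference dRn/dt
a, b, c = 0.5, -20.0, 8.0
Qt = a * Rn + b + c * dRn_dt + rng.normal(0, 2, t.size)

# The hysteresis model Qt = a*Rn + b + c*dRn/dt is linear in (a, b, c),
# so local calibration is ordinary linear least squares.
A = np.column_stack([Rn, np.ones_like(Rn), dRn_dt])
coef, *_ = np.linalg.lstsq(A, Qt, rcond=None)
print(coef.round(1))                           # approximately (a, b, c)
```

    The dRn/dt term is what produces the hysteresis loop between Qt and Rn over the annual cycle.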

  17. Willingness to use functional breads. Applying the Health Belief Model across four European countries.

    Science.gov (United States)

    Vassallo, Marco; Saba, Anna; Arvola, Anne; Dean, Moira; Messina, Federico; Winkelmann, Markus; Claupein, Erika; Lähteenmäki, Liisa; Shepherd, Richard

    2009-04-01

    The present study focused on the role of the Health Belief Model (HBM) in predicting willingness to use functional breads, across four European countries: UK (N=552), Italy (N=504), Germany (N=525) and Finland (N=513). The behavioural evaluation components of the HBM (the perceived benefits and barriers conceptualized respectively as perceived healthiness and pleasantness) and the health motivation component were good predictors of willingness to use functional breads whereas threat perception components (perceived susceptibility and perceived anticipated severity) failed as predictors. This result was common in all four countries and across products. The role of 'cue to action' was marginal. On the whole the HBM fit was similar across the countries and products in terms of significant predictors (the perceived benefits, barriers and health motivation) with the exception of self-efficacy which was significant only in Finland. Young consumers seemed more interested in the functional bread with a health claim promoting health rather than in reducing risk of disease, whereas the opposite was true for older people. However, functional staple foods, such as bread in this European study, are still perceived as common foods rather than as a means of avoiding diseases. Consumers seek these foods for their healthiness (the perceived benefits) as they expect them to be healthier than regular foods and for the pleasantness (the perceived barriers) as they do not expect any change in the sensory characteristics due to the addition of the functional ingredients. The importance of health motivation in willingness to use products with health claims implies that there is an opening for developing better models for explaining health-promoting food choices that take into account both food and health-related factors without making a reference to disease-related outcome.

  18. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Considering those errors explicitly, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with that by rain gauges and the reference data helps to investigate the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties were tested for RCs, they were found to be useful up to a certain level for areal rainfall estimation and discharge simulation.

  19. A 'simple' hybrid model for power derivatives

    International Nuclear Information System (INIS)

    Lyle, Matthew R.; Elliott, Robert J.

    2009-01-01

    This paper presents a method for valuing power derivatives using a supply-demand approach. Our method extends work in the field by incorporating randomness into the base load portion of the supply stack function and equating it with a noisy demand process. We obtain closed form solutions for European option prices written on average spot prices considering two different supply models: a mean-reverting model and a Markov chain model. The results are extensions of the classic Black-Scholes equation. The model provides a relatively simple approach to describe the complicated price behaviour observed in electricity spot markets and also allows for computationally efficient derivatives pricing. (author)
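
    As a rough illustration of pricing a derivative on average spot prices under a mean-reverting model, the sketch below values a European call by Monte Carlo with the log-price following an Ornstein-Uhlenbeck process. The paper derives closed-form solutions instead, and all parameter values here are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Mean-reverting (Ornstein-Uhlenbeck) log-price; all parameters invented.
kappa, mu, sigma = 3.0, np.log(40.0), 0.5      # reversion, level, volatility
r, T, K = 0.05, 1.0, 40.0                      # discount rate, maturity, strike
n_steps, n_paths = 250, 20_000
dt = T / n_steps

x = np.full(n_paths, mu)                       # start at the long-run level
avg = np.zeros(n_paths)
for _ in range(n_steps):
    x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    avg += np.exp(x) / n_steps                 # running average spot price

# Discounted Monte Carlo value of a European call on the average spot.
price = np.exp(-r * T) * np.maximum(avg - K, 0.0).mean()
print(round(price, 2))
```

    Averaging over the path damps the effective volatility relative to the terminal spot, which is why options on average prices are cheaper than their plain European counterparts.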

  20. Evaluation of breast self-examination program using Health Belief Model in female students

    Directory of Open Access Journals (Sweden)

    Mitra Moodi

    2011-01-01

    Background: Breast cancer has been considered a major health problem in females because of its high incidence in recent years. Due to the role of breast self-examination (BSE) in early diagnosis and in reducing the morbidity and mortality of breast cancer, promoting students' knowledge, capabilities and attitude is required in this regard. This study was conducted to evaluate BSE education in female university students using the Health Belief Model. Methods: In this semi-experimental study, 243 female students were selected using multi-stage randomized sampling in 2008. The data were collected by a validated and reliable questionnaire (43 questions) before the intervention and one week after the intervention. The intervention program consisted of one 120-minute educational session comprising a lecture and a film based on HBM constructs. The obtained data were analyzed in SPSS (version 11.5) using paired t-tests and ANOVA at a significance level of α = 0.05. Results: 243 female students aged 20.6 ± 2.8 years were studied. Implementing the educational program increased the knowledge and HBM (perceived susceptibility, severity, benefit and barrier) scores of the students (p ≤ 0.01). Significant increases were also observed in knowledge and perceived benefit after the educational program (p ≤ 0.05). ANOVA showed a significant difference in perceived benefit scores among students of different universities (p = 0.05). Conclusions: Given the positive effects of education on increasing the knowledge and attitude of university students about BSE, the efficacy of the HBM in BSE education for female students was confirmed.

  1. Comparative Assessment of Two Vegetation Fractional Cover Estimating Methods and Their Impacts on Modeling Urban Latent Heat Flux Using Landsat Imagery

    Directory of Open Access Journals (Sweden)

    Kai Liu

    2017-05-01

    Quantifying vegetation fractional cover (VFC) and assessing its role in heat flux modeling using medium-resolution remotely sensed data has received less attention than it deserves in heterogeneous urban regions. This study examined two approaches that are commonly used to map VFC based on Landsat imagery, the Normalized Difference Vegetation Index (NDVI)-derived and Multiple Endmember Spectral Mixture Analysis (MESMA)-derived methods, in modeling surface heat fluxes in urban landscapes. For this purpose, two different heat flux models, the Two-Source Energy Balance (TSEB) model and the Pixel Component Arranging and Comparing Algorithm (PCACA) model, were adopted for model evaluation and analysis. A comparative analysis of the NDVI-derived and MESMA-derived VFCs showed that the latter achieved more accurate estimates in complex urban regions. When the two sources of VFCs were used as inputs to both the TSEB and PCACA models, MESMA-derived urban VFC produced more accurate urban heat fluxes (Bowen ratio and latent heat flux) relative to NDVI-derived urban VFC. Moreover, our study demonstrated that Landsat imagery-retrieved VFC exhibited greater uncertainty in obtaining urban heat fluxes for the TSEB model than for the PCACA model.
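
    A minimal sketch of the NDVI-derived approach to VFC, using the common linear (dimidiate pixel) mixing formula. The soil and full-vegetation NDVI endpoints are assumptions, normally chosen per scene, and the MESMA alternative compared in this record is considerably more involved:

```python
import numpy as np

def vfc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.80):
    """NDVI-derived vegetation fractional cover via linear (dimidiate
    pixel) mixing; the bare-soil and full-vegetation endpoints are
    assumed values, normally calibrated per scene."""
    f = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(f, 0.0, 1.0)

ndvi = np.array([0.02, 0.05, 0.425, 0.80, 0.90])
print(vfc_from_ndvi(ndvi))
```

    Pixels at or below the soil endpoint map to zero cover and those at or above the vegetation endpoint map to full cover, which is one source of the saturation errors MESMA aims to reduce.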

  2. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form

  3. Evaluation of the National Research Council (2001) dairy model and derivation of new prediction equations. 1. Digestibility of fiber, fat, protein, and nonfiber carbohydrate.

    Science.gov (United States)

    White, R R; Roman-Garcia, Y; Firkins, J L; VandeHaar, M J; Armentano, L E; Weiss, W P; McGill, T; Garnett, R; Hanigan, M D

    2017-05-01

    Evaluation of ration balancing systems such as the National Research Council (NRC) Nutrient Requirements series is important for improving predictions of animal nutrient requirements and advancing feeding strategies. This work used a literature data set (n = 550) to evaluate predictions of total-tract digested neutral detergent fiber (NDF), fatty acid (FA), crude protein (CP), and nonfiber carbohydrate (NFC) estimated by the NRC (2001) dairy model. Mean biases suggested that the NRC (2001) lactating cow model overestimated true FA and CP digestibility by 26 and 7%, respectively, and under-predicted NDF digestibility by 16%. All NRC (2001) estimates had notable mean and slope biases and large root mean squared prediction error (RMSPE), and concordance (CCC) ranged from poor to good. Predicting NDF digestibility with independent equations for legumes, corn silage, other forages, and nonforage feeds improved CCC (0.85 vs. 0.76) compared with the re-derived NRC (2001) equation form (NRC equation with parameter estimates re-derived against this data set). Separate FA digestion coefficients were derived for different fat supplements (animal fats, oils, and other fat types) and for the basal diet. This equation returned improved (from 0.76 to 0.94) CCC compared with the re-derived NRC (2001) equation form. Unique CP digestibility equations were derived for forages, animal protein feeds, plant protein feeds, and other feeds, which improved CCC compared with the re-derived NRC (2001) equation form (0.74 to 0.85). New NFC digestibility coefficients were derived for grain-specific starch digestibilities, with residual organic matter assumed to be 98% digestible. A Monte Carlo cross-validation was performed to evaluate repeatability of model fit. In this procedure, data were randomly subsetted 500 times into derivation (60%) and evaluation (40%) data sets, and equations were derived using the derivation data and then evaluated against the independent evaluation data. Models
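
    The concordance (CCC) and Monte Carlo cross-validation procedure described here can be sketched in a few lines. The data below are synthetic, and the linear "digestibility" relation is only a stand-in for the derived equations:

```python
import numpy as np

def ccc(obs, pred):
    """Lin's concordance correlation coefficient (CCC), the agreement
    measure used alongside RMSPE in model evaluation."""
    mo, mp = obs.mean(), pred.mean()
    cov = ((obs - mo) * (pred - mp)).mean()
    return 2 * cov / (obs.var() + pred.var() + (mo - mp) ** 2)

rng = np.random.default_rng(8)

# Monte Carlo cross-validation: 500 random 60/40 splits of a synthetic
# data set (a linear relation with invented values, n = 550 as here).
x = rng.uniform(20, 60, 550)
y = 0.8 * x + 5 + rng.normal(0, 3, x.size)
cccs = []
for _ in range(500):
    idx = rng.permutation(x.size)
    derive, evaluate = idx[:330], idx[330:]    # 60% derivation, 40% evaluation
    slope, intercept = np.polyfit(x[derive], y[derive], 1)
    cccs.append(ccc(y[evaluate], slope * x[evaluate] + intercept))
print(round(float(np.mean(cccs)), 2))
```

    Unlike plain correlation, CCC penalizes both location and scale shifts between observed and predicted values, so it doubles as a check on mean and slope bias.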

  4. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
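
    A deliberately simplified sketch of PDE parameter estimation: recovering the diffusion coefficient of the heat equation from noisy gridded observations by least squares on finite-difference derivatives. The parameter cascading and Bayesian methods of this record avoid this direct differentiation, which is noise-sensitive, by smoothing with basis functions first:

```python
import numpy as np

# Heat equation u_t = theta * u_xx with exact solution
# u(x, t) = exp(-theta * t) * sin(x); theta is the unknown parameter.
theta_true = 0.4
x = np.linspace(0.0, np.pi, 101)
t = np.linspace(0.0, 0.5, 101)
X, T = np.meshgrid(x, t, indexing="ij")
U = np.exp(-theta_true * T) * np.sin(X)

rng = np.random.default_rng(6)
U_obs = U + rng.normal(0, 1e-4, U.shape)       # noisy observations

# Finite-difference derivatives, then least squares for the single
# coefficient: theta_hat = <u_t, u_xx> / <u_xx, u_xx>.
u_t = np.gradient(U_obs, t, axis=1)
u_xx = np.gradient(np.gradient(U_obs, x, axis=0), x, axis=0)
mask = np.abs(u_xx) > 0.1                      # avoid near-zero second derivative
theta_hat = np.sum(u_t[mask] * u_xx[mask]) / np.sum(u_xx[mask] ** 2)
print(round(theta_hat, 2))
```

    With larger noise the twice-differenced u_xx degrades quickly and attenuates the estimate toward zero, which is precisely the errors-in-variables problem the basis-expansion methods are designed to avoid.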

  5. Predicting intention to attend and actual attendance at a universal parent-training programme: a comparison of social cognition models.

    Science.gov (United States)

    Thornton, Sarah; Calam, Rachel

    2011-07-01

    The predictive validity of the Health Belief Model (HBM) and the Theory of Planned Behaviour (TPB) was examined in relation to 'intention to attend' and 'actual attendance' at a universal parent-training intervention for parents of children with behavioural difficulties. A validation and reliability study was conducted to develop two questionnaires (N = 108 parents of children aged 4-7). These questionnaires were then used to investigate the predictive validity of the two models in relation to 'intention to attend' and 'actual attendance' at a parent-training intervention (N = 53 parents of children aged 4-7). Both models significantly predicted 'intention to attend a parent-training group'; however, the TPB accounted for more variance in the outcome variable than the HBM. Preliminary investigations highlighted that attendees were more likely to intend to attend the groups, have positive attitudes towards the groups, perceive important others as having positive attitudes towards the groups, and report elevated child problem behaviour scores. These findings provide useful information regarding the belief-based factors that affect attendance at universal parent-training groups. Possible interventions aimed at increasing 'intention to attend' and 'actual attendance' at parent-training groups are discussed.

  6. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under the uncongested condition. This model enables speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model with a lane-use model which estimates traffic distribution across the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and this shows increased accuracy of the estimation.

  7. Theoretical Derivation of Simplified Evaluation Models for the First Peak of a Criticality Accident in Nuclear Fuel Solution

    International Nuclear Information System (INIS)

    Nomura, Yasushi

    2000-01-01

    In a reprocessing facility where nuclear fuel solutions are processed, one could observe a series of power peaks, with the highest peak right after a criticality accident. The criticality alarm system (CAS) is designed to detect the first power peak and warn workers near the reacting material by sounding alarms immediately. Consequently, exposure of the workers would be minimized by an immediate and effective evacuation. Therefore, in the design and installation of a CAS, it is necessary to estimate the magnitude of the first power peak and to set up the threshold point where the CAS initiates the alarm. Furthermore, it is necessary to estimate the level of potential exposure of workers in the case of accidents so as to decide the appropriateness of installing a CAS for a given compartment. A simplified evaluation model to estimate the minimum scale of the first power peak during a criticality accident is derived by theoretical considerations only for use in the design of a CAS to set up the threshold point triggering the alarm signal. Another simplified evaluation model is derived in the same way to estimate the maximum scale of the first power peak for use in judging the appropriateness for installing a CAS. Both models are shown to have adequate margin in predicting the minimum and maximum scale of criticality accidents by comparing their results with French CRiticality occurring ACcidentally (CRAC) experimental data.

  8. Parameter Estimation of a Delay Time Model of Wearing Parts Based on Objective Data

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2015-01-01

    Full Text Available The wearing parts of a system have a very high failure frequency, making it necessary to carry out continual functional inspections and maintenance to protect the system from unscheduled downtime. This allows for the collection of a large amount of maintenance data. Taking the unique characteristics of the wearing parts into consideration, we establish their respective delay time models in ideal inspection cases and nonideal inspection cases. The model parameters are estimated entirely using the collected maintenance data. Then, a likelihood function of all renewal events is derived based on their occurring probability functions, and the model parameters are calculated with the maximum likelihood function method, which is solved by the CRM. Finally, using two wearing parts from the oil and gas drilling industry as examples—the filter element and the blowout preventer rubber core—the parameters of the distribution function of the initial failure time and the delay time for each example are estimated, and their distribution functions are obtained. Such parameter estimation based on objective data will contribute to the optimization of the reasonable function inspection interval and will also provide some theoretical models to support the integrity management of equipment or systems.

  9. On the use of Monte Carlo-derived dosimetric data in the estimation of patient dose from CT examinations

    International Nuclear Information System (INIS)

    Perisinakis, Kostas; Tzedakis, Antonis; Damilakis, John

    2008-01-01

    The purpose of this work was to investigate the applicability and appropriateness of Monte Carlo-derived normalized data to provide accurate estimations of patient dose from computed tomography (CT) exposures. Monte Carlo methodology and mathematical anthropomorphic phantoms were used to simulate standard patient CT examinations of the head, thorax, abdomen, and trunk performed on a multislice CT scanner. Phantoms were generated to simulate the average adult individual and two individuals with different body sizes. Normalized dose values for all radiosensitive organs and normalized effective dose values were calculated for standard axial and spiral CT examinations. Discrepancies in CT dosimetry using Monte Carlo-derived coefficients originating from the use of: (a) Conversion coefficients derived for axial CT exposures, (b) a mathematical anthropomorphic phantom of standard body size to derive conversion coefficients, and (c) data derived for a specific CT scanner to estimate patient dose from CT examinations performed on a different scanner, were separately evaluated. The percentage differences between the normalized organ dose values derived for contiguous axial scans and the corresponding values derived for spiral scans with pitch=1 and the same total scanning length were up to 10%, while the corresponding percentage differences in normalized effective dose values were less than 0.7% for all standard CT examinations. The normalized organ dose values for standard spiral CT examinations with pitch 0.5-1.5 were found to differ from the corresponding values derived for contiguous axial scans divided by the pitch, by less than 14% while the corresponding percentage differences in normalized effective dose values were less than 1% for all standard CT examinations. 
Normalized effective dose values for the standard contiguous axial CT examinations derived by Monte Carlo simulation were found to considerably decrease with increasing body size of the mathematical phantom

  10. The estimation of derived limits

    International Nuclear Information System (INIS)

    Harrison, N.T.; Bryant, P.M.; Clarke, R.H.; Morley, F.

    1979-08-01

    In practical radiation protection, it is often necessary to calculate limits of intake of radionuclides associated with various quantities; such limits are needed, for example, to assess the adequacy of the control of environmental contamination. In publication 26 of the International Commission on Radiological Protection (ICRP), these limits, when related to the basic limits of dose-equivalent by a defined model, are referred to as Derived Limits (DLs). In the present report the principles to be adopted by the Board in calculating DLs to be recommended for general application within the United Kingdom are outlined. DLs will be recommended for a wide range of radionuclides and for circumstances relevant to the workplace, and, more frequently, the general environment. The latter will include DLs in foodstuffs and associated environmental materials, such as soil and grass, and DLs for discharges from stacks. DLs will be related to dose equivalents for workers or members of the public for stochastic or non-stochastic effects as appropriate. Consideration will be given to relevant data on radiosensitivity, metabolism and dosimetry for children and to the physicochemical forms of radionuclides. (author)

  11. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    Science.gov (United States)

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
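    The smoothing-based two-stage idea that the paper improves on can be sketched in a few lines. This is a deliberately crude stand-in, assuming a toy model dy/dt = -θy, with moving-average smoothing and finite differences in place of the constrained local polynomial regression the authors propose:

```python
import numpy as np

# Toy two-stage pseudo-least squares sketch (not the authors' constrained
# estimator): for dy/dt = -theta * y, smooth noisy observations, estimate
# the derivative numerically, then solve for theta by least squares.
rng = np.random.default_rng(0)
theta_true = 0.8
t = np.linspace(0.0, 5.0, 201)
y_obs = np.exp(-theta_true * t) + rng.normal(scale=0.01, size=t.size)

# Stage 1: crude smoothing with a moving average (a stand-in for local
# polynomial regression).
kernel = np.ones(9) / 9.0
y_smooth = np.convolve(y_obs, kernel, mode="same")

# Stage 2: numerical derivative, then a least-squares fit of dy/dt = -theta*y
# on interior points (the edges are distorted by the convolution).
dy = np.gradient(y_smooth, t)
interior = slice(10, -10)
theta_hat = -np.sum(dy[interior] * y_smooth[interior]) / np.sum(y_smooth[interior] ** 2)
print(round(theta_hat, 2))
```

The constrained approach in the paper improves on exactly this pipeline by forcing the smoother to respect the differential-equation constraints rather than smoothing and fitting in two independent stages.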

  12. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  13. Bayesian estimation and entropy for economic dynamic stochastic models: An exploration of overconsumption

    International Nuclear Information System (INIS)

    Argentiero, Amedeo; Bovi, Maurizio; Cerqueti, Roy

    2016-01-01

    This paper examines psycho-induced overconsumption in a dynamic stochastic context. As emphasized by well-established psychological results, these psycho-distortions derive from decision making based on simple rules of thumb, not on analytically sound optimization. To this end, we compare two New Keynesian models. The first is populated by optimizing Muth-rational agents and acts as the normative benchmark. The other is a “psycho-perturbed” version of the benchmark that allows for the potential presence of overoptimism and, hence, of overconsumption. The parameters of these models are estimated through a Bayesian-type procedure, and performances are evaluated by employing an entropy measure. Such methodologies are particularly appropriate here since they take into full consideration the complexity generated by the randomness of the considered systems. In particular, they allow us to derive non-negligible information on the size and on the cyclical properties of the biases. In line with cognitive psychology suggestions, our evidence shows that overoptimism/overconsumption is: widespread—it is detected in nation-wide data; persistent—it emerges in full-sample estimations; and it moves according to the expected cyclical behavior—larger in booms, disappearing in crises. Moreover, by taking into account the effect of these psycho-biases, the model fits actual data better than the benchmark. All considered, then, enhancing the existing literature, our findings i) sustain the importance of inserting psychological distortions in macroeconomic models and ii) underline that system dynamics and psycho-biases have statistically significant and economically important connections.

  14. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  15. Modeling of heat conduction via fractional derivatives

    Science.gov (United States)

    Fabrizio, Mauro; Giorgi, Claudio; Morro, Angelo

    2017-09-01

    The modeling of heat conduction is considered by letting the time derivative, in the Cattaneo-Maxwell equation, be replaced by a derivative of fractional order. The purpose of this new approach is to overcome some drawbacks of the Cattaneo-Maxwell equation, for instance possible fluctuations which violate the non-negativity of the absolute temperature. Consistency with thermodynamics is shown to hold for a suitable free energy potential, that is in fact a functional of the summed history of the heat flux, subject to a suitable restriction on the set of admissible histories. Compatibility with wave propagation at a finite speed is investigated in connection with temperature-rate waves. It follows that though, as expected, this is the case for the Cattaneo-Maxwell equation, the model involving the fractional derivative does not allow the propagation at a finite speed. Nevertheless, this new model provides a good description of wave-like profiles in thermal propagation phenomena, whereas Fourier's law does not.
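    For reference, the equations at issue can be written out. The fractional form below is one common Caputo-type generalization of the Cattaneo-Maxwell law and may differ in detail from the paper's exact formulation:

```latex
% Cattaneo-Maxwell heat conduction with relaxation time \tau:
\tau \, \partial_t \mathbf{q} + \mathbf{q} = -\kappa \nabla\theta
% A common fractional generalization replaces \partial_t by a Caputo
% derivative of order \alpha \in (0,1):
\tau^{\alpha} \, D_t^{\alpha} \mathbf{q} + \mathbf{q} = -\kappa \nabla\theta,
\qquad
D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\, ds
```

As α → 1 the fractional equation formally recovers the Cattaneo-Maxwell law, while for α &lt; 1 the memory kernel becomes weakly singular, which is what destroys finite-speed propagation in the result described above.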

  16. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
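    As an illustration of the cheapest end of this spectrum, a Young-and-Tsai-style algebraic pressure-drop model can be evaluated directly. The functional form, coefficients, and geometry below are illustrative assumptions for a sketch, not the specific reduced-order model compared in the study:

```python
import math

# Illustrative algebraic stenosis pressure-drop model (Young & Tsai form):
# a viscous (Poiseuille) term plus a turbulent expansion-loss term.
# All coefficients below are assumptions for illustration.
def stenosis_dp(q, d0, ds, length, mu=3.5e-3, rho=1060.0, kt=1.52):
    """Pressure drop (Pa) across a stenosis for flow q (m^3/s).

    d0, ds: normal and stenotic diameters (m); length: stenosis length (m).
    """
    a0 = math.pi * d0**2 / 4.0
    as_ = math.pi * ds**2 / 4.0
    viscous = 128.0 * mu * length / (math.pi * ds**4) * q
    turbulent = 0.5 * rho * kt * (1.0 / as_ - 1.0 / a0) ** 2 * q * abs(q)
    return viscous + turbulent

def ffr(pa_mmhg, q, d0, ds, length):
    """FFR = distal pressure / aortic pressure, with the drop in mmHg."""
    dp_mmhg = stenosis_dp(q, d0, ds, length) / 133.322
    return (pa_mmhg - dp_mmhg) / pa_mmhg

# 50% diameter stenosis under hyperemic flow of ~4 mL/s:
print(round(ffr(90.0, 4.0e-6, 3.0e-3, 1.5e-3, 1.0e-2), 3))
```

The appeal of such a model is that it needs only lumen geometry and an assumed hyperemic flow, so it can screen lesions in milliseconds before any 3D simulation is attempted.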

  17. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with Quasi-likelihood (QL) and Asymptotic Quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator
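    The ARCH(1) structure itself is easy to exhibit. The sketch below simulates the process and recovers its parameters by regressing squared returns on lagged squared returns; this simple moment-based fit is a stand-in for illustration, not the paper's QL/AQL estimators:

```python
import numpy as np

# Simulate ARCH(1): y_t = sigma_t * eps_t, sigma_t^2 = a0 + a1 * y_{t-1}^2.
rng = np.random.default_rng(3)
a0_true, a1_true = 0.2, 0.5
n = 100_000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = a0_true + a1_true * y[t - 1] ** 2
    y[t] = sigma2 ** 0.5 * rng.normal()

# Since E[y_t^2 | y_{t-1}] = a0 + a1 * y_{t-1}^2, an OLS regression of the
# squared series on its lag is a consistent (if inefficient) estimator.
x = np.column_stack([np.ones(n - 1), y[:-1] ** 2])
(a0_hat, a1_hat), *_ = np.linalg.lstsq(x, y[1:] ** 2, rcond=None)
print(round(a0_hat, 2), round(a1_hat, 2))
```

The QL/AQL estimators studied in the paper improve on this kind of moment fit by weighting observations through the (possibly unknown) conditional variance.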

  18. Satellite-derived land covers for runoff estimation using SCS-CN method in Chen-You-Lan Watershed, Taiwan

    Science.gov (United States)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2017-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method, which was originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on the hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use map is limited due to the high cost of production, so the classification of satellite images is the alternative method to obtain the land use map. In this study, Normalized Difference Vegetation Index (NDVI) in Chen-You-Lan Watershed was derived from dry and wet season of Landsat imageries during 2003 - 2008. Land covers were interpreted from mean value and standard deviation of NDVI and were categorized into 4 groups i.e. forest, grassland, agriculture and bare land. Then, the runoff volume of typhoon events during 2005 - 2009 were estimated using SCS-CN method and verified with the measured runoff data. The result showed that the model efficiency coefficient is 90.77%. Therefore, estimating runoff by using the land cover map classified from satellite images is practicable.
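    The SCS-CN computation at the heart of this workflow is compact. A minimal sketch in the standard metric form of the curve-number equations; the CN values in the example are typical textbook values, not those calibrated for the Chen-You-Lan Watershed:

```python
def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from rainfall p_mm via the SCS-CN method.

    Potential retention S = 25400/CN - 254 (mm); runoff
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, with initial abstraction
    Ia = ia_ratio * S (0.2 by convention).
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 100 mm storm over forest (CN ~ 70) vs. bare land (CN ~ 90).
print(round(scs_runoff(100.0, 70), 1), round(scs_runoff(100.0, 90), 1))
```

This dependence of CN on land cover is why an up-to-date satellite-derived cover map matters: reclassifying a pixel from forest to bare land roughly doubles the predicted runoff in the example above.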

  19. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rate using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment II only partially resolved the underestimation observed in Experiment I. Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
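    Converting reflectivity to rain rate under a chosen Z-R relationship is a one-line inversion. A sketch using the commonly quoted WSR-88D convective default (Z = 300 R^1.4) and a typical tropical relationship (Z = 250 R^1.2); the exact relationships used in the experiments may differ:

```python
def rain_rate(dbz, a=300.0, b=1.4):
    """Rain rate R (mm/h) from reflectivity (dBZ) via Z = a * R**b.

    Defaults are the widely used WSR-88D convective relationship
    Z = 300 R^1.4; Z = 250 R^1.2 is a common tropical relationship.
    """
    z = 10.0 ** (dbz / 10.0)  # dBZ -> linear reflectivity factor (mm^6/m^3)
    return (z / a) ** (1.0 / b)

# At 40 dBZ the tropical Z-R yields a noticeably higher rate than the
# default, which is the underestimation mechanism the classification fixes:
print(round(rain_rate(40.0), 1), round(rain_rate(40.0, a=250.0, b=1.2), 1))
```

This is why misclassifying tropical warm rain as ordinary convection underestimates rainfall: at the same reflectivity, the tropical relationship implies substantially more rain.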

  20. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    Science.gov (United States)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Estimated mass change from GRACE spherical harmonic solutions has north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters like decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post processing, the noise and errors in spherical harmonic solutions introduced very clear high frequency components in the spatial domain. By removing these high frequency components and preserving the overall pattern of the signal, we obtained better mass estimates with minimum leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes which are in good agreement with the leakage corrected (forward modeled) SH results.

  1. Data assimilation within the Advanced Circulation (ADCIRC) modeling framework for the estimation of Manning's friction coefficient

    KAUST Repository

    Mayo, Talea

    2014-04-01

    Coastal ocean models play a major role in forecasting coastal inundation due to extreme events such as hurricanes and tsunamis. Additionally, they are used to model tides and currents under more moderate conditions. The models numerically solve the shallow water equations, which describe conservation of mass and momentum for processes with large horizontal length scales relative to the vertical length scales. The bottom stress terms that arise in the momentum equations can be defined through the Manning's n formulation, utilizing the Manning's n coefficient. The Manning's n coefficient is an empirically derived, spatially varying parameter, and depends on many factors such as the bottom surface roughness. It is critical to the accuracy of coastal ocean models; however, the coefficient is often unknown or highly uncertain. In this work we reformulate a statistical data assimilation method generally used in the estimation of model state variables to estimate this model parameter. We show that low-dimensional representations of Manning's n coefficients can be recovered by assimilating water elevation data. This is a promising approach to parameter estimation in coastal ocean modeling. © 2014 Elsevier Ltd.
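    The idea of recovering a friction parameter from water-elevation data can be illustrated with a toy scalar ensemble update. The linear "forward model" below is invented for illustration (it is not ADCIRC), and the method shown is a plain perturbed-observation ensemble Kalman update rather than the paper's reformulated assimilation scheme:

```python
import numpy as np

# Toy ensemble update for a scalar Manning's n: treat the parameter as the
# quantity being assimilated, using its ensemble covariance with a water
# elevation "observation" produced by a made-up monotone forward model.
rng = np.random.default_rng(1)

def water_level(n):
    return 2.0 + 8.0 * n  # placeholder forward model: elevation vs. friction

n_true = 0.035
h_obs = water_level(n_true)
obs_var = 0.01 ** 2

# Prior ensemble of Manning's n values, deliberately biased high.
ens = rng.normal(0.05, 0.01, size=500)
h_ens = water_level(ens)

# Kalman gain from the ensemble (co)variances, then update each member
# against a perturbed observation.
k = np.cov(ens, h_ens)[0, 1] / (np.var(h_ens) + obs_var)
ens_post = ens + k * (h_obs + rng.normal(0.0, 0.01, 500) - h_ens)
print(round(ens_post.mean(), 3))
```

The posterior ensemble both shifts toward the value consistent with the observed elevation and contracts, which is the behavior the paper exploits at much higher dimension.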

  2. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree.
Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT, the MP-EST method is robust to a small amount of HGT in the

  3. Estimating genetic effect sizes under joint disease-endophenotype models in presence of gene-environment interactions

    Directory of Open Access Journals (Sweden)

    Alexandre eBureau

    2015-07-01

    Full Text Available Effects of genetic variants on the risk of complex diseases estimated from association studies are typically small. Nonetheless, variants may have important effects in presence of specific levels of environmental exposures, and when a trait related to the disease (endophenotype is either normal or impaired. We propose polytomous and transition models to represent the relationship between disease, endophenotype, genotype and environmental exposure in family studies. Model coefficients were estimated using generalized estimating equations and were used to derive gene-environment interaction effects and genotype effects at specific levels of exposure. In a simulation study, estimates of the effect of a genetic variant were substantially higher when both an endophenotype and an environmental exposure modifying the variant effect were taken into account, particularly under transition models, compared to the alternative of ignoring the endophenotype. Illustration of the proposed modeling with the metabolic syndrome, abdominal obesity, physical activity and polymorphisms in the NOX3 gene in the Quebec Family Study revealed that the positive association of the A allele of rs1375713 with the metabolic syndrome at high levels of physical activity was only detectable in subjects without abdominal obesity, illustrating the importance of taking into account the abdominal obesity endophenotype in this analysis.

  4. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it is still insufficient in the speed of convergence and the quality of solutions. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning Based Optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. There are a large number of methods for effort estimation, of which COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized yet. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on the NASA software project dataset, and the obtained results indicated that the improved parameters provided better estimation capabilities compared to the original COCOMO II model.
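    For context, the COCOMO II effort equation whose parameters such optimizers tune has the following shape. A and B below are the published COCOMO II.2000 calibration values, and the nominal scale-factor ratings in the example are illustrative, not values from the NASA dataset:

```python
def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.94, b=0.91):
    """COCOMO II effort (person-months): PM = A * Size^E * product(EM),
    where E = B + 0.01 * sum(scale factors).

    A and B default to the COCOMO II.2000 calibration; meta-heuristic
    tuning of the kind described above adjusts these constants.
    """
    e = b + 0.01 * sum(scale_factors)
    pm = a * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

# A 100 KSLOC project with roughly nominal scale factors and all 17
# effort multipliers at nominal (1.0):
print(round(cocomo2_effort(100.0, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0] * 17), 1))
```

Because effort is exponential in the scale-factor sum and multiplicative in the 17 effort multipliers, small calibration errors in A, B, or the multipliers compound quickly, which is what motivates optimizing them against historical project data.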

  5. Estimation of ibuprofen and famotidine in tablets by second order derivative spectrophotometry method

    Directory of Open Access Journals (Sweden)

    Dimal A. Shah

    2017-02-01

    Full Text Available A simple and accurate method for the analysis of ibuprofen (IBU) and famotidine (FAM) in their combined dosage form was developed using second order derivative spectrophotometry. IBU and FAM were quantified using second derivative responses at 272.8 nm and 290 nm in the spectra of their solutions in methanol. The calibration curves were linear in the concentration range of 100–600 μg/mL for IBU and 5–25 μg/mL for FAM. The method was validated and found to be accurate and precise. The developed method was successfully applied for the estimation of IBU and FAM in their combined dosage form.
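    The core of derivative spectrophotometry can be sketched numerically: differentiating the absorbance spectrum twice suppresses an additive linear baseline, leaving a response that is linear in concentration. The band shape and baseline below are invented for illustration; only the 272.8 nm analyte wavelength is taken from the abstract:

```python
import numpy as np

# Sketch of second-order derivative spectrophotometry. A synthetic spectrum
# is a concentration-scaled Gaussian band (centred at 272.8 nm) sitting on a
# linear baseline; the second derivative removes the baseline exactly.
wl = np.linspace(250.0, 320.0, 701)  # wavelength grid, 0.1 nm step

def spectrum(conc, baseline_slope):
    band = conc * np.exp(-((wl - 272.8) / 8.0) ** 2)   # analyte band
    return band + 0.2 + baseline_slope * (wl - 250.0)  # plus linear baseline

idx = np.argmin(np.abs(wl - 272.8))
amplitudes = []
for c in [1.0, 2.0, 3.0, 4.0]:
    d2 = np.gradient(np.gradient(spectrum(c, 0.01), wl), wl)
    amplitudes.append(d2[idx])

# The derivative response at the analyte wavelength is linear in
# concentration and independent of the baseline, so it can serve as the
# calibration signal.
slope = (amplitudes[-1] - amplitudes[0]) / 3.0
print(all(abs(a - amplitudes[0] - slope * i) < 1e-9 for i, a in enumerate(amplitudes)))
```

In practice the same trick lets one analyte be read at a wavelength where the second derivative of the other analyte's band passes through zero, which is how overlapping spectra are resolved without separation.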

  6. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  7. Price models for oil derivates in Slovenia

    International Nuclear Information System (INIS)

    Nemac, F.; Saver, A.

    1995-01-01

    In Slovenia, a law is currently applied according to which any change in the price of oil derivatives is subject to the Governmental approval. Following the target of getting closer to the European Union, the necessity has arisen of finding ways for the introduction of liberalization or automated approach to price modifications depending on oscillations of oil derivative prices on the world market and the rate of exchange of the American dollar. It is for this reason that at the Agency for Energy Restructuring we made a study for the Ministry of Economic Affairs and Development regarding this issue. We analysed the possible models for the formation of oil derivative prices for Slovenia. Based on the assessment of experiences of primarily the west European countries, we proposed three models for the price formation for Slovenia. In future, it is expected that the Government of the Republic of Slovenia will make a selection of one of the proposed models to be followed by enforcement of price liberalization. The paper presents two representative models for price formation as used in Austria and Portugal. In the continuation the authors analyse the application of three models that they find suitable for the use in Slovenia. (author)

  8. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  9. Estimation of Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for the more elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. Because BMA predictions depend on the model posterior weights (or marginal likelihoods), this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate and stable in estimating the conceptual models' marginal likelihoods: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those obtained with the other estimators, and BMA-TIE has better predictive performance than the other BMA predictions. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
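
The relative behaviour of the harmonic mean and thermodynamic integration estimators can be reproduced on a conjugate toy model where the marginal likelihood is known exactly. The Python sketch below is my own construction, not the study's groundwater model: it uses y_i ~ N(μ, 1) with prior μ ~ N(0, 1), so every power posterior is Gaussian and can be sampled directly rather than by MCMC.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.special import logsumexp

rng = np.random.default_rng(1)
n = 30
y = rng.normal(0.5, 1.0, n)     # data; model: y_i ~ N(mu, 1), prior mu ~ N(0, 1)
s = y.sum()

# Conjugacy gives the log marginal likelihood in closed form for reference.
exact = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                   cov=np.eye(n) + np.ones((n, n)))

def loglik(mu):
    # log p(y | mu) evaluated for an array of mu values
    return norm.logpdf(y[:, None], loc=mu, scale=1.0).sum(axis=0)

# Harmonic mean estimator (HME): 1/m = E_post[1/L], using exact posterior draws.
post = rng.normal(s / (n + 1), np.sqrt(1.0 / (n + 1)), 20000)
ll = loglik(post)
hme = -(logsumexp(-ll) - np.log(ll.size))

# Thermodynamic integration (TIE): log m = integral_0^1 E_{p_t}[log L] dt over
# the power posteriors p_t ~ prior * L^t (Gaussian here, so drawn exactly).
ts = np.linspace(0.0, 1.0, 21) ** 3          # denser grid near t = 0
e_logl = []
for t in ts:
    prec = 1.0 + t * n
    mu_t = rng.normal(t * s / prec, np.sqrt(1.0 / prec), 4000)
    e_logl.append(loglik(mu_t).mean())
e_logl = np.array(e_logl)
tie = np.sum(np.diff(ts) * (e_logl[1:] + e_logl[:-1]) / 2)  # trapezoid rule

print(round(exact, 1), round(tie, 1), round(hme, 1))
```

On toy runs like this, TIE typically lands very close to the exact value while HME is noticeably less stable, consistent with the study's ranking of the estimators.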

  10. Estimating model parameters in nonautonomous chaotic systems using synchronization

    International Nuclear Information System (INIS)

    Yang, Xiaoli; Xu, Wei; Sun, Zhongkui

    2007-01-01

    In this Letter, a technique is addressed for estimating unknown model parameters of multivariate, in particular nonautonomous, chaotic systems from time series of state variables. The technique combines an adaptive strategy for tracking unknown parameters with a linear feedback coupling for synchronizing systems; general conditions ensuring precise evaluation of the unknown parameters and identical synchronization between the experimental system and its corresponding receiver are then derived analytically by means of the periodic version of the LaSalle invariance principle for differential equations. Examples are presented employing a parametrically excited new 4D oscillator and an additionally excited Ueda oscillator. The results of computer simulations reveal that the technique not only quickly tracks the desired parameter values but also responds rapidly to changes in operating parameters. In addition, the technique is favorably robust against noise when the experimental system is corrupted by bounded disturbance, and the normalized absolute error of parameter estimation grows almost linearly with the cutoff value of the noise strength in simulation.
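
The combination of linear feedback synchronization with an adaptive parameter law can be illustrated on a scalar nonautonomous toy system (not the oscillators used in the Letter; all constants below are chosen arbitrarily). With error e = y − x and Lyapunov function V = e²/2 + (â − a)²/(2γ), the adaptive law â' = γ·e·y gives V̇ = −(a + k)e², so e decays and the persistent sinusoidal drive lets â converge to a:

```python
import numpy as np

# Toy scalar nonautonomous drive system: x' = -a*x + sin(t), a unknown.
# Receiver y uses linear feedback (gain k) plus an adaptive parameter law.
a_true, k, gamma = 2.0, 5.0, 8.0          # all values assumed for illustration
dt, steps = 1e-3, 200_000                 # explicit Euler out to t = 200
x, y, a_hat = 1.0, -0.5, 0.0
for i in range(steps):
    drive = np.sin(i * dt)
    e = y - x                             # synchronization error
    a_new = a_hat + dt * gamma * e * y    # adaptive law: a_hat' = gamma*e*y
    x_new = x + dt * (-a_true * x + drive)
    y_new = y + dt * (-a_hat * y + drive - k * e)
    x, y, a_hat = x_new, y_new, a_new
print(round(a_hat, 2))  # converges toward the true value a_true = 2.0
```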

  11. 'Sink or swim': an evaluation of the clinical characteristics of individuals with high bone mass.

    LENUS (Irish Health Repository)

    Gregson, C L

    2011-04-01

    High bone mineral density on routine dual energy X-ray absorptiometry (DXA) may indicate an underlying skeletal dysplasia. Two hundred fifty-eight individuals with unexplained high bone mass (HBM), 236 relatives (41% with HBM) and 58 spouses were studied. Cases could not float, and had mandible enlargement, extra bone, broad frames, larger shoe sizes and increased body mass index (BMI). HBM cases may harbour an underlying genetic disorder. INTRODUCTION: High bone mineral density is a sporadic incidental finding on routine DXA scanning of apparently asymptomatic individuals. Such individuals may have an underlying skeletal dysplasia, as seen in LRP5 mutations. We aimed to characterize unexplained HBM and determine the potential for an underlying skeletal dysplasia. METHODS: Two hundred fifty-eight individuals with unexplained HBM (defined as L1 Z-score ≥ +3.2 plus total hip Z-score ≥ +1.2, or total hip Z-score ≥ +3.2) were recruited from 15 UK centres by screening 335,115 DXA scans; unexplained HBM affected 0.181% of scans. Next, 236 relatives were recruited, of whom 94 (41%) had HBM (defined as L1 Z-score + total hip Z-score ≥ +3.2). Fifty-eight spouses were also recruited, together with the unaffected relatives, as controls. Phenotypes of cases and controls, obtained from clinical assessment, were compared using random-effects linear and logistic regression models, clustered by family and adjusted for confounders including age and sex. RESULTS: Individuals with unexplained HBM had an excess of sinking when swimming (adjusted odds ratio 7.11 [95% confidence interval 3.65, 13.84], p < 0.001), mandible enlargement (4.16 [2.34, 7.39], p < 0.001), extra bone at tendon/ligament insertions (2.07 [1.13, 3.78], p = 0.018) and a broad frame (3.55 [2.12, 5.95], p < 0.001). HBM cases also had a larger shoe size (mean difference 0.4 [0.1, 0.7] UK sizes, p = 0.009) and increased BMI (mean difference 2.2 [1.3, 3.1] kg/m²).

  12. Protein model discrimination using mutational sensitivity derived from deep sequencing.

    Science.gov (United States)

    Adkar, Bharat V; Tripathi, Arti; Sahoo, Anusmita; Bajaj, Kanika; Goswami, Devrishi; Chakrabarti, Purbani; Swarnkar, Mohit K; Gokhale, Rajesh S; Varadarajan, Raghavan

    2012-02-08

    A major bottleneck in protein structure prediction is the selection of correct models from a pool of decoys. Relative activities of ∼1,200 individual single-site mutants in a saturation library of the bacterial toxin CcdB were estimated by determining their relative populations using deep sequencing. This phenotypic information was used to define an empirical score for each residue (RankScore), which correlated with the residue depth, and identify active-site residues. Using these correlations, ∼98% of correct models of CcdB (RMSD ≤ 4Å) were identified from a large set of decoys. The model-discrimination methodology was further validated on eleven different monomeric proteins using simulated RankScore values. The methodology is also a rapid, accurate way to obtain relative activities of each mutant in a large pool and derive sequence-structure-function relationships without protein isolation or characterization. It can be applied to any system in which mutational effects can be monitored by a phenotypic readout. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    Science.gov (United States)

    Devi, Y. D.; Kota, V. K. B.

    1993-07-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  14. Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example

    International Nuclear Information System (INIS)

    Devi, Y.D.; Kota, V.K.B.

    1993-01-01

    A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.

  15. Benefits of multidisciplinary collaboration for earthquake casualty estimation models: recent case studies

    Science.gov (United States)

    So, E.

    2010-12-01

    Earthquake casualty loss estimation, which depends primarily on building-specific casualty rates, has long suffered from a lack of cross-disciplinary collaboration in post-earthquake data gathering. Improving our understanding of what contributes to casualties in earthquakes requires coordinated data-gathering efforts across disciplines; these are essential for improved global casualty estimation models. It is evident from examining past casualty loss models and reviewing field data collected from recent events that generalized casualty rates cannot be applied globally for different building types, even within individual countries. For a particular structure type, regional and topographic building design effects, combined with variable material and workmanship quality, all contribute to this multi-variant outcome. In addition, social factors affect building-specific casualty rates, including social status, education levels and human behaviour in general, in that they modify egress and survivability rates. Without considering these complex physical pathways, loss models based purely on historic casualty data, or worse, on rates derived from other countries, will be of very limited value. Moreover, as the world's population, housing stock, and living and cultural environments change, methods of loss modeling must accommodate these variables, especially when considering casualties. To truly take advantage of observed earthquake losses, damage surveys not only need better coordination of international and national reconnaissance teams, but these teams must also integrate different areas of expertise, including engineering, public health and medicine. Research is needed to find methods to achieve consistent and practical ways of collecting and modeling casualties in earthquakes. International collaboration will also be necessary to transfer such expertise and resources to the communities and cities which most need them. Coupling the theories and findings from

  16. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
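
The practical content of a multiplicative error model can be sketched with an iteratively reweighted least-squares fit, in which the weights are proportional to the inverse squared signal. This is a simplification of the paper's three LS adjustments; the simulated straight line and 5% relative noise below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 10, 200)
beta_true = np.array([2.0, 0.7])
s = beta_true[0] + beta_true[1] * x                  # true signal
y = s * (1 + 0.05 * rng.standard_normal(x.size))     # multiplicative error

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary LS start
for _ in range(5):
    w = 1.0 / (X @ beta) ** 2                        # weights ~ 1/signal^2
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# A simple estimate of the variance of unit weight (relative-error scale):
w = 1.0 / (X @ beta) ** 2
resid = y - X @ beta
sigma2 = (w * resid**2).sum() / (x.size - 2)
print(beta.round(2), round(np.sqrt(sigma2), 3))  # sqrt(sigma2) near 0.05
```

Treating the same data with purely additive weights would misstate the precision of the large-signal observations, which is the effect the paper quantifies for LiDAR-type DEMs.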

  17. A simplified model for the estimation of energy production of PV systems

    International Nuclear Information System (INIS)

    Aste, Niccolò; Del Pero, Claudio; Leonforte, Fabrizio; Manfren, Massimiliano

    2013-01-01

    The potential of solar energy is far higher than that of any other renewable source, although several limits exist. In particular, the fundamental factors that must be analyzed by investors and policy makers are the cost-effectiveness and the energy production of PV power plants, for decisions on investment schemes and energy policy strategies respectively. Tools suitable for use even by non-specialists are therefore becoming increasingly important, and much research and development effort has been devoted to this goal in recent years. In this study, a simplified model for estimating PV annual production is presented; it provides results with a level of accuracy comparable to that of the more sophisticated simulation tools from which its fundamental data derive. The main advantage of the presented model is that it can be used by virtually anyone, without requiring specific field expertise. The inherent limits of the model are related to its empirical basis, but the methodology can be reproduced in the future with a different spectrum of data, for example to assess the effect of technological evolution on the overall performance of PV power generation or to establish performance benchmarks for a much larger variety of PV plants and technologies. - Highlights: • We analyzed the main methods for estimating the electricity production of photovoltaic systems. • We simulated the same system with two different software packages in different European locations and estimated the electric production. • We studied the main losses of a PV plant. • We provide a simplified model to estimate the electrical production of any well-designed PV system. • We validated the model against experimental data from three PV systems.
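
The simplest form of such an estimate is the generic yield formula E = P·(H/G_STC)·PR, with all losses lumped into a performance ratio. The sketch below shows this generic approach rather than the specific model calibrated in the study, and every input value is hypothetical:

```python
# Hypothetical inputs: a 3 kWp system, annual in-plane irradiation of
# 1450 kWh/m², and a lumped performance ratio of 0.78 (inverter, wiring,
# temperature and soiling losses combined). All values are assumptions.
p_nom_kw = 3.0      # rated DC power at standard test conditions (kW)
h_poa = 1450.0      # yearly plane-of-array irradiation (kWh/m²)
g_stc = 1.0         # STC irradiance (kW/m²)
pr = 0.78           # performance ratio

e_year = p_nom_kw * (h_poa / g_stc) * pr   # annual yield, kWh
print(round(e_year))  # → 3393
```

Simplified models of the kind the paper proposes refine this picture empirically, but the structure of the calculation stays the same.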

  18. Establishing the estimation model on radiation level at the ambience of 60Co radiotherapy treatment room based on NCRP REPORT No.151

    International Nuclear Information System (INIS)

    Yang Haiyou; Liu Liping; Liang Yueqin; Yu Shui

    2009-01-01

    Objective: To establish an estimation model to evaluate the radiation level at the ambience of a 60Co radiotherapy treatment room. Methods: The estimation model derives, with appropriate adjustment, from NCRP Report No. 151, 'Structural Shielding Design and Evaluation for Megavoltage X- and Gamma-Ray Radiotherapy Facilities', which presents calculation methods for the radiation level around megavoltage medical electron linear accelerator treatment rooms. Results: The application scope of the estimation model from NCRP Report No. 151 is extended to γ-ray radiotherapy facilities, yielding a new model for calculating the radiation level at the ambience of a 60Co radiotherapy treatment room. Conclusion: The estimation model has reference value for evaluating the radiation level at the ambience of a 60Co radiotherapy treatment room. (authors)
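
The NCRP Report No. 151 barrier-sizing logic that such a model adapts can be sketched as follows. Every numerical value here (design goal, distance, workload, use and occupancy factors, TVL) is an assumed example, not a value from the paper, and a single tenth-value layer is used for simplicity:

```python
import math

# Hypothetical numbers, loosely following the NCRP Report No. 151 barrier
# transmission formula B = P*d^2 / (W*U*T), adapted here to a 60Co room.
P = 2e-5      # shielding design goal at the point of interest (Sv/week)
d = 6.0       # distance from source to the point of interest (m)
W = 400.0     # workload (Gy per week at 1 m)
U = 0.25      # use factor for this barrier
T = 1.0       # occupancy factor beyond the barrier

B = P * d**2 / (W * U * T)          # required barrier transmission
n_tvl = math.log10(1.0 / B)         # number of tenth-value layers needed
tvl_concrete = 0.21                 # assumed TVL of concrete for 60Co (m)
thickness = n_tvl * tvl_concrete
print(round(thickness, 2))  # → 1.08 (metres of concrete, for these inputs)
```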

  19. State-Space Modelling of Loudspeakers using Fractional Derivatives

    DEFF Research Database (Denmark)

    King, Alexander Weider; Agerkvist, Finn T.

    2015-01-01

    This work investigates the use of fractional order derivatives in modeling moving-coil loudspeakers. A fractional order state-space solution is developed, leading the way towards incorporating nonlinearities into a fractional order system. The method is used to calculate the response of a fractional harmonic oscillator, representing the mechanical part of a loudspeaker, showing the effect of the fractional derivative and its relationship to viscoelasticity. Finally, a loudspeaker model with a fractional order viscoelastic suspension and fractional order voice coil is fit to measurement data.

  20. Analysis of Drude model using fractional derivatives without singular kernels

    Directory of Open Access Journals (Sweden)

    Jiménez Leonardo Martínez

    2017-11-01

    We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels: the Caputo-Fabrizio (CF) derivative and a fractional derivative with a stretched Mittag-Leffler kernel. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Because the fractional kernels are non-singular, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, revealing a considerable difference when γ < 0.8.

  1. Parameter and State Estimator for State Space Models

    Directory of Open Access Journals (Sweden)

    Ruifeng Ding

    2014-01-01

    This paper proposes a parameter and state estimator for canonical state space systems from measured input-output data. The key idea is to solve for the system state from the state equation and substitute it into the output equation, eliminating the state variables; the resulting equation contains only the system inputs and outputs, from which a least squares parameter identification algorithm is derived. The system states are then computed from the estimated parameters and the input-output data. Convergence analysis using the martingale convergence theorem indicates that the parameter estimates converge to their true values. Finally, an illustrative example shows that the proposed algorithm is effective.
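
Eliminating the state yields an input-output regression that ordinary least squares can solve. The Python sketch below does this for an assumed second-order system (the coefficients, noise level and simulation length are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
u = rng.standard_normal(N)                 # measured input sequence
y = np.zeros(N)
for k in range(2, N):                      # assumed 2nd-order system + noise
    y[k] = (1.2 * y[k-1] - 0.5 * y[k-2]
            + 0.8 * u[k-1] + 0.3 * u[k-2]
            + 0.01 * rng.standard_normal())

# With the state eliminated, the model is linear in lagged inputs/outputs:
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2].  Solve by LS.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(theta.round(2))   # close to the true [1.2, -0.5, 0.8, 0.3]
```

With the parameters in hand, the states of an observer-canonical realization can then be reconstructed by running the state equation forward on the same input-output data, which is the second half of the proposed estimator.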

  2. Estimating Gestational Age With Sonography: Regression-Derived Formula Versus the Fetal Biometric Average.

    Science.gov (United States)

    Cawyer, Chase R; Anderson, Sarah B; Szychowski, Jeff M; Neely, Cherry; Owen, John

    2018-03-01

    To compare the accuracy of a new regression-derived formula developed from the National Fetal Growth Studies data to the common alternative method that uses the average of the gestational ages (GAs) calculated from each fetal biometric measurement (biparietal diameter, head circumference, abdominal circumference, and femur length). This retrospective cross-sectional study identified nonanomalous singleton pregnancies that had a crown-rump length measurement plus at least 1 additional sonographic examination with complete fetal biometric measurements. With the crown-rump length used to establish the referent estimated date of delivery, each method's error at every examination (National Institute of Child Health and Human Development regression versus Hadlock average [Radiology 1984; 152:497-501]) was computed. Error, defined as the difference between the crown-rump length-derived GA and each method's predicted GA (weeks), was compared in 3 GA intervals: 1 (14 weeks-20 weeks 6 days), 2 (21 weeks-28 weeks 6 days), and 3 (≥29 weeks). In addition, the proportion of each method's examinations with errors outside prespecified (±) day ranges was compared using odds ratios. A total of 16,904 sonograms were identified. The overall and interval-specific mean errors were significantly smaller for the regression than for the average (P < .01), and the regression had significantly lower odds of an examination falling outside the specified error range in GA intervals 2 (odds ratio, 1.15; 95% confidence interval, 1.01-1.31) and 3 (odds ratio, 1.24; 95% confidence interval, 1.17-1.32). In a contemporary unselected population of women dated by a crown-rump length-derived GA, the National Institute of Child Health and Human Development regression formula produced fewer estimates outside a prespecified margin of error than the commonly used Hadlock average; the differences were most pronounced for GA estimates at 29 weeks and later.

  3. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of fractional derivative on the large deflection of the cantilever viscoelastic beam, is investigated after 10, 100, and 1000 hours. The main contribution of this paper is finite element implementation for nonlinear analysis of viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
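
A short numerical illustration of the key ingredient, evaluating a fractional derivative, is given below using the Grünwald-Letnikov approximation (a common scheme chosen here for brevity; the paper's four-parameter constitutive model and finite element formulation are not reproduced). The sketch checks the order-γ derivative of f(t) = t against its closed form t^(1−γ)/Γ(2−γ):

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f_vals, h, gam):
    """Grünwald-Letnikov approximation of the order-gam derivative of f,
    given its values f_vals on a uniform grid with spacing h."""
    n = f_vals.size
    w = np.empty(n)                       # GL binomial weights, by recursion
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1 - (gam + 1) / j)
    out = np.empty(n)
    for k in range(n):                    # convolution of weights with history
        out[k] = (w[:k + 1] @ f_vals[k::-1]) / h**gam
    return out

h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
gam = 0.5
num = gl_fractional_derivative(t, h, gam)     # f(t) = t
exact = t**(1 - gam) / gamma(2 - gam)         # known closed form
print(round(float(num[-1]), 3), round(float(exact[-1]), 3))
```

The scheme's explicit dependence on the entire history of f is the same feature that forces the paper's finite element implementation to store strain and stress histories.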

  4. Estimating species–area relationships by modeling abundance and frequency subject to incomplete sampling

    Science.gov (United States)

    Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio

    2016-01-01

    Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied

  5. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1982-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined

  6. Models for estimating the radiation hazards of uranium mines

    International Nuclear Information System (INIS)

    Wise, K.N.

    1990-01-01

    Hazards to the health of workers in uranium mines derive from the decay products of radon and from uranium and its descendants. Radon daughters in mine atmospheres are either attached to aerosols or exist as free atoms, and their physical state determines in which part of the lung the daughters deposit. The factors which influence the proportions of radon daughters attached to aerosols, their deposition in the lung and the dose received by the cells in lung tissue are discussed. The estimation of dose to tissue from inhalation or ingestion of uranium and daughters is based on a different set of models, which have been applied in recent ICRP reports. The models used to describe the deposition of particulates, their movement in the gut and their uptake by organs, which form the basis for future limits on the concentration of uranium and daughters in air or on their intake with food, are outlined. 34 refs., 12 tabs., 9 figs

  7. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
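
The mechanics of a Heckman-type correction can be sketched in two-step form on simulated data (the surveys above were analysed with fuller selection models; the data-generating values, instrument and sample size below are all invented). Step 1 fits a probit to participation; step 2 adds the inverse Mills ratio to the outcome regression on the selected sample:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5000
z = rng.standard_normal(n)                  # instrument driving participation
x = rng.standard_normal(n)                  # covariate in the outcome equation
rho = 0.6                                   # error correlation -> selection bias
u = rng.standard_normal(n)
eps = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(n)

s = (0.5 + 1.0 * z + u > 0)                 # outcome observed only when s True
y_star = 1.0 + 2.0 * x + eps
y = np.where(s, y_star, np.nan)

# Step 1: probit for the selection equation, by maximum likelihood.
Z = np.column_stack([np.ones(n), z])
def nll(g):
    p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
    return -(s * np.log(p) + (~s) * np.log(1 - p)).sum()
g_hat = minimize(nll, np.zeros(2), method="BFGS").x

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
imr = norm.pdf(Z @ g_hat) / norm.cdf(Z @ g_hat)
X2 = np.column_stack([np.ones(s.sum()), x[s], imr[s]])
beta = np.linalg.lstsq(X2, y[s], rcond=None)[0]

naive = np.linalg.lstsq(np.column_stack([np.ones(s.sum()), x[s]]),
                        y[s], rcond=None)[0]
print(beta[:2].round(2), naive.round(2))
```

In this setup selection shifts the naive intercept upward, while the Mills-ratio term recovers the true intercept and slope; the coefficient on the ratio estimates the error correlation scale.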

  8. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  9. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  10. Robust human body model injury prediction in simulated side impact crashes.

    Science.gov (United States)

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet-vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age correlated strongly with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury and high risk for thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
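A Latin hypercube design like the one described above can be sketched in a few lines of NumPy. The parameter names and ranges below are hypothetical placeholders for illustration, not the study's actual values:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """One-sample-per-stratum Latin hypercube over the given (low, high) bounds."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # For each dimension: shuffle the strata, then jitter within each stratum.
    strata = np.array([rng.permutation(n_samples) for _ in range(d)]).T
    u = (strata + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

# Hypothetical crash-parameter ranges (bullet speed km/h, impact location m,
# impact angle deg, seat position m, occupant age yr) -- illustrative only.
bounds = [(20, 60), (-0.5, 0.5), (45, 90), (0.0, 0.2), (20, 80)]
design = latin_hypercube(120, bounds)
```

Unlike plain random sampling, each parameter axis is covered by exactly one sample per stratum, which is why the design spans the crash-parameter space efficiently with only 120 simulations.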

  11. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  12. Estimating Forest fAPAR from Multispectral Landsat-8 Data Using the Invertible Forest Reflectance Model INFORM

    Directory of Open Access Journals (Sweden)

    Huili Yuan

    2015-06-01

    Full Text Available The estimation of the Fraction of Absorbed Photosynthetically Active Radiation in forests (forest fAPAR) from multi-spectral Landsat-8 data is investigated in this paper using a physically based radiative transfer model (the Invertible Forest Reflectance Model, INFORM) combined with an inversion strategy based on artificial neural nets (ANN). To derive the forest fAPAR for the Dabie mountain test site in China at 30 m spatial resolution (size approximately 3000 km²), a database of forest canopy spectral reflectances was simulated with INFORM, taking into account structural variables such as leaf area index (LAI), crown coverage and stem density, as well as leaf composition. To establish the relationship between forest fAPAR and the reflectance modeled by INFORM, a previously established logarithmic relationship between LAI and fAPAR, derived from on-site field measurements, was used. On this basis, predictive models between Landsat-8 reflectance and fAPAR were established using an artificial neural network. After calibrating INFORM for the test site, forty-two forest stands were used to validate the performance of the method. The results show that spectral signatures modeled by INFORM correspond reasonably well with the forest canopy reflectance spectra derived from Landsat data. Deviations increase with increasing angle between the surface normal of the hilly terrain and the sun incidence direction. The comparison of estimated and measured fAPAR (R² = 0.47, RMSE = 0.11) demonstrates that INFORM can be inverted using neural nets to provide acceptable estimates of forest fAPAR. The accuracy of the predictions increased significantly when excluding pixels located in very steep terrain. This demonstrates that the applied topographic correction was not sufficiently accurate and should be improved for making optimum use of radiative transfer models such as INFORM.
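The logarithmic LAI-fAPAR link used in the retrieval chain can be illustrated with a simple least-squares fit. The measurement pairs below are invented for illustration, not the Dabie-mountain field data:

```python
import numpy as np

# Hypothetical (LAI, fAPAR) field pairs -- illustrative values only.
lai   = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
fapar = np.array([0.20, 0.38, 0.55, 0.66, 0.72, 0.77, 0.80])

# Fit fAPAR = a * ln(LAI) + b by ordinary least squares on log-transformed LAI.
a, b = np.polyfit(np.log(lai), fapar, 1)

def fapar_from_lai(x):
    return a * np.log(x) + b
```

The logarithmic form captures the saturation of light absorption at high LAI, which is why it is preferred over a linear LAI-fAPAR relation in dense canopies.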

  13. Derivative pricing with liquidity risk: Theory and evidence from the credit default swap market

    NARCIS (Netherlands)

    Bongaerts, D.; de Jong, F.; Driessen, J.

    2008-01-01

    We derive a theoretical asset-pricing model for derivative contracts that allows for expected liquidity and liquidity risk, and estimate this model for the market of credit default swaps (CDS). Our model extends the LCAPM of Acharya and Pedersen (2005) to a setting with derivative instruments and

  14. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...

  15. Modeling of apparent activation energy and lifetime estimation in NAND flash memory

    International Nuclear Information System (INIS)

    Lee, Kyunghwan; Shin, Hyungcheol; Kang, Myounggon; Hwang, Yuchul

    2015-01-01

    Misunderstanding apparent activation energy (E_aa) can cause serious errors in lifetime prediction. In this paper, E_aa is investigated for sub-20 nm NAND flash memory. In the high-temperature (HT) regime, the interface trap (N_it) recovery mechanism has the greatest impact on the charge loss. However, the values of E_aa and E_a(Nit) differ widely. Moreover, the lifetime of the device cannot be estimated by the Arrhenius model due to the E_aa roll-off behavior. For the first time, we reveal the origin of the abnormal characteristics of E_aa and derive a mathematical formula for E_aa as a function of each mechanism's E_a in NAND flash memory. Using the proposed E_aa equation, an accurate lifetime for the device is estimated. (paper)
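The standard Arrhenius extrapolation that the paper shows to break down (because of E_aa roll-off) is simple to state. The activation energy and bake condition below are illustrative placeholders, not the paper's measurements:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_lifetime(t_stress, temp_stress_c, temp_use_c, e_a):
    """Extrapolate a lifetime measured at a stress temperature to a use
    temperature, assuming one thermally activated mechanism with constant E_a
    (exactly the assumption that E_aa roll-off invalidates)."""
    t1 = temp_stress_c + 273.15
    t2 = temp_use_c + 273.15
    accel = math.exp((e_a / K_B) * (1.0 / t2 - 1.0 / t1))
    return t_stress * accel

# 1000 h at an 85 degC bake, extrapolated to 55 degC with E_a = 1.1 eV (illustrative).
life_55c = arrhenius_lifetime(1000.0, 85.0, 55.0, 1.1)
```

Because the acceleration factor is exponential in E_a, even a modest error in the assumed activation energy changes the extrapolated lifetime by orders of magnitude, which is why a mechanism-aware E_aa formula matters.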

  16. Reference Evapotranspiration Retrievals from a Mesoscale Model Based Weather Variables for Soil Moisture Deficit Estimation

    Directory of Open Access Journals (Sweden)

    Prashant K. Srivastava

    2017-10-01

    Full Text Available Reference evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding hydrological processes, particularly in the context of sustainable water use efficiency across the globe. Precise estimates of ETo and SMD are required for developing appropriate forecasting systems, for hydrological modeling, and for precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo, with boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). To assess performance, Hamon's method is employed to estimate ETo using both the temperature from a meteorological station and the WRF-derived variables. After estimating ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics (RMSE, %Bias, and Nash-Sutcliffe Efficiency (NSE)) indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation using the observed ETo, in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE = 0.013–0.419) used in this study. On the other hand, when SMD is estimated from ETo based on WRF-downscaled meteorological variables, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207 to −6.088; NSE range = 0.013–0.149). Our findings also suggest that all models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982 to −3.431; r = 0.245–0.281) than the non-growing season (RMSE range = 0.011–0.12; %Bias range = 33.073–32.701; r = 0.161–0.244) for SMD estimation.
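Hamon's temperature-based ETo can be sketched as follows. There are several calibrations in the literature; this sketch follows the common 1961 form, and its constants may differ from the ones used in the study:

```python
import math

def hamon_eto(temp_c, daylight_hours):
    """Hamon reference evapotranspiration in mm/day from daily mean air
    temperature (deg C) and day length (hours). Constants per the common
    Hamon (1961) formulation; other calibrations exist."""
    # Saturation vapor pressure in mb (Tetens-type formula).
    e_sat = 6.108 * math.exp(17.26939 * temp_c / (temp_c + 237.3))
    # Saturated absolute humidity in g/m^3.
    rho_sat = 216.7 * e_sat / (temp_c + 273.3)
    ld = daylight_hours / 12.0  # day length in units of 12 h
    return 0.1651 * ld * rho_sat
```

Because the method needs only temperature and day length, it pairs naturally with WRF-downscaled surface temperature, at the cost of ignoring wind, humidity and radiation terms that Penman-type methods include.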

  17. The relative pose estimation of aircraft based on contour model

    Science.gov (United States)

    Fu, Tai; Sun, Xiangyi

    2017-02-01

    This paper proposes a relative pose estimation approach based on an object contour model. The first step is to obtain two-dimensional (2D) projections of the three-dimensional (3D) model-based target, which are divided into 40 forms by clustering and LDA analysis. We then extract the target contour in each image and compute its Pseudo-Zernike Moments (PZM), thereby constructing a model library in an offline mode. Next, using the PZM as reference, we select from the model library the projection contour that most resembles the target silhouette in the current image; similarity transformation parameters are then generated as the shape context is applied to match the silhouette sampling locations, from which the identification parameters of the target can be derived. The identification parameters are converted to relative pose parameters, which serve as initial values for an iterative refinement algorithm; this is valid because they lie in the neighborhood of the actual pose. Finally, Distance Image Iterative Least Squares (DI-ILS) is employed to acquire the final relative pose parameters.

  18. [Monograph for N-Methyl-pyrrolidone (NMP) and human biomonitoring values for the metabolites 5-Hydroxy-NMP and 2-Hydroxy-N-methylsuccinimide].

    Science.gov (United States)

    2015-10-01

    1-Methyl-2-pyrrolidone (NMP) is used as a solvent in many technical applications. The general population may be exposed to NMP through its use as an ingredient in paint and graffiti removers, and indoors also from its use in paints and carpeting. Because of its developmental toxicity, the use of NMP in consumer products is regulated in the EU. The German HBM Commission considers the developmental effects, accompanied by weak maternal toxicity in animal experiments, as the critical effects. Based on these effects, HBM-I values of 10 mg/l urine for children and 15 mg/l for adults were derived for the metabolites 5-Hydroxy-NMP and 2-Hydroxy-N-methylsuccinimide. HBM-II values were set to 30 mg/l urine for children and 50 mg/l for adults. Because the structural analogue 1-ethyl-2-pyrrolidone (NEP) has similar effects, possible mixed exposure to both compounds has to be taken into account when evaluating the total burden.

  19. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    Science.gov (United States)

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist
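Under the model's core assumptions (lognormal spread of initial loads across the shellfish population, first-order decay during depuration), a minimum depuration time can be sketched as follows. The parameter values are invented placeholders, not the U.K. harvest-site estimates:

```python
import math

def min_depuration_time(mu_log10, sigma_log10, decay_per_hour, limit, z=1.645):
    """Hours until the upper (z-quantile) tail of a lognormal pathogen-load
    distribution decays below `limit`, assuming first-order elimination.
    mu/sigma describe log10 load; `limit` is in the same load units."""
    upper_log10 = mu_log10 + z * sigma_log10        # high-end initial load
    excess = upper_log10 - math.log10(limit)        # decades above the limit
    if excess <= 0:
        return 0.0
    # First-order decay at rate k removes k/ln(10) decades per hour.
    return excess * math.log(10.0) / decay_per_hour

# Illustrative only: mean load 10^3, sd 0.5 decades, 5%/h decay, limit 200.
t_min = min_depuration_time(3.0, 0.5, 0.05, 200.0)
```

Targeting an upper quantile rather than the mean is what implements the paper's 'worst case scenario' stance on pathogen variability: the slowest-to-clear fraction of the population sets the minimum time.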

  20. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal-plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test bench specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test-bench movements.
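The cubature step at the heart of such a filter generates 2n deterministic evaluation points. This sketch shows the generic third-degree spherical-radial rule (the square-root variant the paper uses propagates a Cholesky factor throughout instead of re-factorizing each step):

```python
import numpy as np

def cubature_points(mean, cov):
    """2n third-degree spherical-radial cubature points for an n-dimensional
    Gaussian; their sample mean and covariance match (mean, cov) exactly."""
    n = len(mean)
    s = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # n x 2n directions
    return mean[:, None] + s @ xi                          # one point per column
```

Propagating these points through the nonlinear, nonsmooth leg dynamics and re-averaging avoids the Jacobians an extended Kalman filter would need, which is precisely why a cubature filter suits this model.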

  1. Turbulence modeling with fractional derivatives: Derivation from first principles and initial results

    Science.gov (United States)

    Epps, Brenden; Cushman-Roisin, Benoit

    2017-11-01

    Fluid turbulence is an outstanding unsolved problem in classical physics, despite 120+ years of sustained effort. Given this history, we assert that a new mathematical framework is needed to make a transformative breakthrough. This talk offers one such framework, based upon kinetic theory tied to the statistics of turbulent transport. Starting from the Boltzmann equation and 'Lévy α-stable distributions', we derive a turbulence model that expresses the turbulent stresses in the form of a fractional derivative, where the fractional order is tied to the transport behavior of the flow. Initial results are presented herein for the cases of Couette-Poiseuille flow and 2D boundary layers. Among other results, our model is able to reproduce the logarithmic Law of the Wall in shear turbulence.
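For reference, the Riemann-Liouville fractional derivative on which such closures are typically built reads (generic definition for order 0 < α < 1, not the authors' specific stress model):

```latex
\frac{\partial^{\alpha} u}{\partial y^{\alpha}}
  = \frac{1}{\Gamma(1-\alpha)} \,\frac{\partial}{\partial y}
    \int_{0}^{y} \frac{u(s)}{(y-s)^{\alpha}}\, ds ,
  \qquad 0 < \alpha < 1 .
```

The power-law kernel makes the operator nonlocal: the stress at height y depends on the velocity over the whole interval, mirroring the long-range steps of a Lévy α-stable transport process, with α = 1 recovering the local (gradient-diffusion) limit.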

  2. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    International Nuclear Information System (INIS)

    Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun; Joy, Minu

    2015-01-01

    The Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher-order derivatives of the Hubble parameter source a constant difference in the spectral index for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field m²_eff/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m²_eff/H² at 5.7σ and 8.1σ for r < 0.1 and r < 0.01, respectively. With the BICEP-2 likelihood, m²_eff/H² = −0.0237 ± 0.0135, which is consistent with zero

  3. Vulnerable Derivatives and Good Deal Bounds: A Structural Model

    DEFF Research Database (Denmark)

    Murgoci, Agatha

    2013-01-01

    We price vulnerable derivatives -- i.e. derivatives where the counterparty may default. These are basically the derivatives traded on the over-the-counter (OTC) markets. Default is modeled in a structural framework. The technique employed for pricing is good deal bounds (GDBs). The method imposes...

  4. Global distribution of urban parameters derived from high-resolution global datasets for weather modelling

    Science.gov (United States)

    Kawano, N.; Varquez, A. C. G.; Dong, Y.; Kanda, M.

    2016-12-01

    Numerical models such as the Weather Research and Forecasting model coupled with a single-layer Urban Canopy Model (WRF-UCM) are powerful tools for investigating the urban heat island. Urban parameters such as average building height (Have), plan area index (λp) and frontal area index (λf) are necessary inputs for the model. In general, these parameters are assumed uniform in WRF-UCM, but this leads to an unrealistic urban representation. Distributed urban parameters can also be incorporated into WRF-UCM to capture detailed urban effects. The problem is that distributed building information is not readily available for most megacities, especially in developing countries. Furthermore, acquiring real building parameters often requires a huge amount of time and money. In this study, we investigated the potential of using globally available satellite-captured datasets for the estimation of the parameters Have, λp and λf. The global datasets comprised a high-spatial-resolution population dataset (LandScan by Oak Ridge National Laboratory), nighttime lights (NOAA), and vegetation fraction (NASA). True samples of Have, λp and λf were acquired from actual building footprints from satellite images and 3D building databases of Tokyo, New York, Paris, Melbourne, Istanbul, Jakarta and so on. Regression equations were then derived from the block-averaging of spatial pairs of real parameters and global datasets. Results show that two regression curves to estimate Have and λf from the combination of population and nightlight are necessary, depending on the city's level of development. An index which can be used to decide which equation to use for a city is the Gross Domestic Product (GDP). On the other hand, λp has less dependence on GDP but indicated a negative relationship to vegetation fraction. Finally, a simplified but precise approximation of urban parameters through readily available, high-resolution global datasets and our derived regressions can be utilized to estimate a
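The kind of regression described can be sketched with ordinary least squares. The block-averaged training pairs below are invented placeholders, not the actual footprint-derived samples from Tokyo, New York or the other cities:

```python
import numpy as np

# Hypothetical block-averaged pairs: (log10 population density, nightlight DN)
# against average building height H_ave in metres -- illustrative only.
X = np.array([[2.0, 10.0], [2.5, 20.0], [3.0, 35.0], [3.5, 50.0], [4.0, 63.0]])
y = np.array([5.0, 8.0, 14.0, 22.0, 30.0])

# Least-squares fit of H_ave = c0 * logpop + c1 * nightlight + c2.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def h_ave(logpop, nightlight):
    return coef[0] * logpop + coef[1] * nightlight + coef[2]
```

In the study's scheme one would fit two such curves and select between them by the city's GDP; here a single fit illustrates the mapping from globally available predictors to a building parameter.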

  5. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    Science.gov (United States)

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a widespread approach to analyzing dynamical systems and understanding underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
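The trick of solving the steady-state equations for kinetic parameters (rather than for the states) can be seen on a toy network. The inflow and measured levels below are invented for illustration:

```python
# Toy network: inflow s -> A --k1--> B --k2--> degradation.
# Steady state: s - k1*A = 0 and k1*A - k2*B = 0.  Solving for the *states*
# is nonlinear in general; solving for the *kinetic parameters* given
# measured steady-state levels (A0, B0) is linear and always explicit:
s, A0, B0 = 2.0, 4.0, 5.0      # hypothetical inflow and measured levels
k1 = s / A0                    # from the first steady-state equation
k2 = k1 * A0 / B0              # from the second

# Sanity check by forward Euler: starting at (A0, B0) the system must not move.
A, B, dt = A0, B0, 0.01
for _ in range(1000):
    A += dt * (s - k1 * A)
    B += dt * (k1 * A - k2 * B)
```

Note that k1 and k2 come out positive here only because the measured levels are consistent with the network topology; the paper's graph-theoretic algorithm addresses exactly the cases where a naive solution of this kind would turn negative.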

  6. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    ...estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...
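The first (filtering) step can be sketched as a rolling realized-variance estimate of spot volatility. Kernel choice and bandwidth are the paper's real subject, so the flat window below is only a placeholder:

```python
import numpy as np

def spot_volatility(log_prices, dt, window):
    """Rolling realized-variance estimate of spot volatility from
    high-frequency log prices (flat kernel over `window` returns)."""
    r = np.diff(log_prices)
    rv = np.convolve(r ** 2, np.ones(window), mode="valid") / (window * dt)
    return np.sqrt(rv)

# Illustrative check on simulated constant-volatility data.
rng = np.random.default_rng(1)
sigma, dt = 0.2, 1.0 / (252 * 78)          # 5-minute bars, annualized units
prices = np.cumsum(rng.normal(0.0, sigma * np.sqrt(dt), 20000))
vol_path = spot_volatility(prices, dt, window=500)
```

In the second step, this estimated path would stand in for the latent volatility when estimating the stochastic volatility model itself; the extra bias and variance that substitution introduces is what the paper shows vanishes asymptotically.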

  7. High-resolution model for estimating the economic and policy implications of agricultural soil salinization in California

    Science.gov (United States)

    Welle, Paul D.; Mauter, Meagan S.

    2017-09-01

    This work introduces a generalizable approach for estimating field-scale agricultural yield losses due to soil salinization. When integrated with regional data on crop yields and prices, this model provides high-resolution estimates of revenue losses over large agricultural regions. The methods account for the uncertainty inherent in model inputs derived from satellites, experimental field data, and interpreted model results. We apply this method to estimate the effect of soil salinity on agricultural outputs in California, performing the analysis with both high-resolution (i.e. field-scale) and low-resolution (i.e. county-scale) data sources to highlight the importance of spatial resolution in agricultural analysis. We estimate that soil salinity reduced agricultural revenues by $3.7 billion ($1.7-7.0 billion) in 2014, amounting to 8.0 million tons of lost production relative to soil salinities below the crop-specific thresholds. When using low-resolution data sources, we find that the costs of salinization are underestimated by a factor of three. These results highlight the need for high-resolution data in agro-environmental assessment as well as the challenges associated with their integration.
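Field-scale yield loss from salinity is commonly modeled with a piecewise-linear (Maas-Hoffman-type) crop response; the crop-specific thresholds mentioned above fit this form. The threshold and slope values below are placeholders, not the study's calibration:

```python
def relative_yield(ec_e, threshold, slope):
    """Piecewise-linear salinity response: fraction of potential yield as a
    function of soil-saturation-extract salinity EC_e (dS/m). Full yield
    below the crop threshold, linear decline (slope = fraction lost per
    additional dS/m) above it."""
    return max(0.0, min(1.0, 1.0 - slope * (ec_e - threshold)))

# Hypothetical crop: threshold 2 dS/m, 10% yield loss per additional dS/m.
loss_fraction = 1.0 - relative_yield(5.0, 2.0, 0.10)
```

Multiplying such a loss fraction by per-field potential yield and crop price, then summing over fields, is the basic aggregation that turns a salinity map into a regional revenue-loss estimate.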

  8. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...

  9. [Health promotion. Instrument development for the application of the theory of planned behavior].

    Science.gov (United States)

    Lee, Y O

    1993-01-01

    The purpose of this article is to describe the operationalization of the Theory of Planned Behavior (TPB). The quest to understand determinants of health behaviors has intensified as evidence accumulates concerning the impact of personal behavior on health. The majority of theory-based research has used the Health Belief Model (HBM). The HBM components have had limited success in explaining health-related behaviors. There are several advantages of the TPB over the HBM. The TPB is an expansion of the Theory of Reasoned Action (TRA) with the addition of the construct of perceived behavioral control. The revised model has been shown to yield greater explanatory power than the original TRA for goal-directed behaviors. The process of TPB instrument development is described, using examples from a study of smoking cessation behavior in military smokers. This is followed by a discussion of reliability and validity issues in operationalizing the TPB. The TPB is a useful model for understanding and predicting health-related behaviors when carefully operationalized. The model holds promise in the development of prescriptive nursing approaches.

  10. Modeling ramp-hold indentation measurements based on Kelvin-Voigt fractional derivative model

    Science.gov (United States)

    Zhang, Hongmei; zhe Zhang, Qing; Ruan, Litao; Duan, Junbo; Wan, Mingxi; Insana, Michael F.

    2018-03-01

    Interpretation of experimental data from micro- and nano-scale indentation testing is highly dependent on the constitutive model selected to relate measurements to mechanical properties. The Kelvin-Voigt fractional derivative (KVFD) model offers a compact set of viscoelastic features appropriate for characterizing soft biological materials. This paper provides a set of KVFD solutions for converting indentation testing data acquired for different geometries and scales into viscoelastic properties of soft materials. These solutions, which are mostly in closed form, apply to ramp-hold relaxation, load-unload and ramp-load creep-testing protocols. We report on applications of these model solutions to macro- and nano-indentation testing of hydrogels, gastric cancer cells and ex vivo breast tissue samples using an atomic force microscope (AFM). We also applied KVFD models to clinical ultrasonic breast data acquired using a compression plate as required for elasticity imaging. Together the results show that KVFD models fit a broad range of experimental data with a correlation coefficient typically R² > 0.99. For hydrogel samples, estimates of KVFD model parameters from test data using spherical indentation versus plate compression, as well as ramp relaxation versus load-unload compression, all agree within one standard deviation. Results from measurements made using macro- and nano-scale indentation agree in trend. For gastric cell and ex vivo breast tissue measurements, KVFD moduli are, respectively, 1/3-1/2 and 1/6 of the elasticity modulus found from the Sneddon model. In vivo breast tissue measurements yield model parameters consistent with literature results. The consistency of results found over a broad range of experimental parameters suggests the KVFD model is a reliable tool for exploring intrinsic features of cell/tissue microenvironments.
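The KVFD constitutive law referred to above combines a spring with a fractional (springpot) element. In one common form (the notation may differ from the paper's):

```latex
\sigma(t) = E \left[ \varepsilon(t)
  + \tau^{\alpha}\, \frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}} \right],
  \qquad 0 \le \alpha \le 1 ,
```

where E is the relaxed elastic modulus, τ a characteristic time, and the fractional order α sets the breadth of the relaxation spectrum: α = 0 recovers a pure spring, α = 1 the classical Kelvin-Voigt solid. These three parameters are what the ramp-hold, load-unload and creep solutions extract from indentation data.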

  11. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all ... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series...

  12. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2 and 3 s sampling intervals) and tube currents (140, 70 and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneity model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam-hardening correction, the slope method consistently underestimated flow by on average 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This

  13. Housing land transaction data and structural econometric estimation of preference parameters for urban economic simulation models.

    Science.gov (United States)

    Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles

    2015-12-01

    This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade off accessibility and local green space amenities.

  14. Housing land transaction data and structural econometric estimation of preference parameters for urban economic simulation models

    Science.gov (United States)

    Caruso, Geoffrey; Cavailhès, Jean; Peeters, Dominique; Thomas, Isabelle; Frankhauser, Pierre; Vuidel, Gilles

    2015-01-01

    This paper describes a dataset of 6284 land transaction prices and plot surfaces in 3 medium-sized cities in France (Besançon, Dijon and Brest). The dataset includes road accessibility as obtained from a minimization algorithm, and the amount of green space available to households in the neighborhood of the transactions, as evaluated from a land cover dataset. Further to the data presentation, the paper describes how these variables can be used to estimate the non-observable parameters of a residential choice function explicitly derived from a microeconomic model. The estimates are used by Caruso et al. (2015) to run a calibrated microeconomic urban growth simulation model where households are assumed to trade off accessibility and local green space amenities. PMID:26958606

  15. Hamiltonian derivation of the nonhydrostatic pressure-coordinate model

    Science.gov (United States)

    Salmon, Rick; Smith, Leslie M.

    1994-07-01

    In 1989, the Miller-Pearce (MP) model for nonhydrostatic fluid motion governed by equations written in pressure coordinates was extended by removing the prescribed reference temperature, T_s(p), while retaining the conservation laws and other desirable properties. It was speculated that this extension of the MP model had a Hamiltonian structure and that a slick derivation of the Ertel property could be constructed if the relevant Hamiltonian were known. In this note, the extended equations are derived using Hamilton's principle. The potential vorticity law arises from the usual particle-relabeling symmetry of the Lagrangian, and even the absence of sound waves is anticipated from the fact that the pressure inside the free energy G(p, theta) in the derived equation is hydrostatic and thus G is insensitive to local pressure fluctuations. The model extension is analogous to the semigeostrophic equations for nearly geostrophic flow, which do not incorporate a prescribed reference state, while the earlier MP model is analogous to the quasigeostrophic equations, which become highly inaccurate when the flow wanders from a prescribed state with nearly flat isothermal surfaces.

  16. A-Train Aerosol Observations Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-Sky Estimates

    Science.gov (United States)

    Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.

    2014-01-01

    We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  17. Parameter study for child injury mitigation in near-side impacts through FE simulations.

    Science.gov (United States)

    Andersson, Marianne; Pipkorn, Bengt; Lövsund, Per

    2012-01-01

    The objective of this study is to investigate the effects of crash-related car parameters on head and chest injury measures for 3- and 12-year-old children in near-side impacts. The evaluation was made using a model of a complete passenger car that was impacted laterally by a barrier. The car model was validated in 2 crash conditions: the Insurance Institute for Highway Safety (IIHS) and the US New Car Assessment Program (NCAP) side impact tests. The Small Side Impact Dummy (SID-IIs) and the human body model 3 (HBM3) (Total HUman Model for Safety [THUMS] 3-year-old) finite element models were used for the parametric investigation (HBM3 on a booster). The car parameters were as follows: vehicle mass, side impact structure stiffness, a head air bag, a thorax-pelvis air bag, and a seat belt with pretensioner. The studied dependent variables were as follows: resultant head linear acceleration, resultant head rotational acceleration, chest viscous criterion, rib deflection, and relative velocity at head impact. The chest measurements were only considered for the SID-IIs. The head air bag had the greatest effect on the head measurements for both of the occupant models. On average, it reduced the peak head linear acceleration by 54 g for the HBM3 and 78 g for the SID-IIs. The seat belt had the second greatest effect on the head measurements; the peak head linear accelerations were reduced on average by 39 g (HBM3) and 44 g (SID-IIs). The high stiffness side structure increased the SID-IIs' head acceleration, whereas it had marginal effect on the HBM3. The vehicle mass had a marginal effect on SID-IIs' head accelerations, whereas the lower vehicle mass caused 18 g higher head acceleration for HBM3 and the greatest rotational acceleration. The thorax-pelvis air bag, vehicle mass, and seat belt pretensioner affected the chest measurements the most. The presence of a thorax-pelvis air bag, high vehicle mass, and a seat belt pretensioner all reduced the chest viscous criterion.

  18. A guide for estimating dynamic panel models: the macroeconomics models specifiness

    International Nuclear Information System (INIS)

    Coletta, Gaetano

    2005-10-01

    The aim of this paper is to review estimators for dynamic panel data models, a topic in which interest has grown recently. As a consequence of this recent interest, different estimation techniques have been proposed in the last few years and, given the recent development of the subject, there is still a lack of a comprehensive guide for panel data applications, and for macroeconomic panel data models in particular. Finally, we also provide some indications about the Stata software commands to estimate dynamic panel data models with the techniques illustrated in the paper.

  19. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    Science.gov (United States)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameters estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of the ocean biology. Furthermore, various biological parameters remain poorly known, and hence wrong specifications of such parameters can lead to large model errors. The standard joint state-parameters augmentation technique using the ensemble Kalman filter (Stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often carried out by separating the update of the state and the parameters using the so-called Dual EnKF. The dual filter is computationally more expensive than the Joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step and later a model propagation step is performed. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile data (down to 2000 m depth) and surface partial CO2 measurements from the Mike weather station (66°N, 2°E) to estimate

  20. The Effectiveness of an Educational Intervention Based on the Health Belief Model in the Empowerment of Stockbreeders Against High-Risk Behaviors Associated with Brucellosis

    Directory of Open Access Journals (Sweden)

    Vahid Babaei

    2014-12-01

    Full Text Available Background and Objectives: Brucellosis is among the most common zoonotic diseases. Educational programs can be effective in the prevention of this disease in humans. The present study was conducted to assess the effectiveness of an educational intervention based on the Health Belief Model (HBM) in the empowerment of stockbreeders against high-risk behaviors associated with brucellosis in Charuymaq county, East Azerbaijan. Materials and Methods: The present quasi-experimental study was conducted in 2014 in Charuymaq county. A total of 200 people selected through stratified random sampling participated in the study. Data were collected using a researcher-designed questionnaire including items on participants' demographic information, knowledge and the HBM constructs. Training sessions were then designed and held for the intervention group. Three months after the intervention was held, data were collected from both groups and then analyzed using descriptive statistics and the Mann-Whitney U and Wilcoxon tests. Results: The mean scores obtained for knowledge, the HBM constructs (perceived susceptibility, severity, barriers, benefits and self-efficacy) and brucellosis preventive behaviors showed no significant differences between the two groups before the intervention; however, after the educational intervention, significant differences were observed between the mean scores obtained by the intervention group and the control group (P<0.05). Conclusion: The cooperation of charismatic individuals with intervention programs and the use of education theories can be more effective in modifying high-risk behaviors; these programs should therefore be widely implemented across the country.

  1. Modeling neurodegenerative diseases with patient-derived induced pluripotent cells

    DEFF Research Database (Denmark)

    Poon, Anna; Zhang, Yu; Chandrasekaran, Abinaya

    2017-01-01

    patient-specific induced pluripotent stem cells (iPSCs) and isogenic controls generated using CRISPR-Cas9 mediated genome editing. The iPSCs are self-renewable and capable of being differentiated into the cell types affected by the diseases. These in vitro models based on patient-derived iPSCs provide...... the possibilities of generating three-dimensional (3D) models using the iPSCs-derived cells and compare their advantages and disadvantages to conventional two-dimensional (2D) models....

  2. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
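
    The simulate-then-estimate loop described above (a truth model, additive Gaussian sensor noise, then a batch least-squares fit) can be sketched with a toy linear model standing in for the shear-diffusion transport model. All names and values below are hypothetical illustrations, not the paper's model or data:

```python
import random

def simulate(params, xs, sigma, seed=0):
    # "Truth" model output plus Gaussian noise, mimicking remote-sensed data
    a, b = params
    rng = random.Random(seed)
    return [a * x + b + rng.gauss(0.0, sigma) for x in xs]

def least_squares(xs, ys):
    # Closed-form batch least-squares fit of a two-parameter linear model
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

xs = [0.5 * i for i in range(40)]                  # sensor reading locations
ys = simulate((2.0, -1.0), xs, sigma=0.1)          # simulated noisy data
a_hat, b_hat = least_squares(xs, ys)               # recovered parameters
```

    Repeating the fit while varying the noise level, the number of readings, or their locations gives exactly the kind of accuracy-versus-sensor-design trade-off the abstract describes.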

  3. Deriving simulators for hybrid Chi models

    NARCIS (Netherlands)

    Beek, van D.A.; Man, K.L.; Reniers, M.A.; Rooda, J.E.; Schiffelers, R.R.H.

    2006-01-01

    The hybrid Chi language is a formalism for modeling, simulation and verification of hybrid systems. The formal semantics of hybrid Chi allows the definition of provably correct implementations for simulation, verification and real-time control. This paper discusses the principles of deriving an

  4. A One Line Derivation of EGARCH

    Directory of Open Access Journals (Sweden)

    Michael McAleer

    2014-06-01

    Full Text Available One of the most popular univariate asymmetric conditional volatility models is the exponential GARCH (or EGARCH) specification. In addition to asymmetry, which captures the different effects on conditional volatility of positive and negative effects of equal magnitude, EGARCH can also accommodate leverage, which is the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator of the EGARCH parameters are not available under general conditions, but rather only for special cases under highly restrictive and unverifiable conditions. It is often argued heuristically that the reason for the lack of general statistical properties arises from the presence in the model of an absolute value of a function of the parameters, which does not permit analytical derivatives, and hence does not permit (quasi-) maximum likelihood estimation. It is shown in this paper for the non-leverage case that: (1) the EGARCH model can be derived from a random coefficient complex nonlinear moving average (RCCNMA) process; and (2) the reason for the lack of statistical properties of the estimators of EGARCH under general conditions is that the stationarity and invertibility conditions for the RCCNMA process are not known.
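
    The EGARCH(1,1) recursion the abstract refers to is compact enough to write down directly. This sketch shows the asymmetry term (absolute value of the standardized shock) and the leverage term; the parameter values are illustrative, not estimates from any dataset:

```python
import math

E_ABS_Z = math.sqrt(2.0 / math.pi)  # E|z| for standard normal shocks

def egarch_variance(returns, omega, alpha, gamma, beta, h0=1.0):
    # EGARCH(1,1) recursion on the log-variance:
    #   log h_t = omega + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1} + beta*log h_{t-1}
    # gamma < 0 gives leverage: negative shocks raise next-period
    # variance more than positive shocks of equal magnitude.
    h = [h0]
    for r in returns[:-1]:
        z = r / math.sqrt(h[-1])  # standardized shock
        log_h = omega + alpha * (abs(z) - E_ABS_Z) + gamma * z + beta * math.log(h[-1])
        h.append(math.exp(log_h))
    return h

# Illustrative parameter values only
h = egarch_variance([0.0, -1.0, 1.0, 0.0], omega=0.0, alpha=0.1, gamma=-0.1, beta=0.9)
```

    Because the recursion is on log h_t, positivity of the conditional variance is automatic; the absolute-value term is the non-smooth piece that complicates the estimator's statistical properties discussed above.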

  5. Remarks on the microscopic derivation of the collective model

    International Nuclear Information System (INIS)

    Toyoda, T.; Wildermuth, K.

    1984-01-01

    The rotational part of the phenomenological collective model of Bohr and Mottelson and others is derived microscopically, starting with the Schrodinger equation written in projection form and introducing a new set of 'relative Euler angles'. In order to derive the local Schrodinger equation of the collective model, it is assumed that the intrinsic wave functions give strong peaking properties to the overlapping kernels

  6. Hamiltonian derivation of a gyrofluid model for collisionless magnetic reconnection

    International Nuclear Information System (INIS)

    Tassi, E

    2014-01-01

    We consider a simple electromagnetic gyrokinetic model for collisionless plasmas and show that it possesses a Hamiltonian structure. Subsequently, from this model we derive a two-moment gyrofluid model by means of a procedure which guarantees that the resulting gyrofluid model is also Hamiltonian. The first step in the derivation consists of imposing a generic fluid closure in the Poisson bracket of the gyrokinetic model, after expressing such bracket in terms of the gyrofluid moments. The constraint of the Jacobi identity, which every Poisson bracket has to satisfy, then selects what closures can lead to a Hamiltonian gyrofluid system. For the case at hand, it turns out that the only closures (not involving integro-differential operators or an explicit dependence on the spatial coordinates) that lead to a valid Poisson bracket are those for which the second order parallel moment, independently for each species, is proportional to the zero order moment. In particular, if one chooses an isothermal closure based on the equilibrium temperatures and derives accordingly the Hamiltonian of the system from the Hamiltonian of the parent gyrokinetic model, one recovers a known Hamiltonian gyrofluid model for collisionless reconnection. The proposed procedure, in addition to yielding a gyrofluid model which automatically conserves the total energy, also provides, through the resulting Poisson bracket, a way to derive further conservation laws of the gyrofluid model, associated with the so-called Casimir invariants. We show that a relation exists between Casimir invariants of the gyrofluid model and those of the gyrokinetic parent model. The application of such a Hamiltonian derivation procedure to this two-moment gyrofluid model is a first step toward its application to more realistic, higher-order fluid or gyrofluid models for tokamaks. It also extends to the electromagnetic gyrokinetic case recent applications of the same procedure to Vlasov and drift-kinetic systems

  7. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears a strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
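
    A minimal sketch of the stochastic Joint-EnKF analysis step that both this record and record 19 build on, for a scalar state and a scalar parameter where only the state is observed: the parameter is corrected through its sampled cross-covariance with the observed state. Ensemble size, prior, and observation values are illustrative assumptions, not from the paper:

```python
import random

def joint_enkf_update(ensemble, y_obs, obs_var, rng):
    # Stochastic (perturbed-observation) Joint-EnKF analysis step.
    # Each member is a (state, parameter) pair; only the state is observed.
    n = len(ensemble)
    xs = [m[0] for m in ensemble]
    ps = [m[1] for m in ensemble]
    mean_x = sum(xs) / n
    mean_p = sum(ps) / n
    var_x = sum((x - mean_x) ** 2 for x in xs) / (n - 1)
    cov_px = sum((p - mean_p) * (x - mean_x) for p, x in zip(ps, xs)) / (n - 1)
    k_x = var_x / (var_x + obs_var)   # Kalman gain for the observed state
    k_p = cov_px / (var_x + obs_var)  # parameter gain via cross-covariance
    updated = []
    for x, p in ensemble:
        innov = (y_obs + rng.gauss(0.0, obs_var ** 0.5)) - x
        updated.append((x + k_x * innov, p + k_p * innov))
    return updated

rng = random.Random(1)
prior = []
for _ in range(500):
    x = rng.gauss(0.0, 1.0)                                  # prior state draw
    prior.append((x, 2.0 + 0.8 * x + rng.gauss(0.0, 0.2)))   # correlated parameter
post = joint_enkf_update(prior, y_obs=1.5, obs_var=0.25, rng=rng)
```

    Observing the state alone still shifts the parameter ensemble because of the sampled cross-covariance; the Dual-EnKF and the one-step-ahead smoothing scheme described above differ in when this update happens relative to the model propagation, not in the gain formula itself.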

  8. Review of the state of the art of human biomonitoring for chemical substances and its application to human exposure assessment for food safety

    DEFF Research Database (Denmark)

    Choi, Judy; Mørck, Thit Aarøe; Polcher, Alexandra

    2015-01-01

    Human biomonitoring (HBM) measures the levels of substances in body fluids and tissues. Many countries have conducted HBM studies, yet little is known about its application towards chemical risk assessment, particularly in relation to food safety. Therefore a literature search was performed...... in several databases and conference proceedings for 2002 – 2014. Definitions of HBM and biomarkers, HBM techniques and requirements, and the possible application to the different steps of risk assessment were described. The usefulness of HBM for exposure assessment of chemical substances from food source...... safety areas (namely exposure assessment), and for the implementation of a systematic PMM approach. But further work needs to be done to improve usability. Major deficits are the lack of HBM guidance values on a considerable number of substance groups, for which health based guidance values (HBGVs) have...

  9. Derivation and analysis of a high-resolution estimate of global permafrost zonation

    Directory of Open Access Journals (Sweden)

    S. Gruber

    2012-02-01

    Full Text Available Permafrost underlies much of Earth's surface and interacts with climate, ecosystems and human systems. It is a complex phenomenon controlled by climate and (sub)surface properties and reacts to change with variable delay. Heterogeneity and sparse data challenge the modeling of its spatial distribution. Currently, there is no data set to adequately inform global studies of permafrost. The available data set for the Northern Hemisphere is frequently used for model evaluation, but its quality and consistency are difficult to assess. Here, a global model of permafrost extent and a dataset of permafrost zonation are presented and discussed, extending earlier studies by including the Southern Hemisphere, by consistent data and methods, and by attention to uncertainty and scaling. Established relationships between air temperature and the occurrence of permafrost are re-formulated into a model that is parametrized using published estimates. It is run with high-resolution (<1 km) global elevation data and air temperatures based on the NCAR-NCEP reanalysis and CRU TS 2.0. The resulting data provide more spatial detail and a consistent extrapolation to remote regions, while aggregated values resemble previous studies. The estimated uncertainties affect regional patterns and aggregate numbers, and provide interesting insight. The permafrost area, i.e. the actual surface area underlain by permafrost, north of 60° S is estimated to be 13–18 × 10⁶ km², or 9–14% of the exposed land surface. The global permafrost area including Antarctic and sub-sea permafrost is estimated to be 16–21 × 10⁶ km². The global permafrost region, i.e. the exposed land surface below which some permafrost can be expected, is estimated to be 22 ± 3 × 10⁶ km². A large proportion of this exhibits considerable topography and spatially-discontinuous permafrost, underscoring the importance of attention to scaling issues

  10. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    Full Text Available We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements.
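
    The cubature Kalman filter mentioned above rests on a third-degree spherical-radial cubature rule: 2n equally weighted points placed at the mean plus/minus √n times the columns of a covariance square root. The sketch below shows only this point generation and the predicted-mean step, not the full square-root filter; all values are illustrative:

```python
import math

def cubature_points(mean, sqrt_cov):
    # Third-degree spherical-radial cubature rule: 2n points at
    # mean ± sqrt(n) * (i-th column of the covariance square root)
    n = len(mean)
    pts = []
    for i in range(n):
        col = [row[i] for row in sqrt_cov]
        for sign in (1.0, -1.0):
            pts.append([m + sign * math.sqrt(n) * c for m, c in zip(mean, col)])
    return pts

def propagate_mean(points, f):
    # CKF predicted mean: equally weighted average of the transformed points
    ys = [f(p) for p in points]
    dim = len(ys[0])
    return [sum(y[j] for y in ys) / len(ys) for j in range(dim)]

pts = cubature_points([1.0, 2.0], [[0.5, 0.0], [0.0, 0.5]])
mean = propagate_mean(pts, lambda x: [x[0] + x[1], x[0] * 1.0])
```

    For a linear transformation the propagated mean is exact; the value of the rule is that the same deterministic points handle the nonlinear, nonsmooth lower-leg dynamics without analytical Jacobians.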

  11. Model-Based Estimation of Ankle Joint Stiffness

    Science.gov (United States)

    Misgeld, Berno J. E.; Zhang, Tony; Lüken, Markus J.; Leonhardt, Steffen

    2017-01-01

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness during experimental test bench movements. PMID:28353683

  12. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the sensors most frequently used in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  13. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide for the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  14. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
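
    The mixture structure at the heart of the cure rate model above is compact enough to state directly: population survival is the cure probability plus the non-cured fraction times a latency survival function, so the curve plateaus at the cure fraction. The exponential latency below is purely for illustration; the paper's latency component is nonparametric with proportional-hazards covariate effects:

```python
import math

def mixture_survival(t, cure_prob, hazard):
    # Population survival in a mixture cure model:
    #   S(t) = pi + (1 - pi) * S_u(t)
    # with an exponential S_u(t) = exp(-hazard * t) standing in for the
    # (in general nonparametric) survival of the non-cured patients.
    return cure_prob + (1.0 - cure_prob) * math.exp(-hazard * t)

# The curve starts at 1 and flattens out at the cure fraction for large t
tail = mixture_survival(50.0, cure_prob=0.3, hazard=0.5)
```

    The estimation problem the abstract addresses is recovering pi and S_u from censored data, where it is unknown whether a censored patient is cured or simply event-free so far; hence the EM algorithm and multiple imputations.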

  15. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed using a face shape statistical model, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing the face shapes of different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Finally, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  16. Implementation of a subcanopy solar radiation model on a forested headwater basin in the Southern Appalachians to estimate riparian canopy density and stream insolation for stream temperature models

    Science.gov (United States)

    Belica, L.; Petras, V.; Iiames, J. S., Jr.; Caldwell, P.; Mitasova, H.; Nelson, S. A. C.

    2016-12-01

    Water temperature is a key aspect of water quality and understanding how the thermal regimes of forested headwater streams may change in response to climatic and land cover changes is increasingly important to scientists and resource managers. In recent years, the forested mountain watersheds of the Southeastern U.S. have experienced changing climatic patterns as well as the loss of a keystone riparian tree species and anticipated hydrologic responses include lower summer stream flows and decreased stream shading. Solar radiation is the main source of thermal energy to streams and a key parameter in heat-budget models of stream temperature; a decrease in flow volume combined with a reduction in stream shading during summer have the potential to increase stream temperatures. The high spatial variability of forest canopies and the high spatio-temporal variability in sky conditions make estimating the solar radiation reaching small forested headwater streams difficult. The Subcanopy Solar Radiation Model (SSR) (Bode et al. 2014) is a GIS model that generates high resolution, spatially explicit estimates of solar radiation by incorporating topographic and vegetative shading with a light penetration index derived from leaf-on airborne LIDAR data. To evaluate the potential of the SSR model to provide estimates of stream insolation to parameterize heat-budget models, it was applied to the Coweeta Basin in the Southern Appalachians using airborne LIDAR (NCALM 2009, 1m resolution). The LIDAR derived canopy characteristics were compared to current hyperspectral images of the canopy for changes and the SSR estimates of solar radiation were compared with pyranometer measurements of solar radiation at several subcanopy sites during the summer of 2016. Preliminary results indicate the SSR model was effective in identifying variations in canopy density and light penetration, especially in areas associated with road and stream corridors and tree mortality. Current LIDAR data and

  17. Evaluation of three energy balance-based evaporation models for estimating monthly evaporation for five lakes using derived heat storage changes from a hysteresis model

    NARCIS (Netherlands)

    Duan, Z.; Bastiaanssen, W.G.M.

    2017-01-01

    The heat storage changes (Qt) can be a significant component of the energy balance in lakes, and it is important to account for Qt for reasonable estimation of evaporation at monthly and finer timescales if the energy balance-based evaporation models are used. However, Qt has been often neglected in

  18. Assessing concentration uncertainty estimates from passive microwave sea ice products

    Science.gov (United States)

    Meier, W.; Brucker, L.; Miller, J. A.

    2017-12-01

    Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.

  19. Effect of the Absorbed Photosynthetically Active Radiation Estimation Error on Net Primary Production Estimation - A Study with MODIS FPAR and TOMS Ultraviolet Reflective Products

    International Nuclear Information System (INIS)

    Kobayashi, H.; Matsunaga, T.; Hoyano, A.

    2002-01-01

Absorbed photosynthetically active radiation (APAR), which is defined as the downward solar radiation in 400-700 nm absorbed by vegetation, is one of the significant variables for Net Primary Production (NPP) estimation from satellite data. Toward the reduction of the uncertainties in global NPP estimation, it is necessary to clarify the APAR accuracy. In this paper, we first propose an improved PAR estimation method based on Eck and Dye's method, in which ultraviolet (UV) reflectivity data derived from the Total Ozone Mapping Spectrometer (TOMS) at the top of the atmosphere are used for cloud transmittance estimation. The proposed method considers the variable effects of land surface UV reflectivity on the satellite-observed UV data. Monthly mean PAR comparisons between satellite-derived and ground-based data at various meteorological stations in Japan indicated that the improved PAR estimation method reduced the bias errors in the summer season. Assuming the relative error of the fraction of PAR (FPAR) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) to be 10%, we estimated APAR relative errors to be 10-15%. Annual NPP is calculated using APAR derived from MODIS FPAR and the improved PAR estimation method. It is shown that the random and bias errors of annual NPP in a 1 km resolution pixel are less than 4% and 6%, respectively. The APAR bias errors due to the PAR bias errors also affect the estimated total NPP. We estimated the most probable total annual NPP in Japan by subtracting the PAR bias errors; it amounts to about 248 MtC/yr. Total annual NPP computed with the improved PAR estimation method and with Eck and Dye's method differs from this most probable value by 4% and 9%, respectively. A previous intercomparison study of fifteen NPP models 4) showed that global NPP estimates among the models range over 44.4-66.3 GtC/yr (coefficient of variation = 14%). Hence we conclude that the NPP estimation uncertainty due to APAR estimation error is small

  20. A space-time hybrid hourly rainfall model for derived flood frequency analysis

    Directory of Open Access Journals (Sweden)

    U. Haberlandt

    2008-12-01

Full Text Available For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rainfall gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series.

    First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities a predefined profile is used. In the second step a multi-site resampling procedure is applied on the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study synthetic precipitation is generated for some locations with short observation records in two mesoscale catchments of the Bode river basin located in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as in
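
    The first modelling step can be sketched with a Gaussian copula tying wet-spell duration to intensity; the copula family, the exponential marginals, and all parameter values below are illustrative assumptions rather than the distributions fitted in the study:

```python
import math
import random

random.seed(11)

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_wet_spell(r=0.6, mean_dur=6.0, mean_int=1.5):
    # One wet spell: (duration in hours, mean intensity in mm/h).
    # Dependence is introduced through a Gaussian copula with
    # correlation r; both marginals are exponential (assumed here).
    z1 = random.gauss(0.0, 1.0)
    z2 = r * z1 + math.sqrt(1.0 - r * r) * random.gauss(0.0, 1.0)
    u1 = min(norm_cdf(z1), 1.0 - 1e-12)   # guard against u == 1
    u2 = min(norm_cdf(z2), 1.0 - 1e-12)
    dur = -mean_dur * math.log(1.0 - u1)   # inverse exponential CDF
    inten = -mean_int * math.log(1.0 - u2)
    return dur, inten

spells = [sample_wet_spell() for _ in range(20000)]
```

    The copula preserves the marginal distributions of duration and intensity while inducing the positive dependence between them that the alternating renewal model requires.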

  1. Perceived risks of HIV/AIDS and first sexual intercourse among youth in Cape Town, South Africa.

    Science.gov (United States)

    Tenkorang, Eric Y; Rajulton, Fernando; Maticka-Tyndale, Eleanor

    2009-04-01

The 'Health Belief Model' (HBM) identifies perception of HIV/AIDS risks, recognition of its seriousness, and knowledge about prevention as predictors of safer sexual activity. Using data from the Cape Area Panel Survey (CAPS) and hazard models, this study examines the impact of risk perception, considered the first step in HIV prevention, set within the context of the HBM and socio-economic, familial and school factors, on the timing of first sexual intercourse among youth aged 14-22 in Cape Town, South Africa. Of the HBM components, female youth who perceive their risk as 'very small' and males with higher knowledge experience their sexual debut later than comparison groups, net of other influences. For both males and females, socio-economic and familial factors also influence the timing of sexual debut, confirming the need to consider the social embeddedness of this sexual behavior as well as the rational components of decision making when designing prevention programs.

  2. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
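
    The core idea of replacing a high-dimensional joint distribution with small two-dimensional ones can be illustrated on a toy relation; the attribute names, values, and counts below are invented, not taken from the paper:

```python
from collections import Counter

# Toy relation with correlated attributes (city functionally determines
# state), where the independence assumption misestimates selectivity.
rows = [("NYC", "NY"), ("NYC", "NY"), ("Albany", "NY"),
        ("LA", "CA"), ("LA", "CA"), ("SF", "CA")]
n = len(rows)

city_counts = Counter(c for c, s in rows)
state_counts = Counter(s for c, s in rows)
joint_counts = Counter(rows)

def sel_independence(city, state):
    # Textbook optimizer estimate: attributes assumed independent.
    return (city_counts[city] / n) * (state_counts[state] / n)

def sel_two_dimensional(city, state):
    # Graphical-model style: keep the small 2-D joint distribution.
    return joint_counts[(city, state)] / n

true_sel = sum(1 for row in rows if row == ("NYC", "NY")) / n
```

    On this relation the independence estimate understates the true selectivity of the conjunctive predicate, while the two-dimensional distribution recovers it exactly.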

  3. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.
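
    A minimal sketch of the kind of correction such measurement error models enable, assuming the classical additive error model with known error variance (the monograph's estimators are more general than this method-of-moments correction):

```python
import random

random.seed(1)

beta_true, sigma_u2, n = 2.0, 1.0, 20000
x = [random.gauss(0, 1) for _ in range(n)]               # latent covariate
y = [beta_true * xi + random.gauss(0, 0.5) for xi in x]
w = [xi + random.gauss(0, sigma_u2 ** 0.5) for xi in x]  # error-prone measurement

def slope(u, v):
    # Ordinary least squares slope of v on u.
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

naive = slope(w, y)                         # attenuated towards zero
mw = sum(w) / n
var_w = sum((a - mw) ** 2 for a in w) / n
reliability = (var_w - sigma_u2) / var_w    # assumes error variance known
corrected = naive / reliability             # method-of-moments correction
```

    The naive slope is biased towards zero by the reliability ratio; dividing by it recovers (approximately) the true regression parameter.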

  4. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  5. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  6. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge data sets at daily to annual scales (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2015-04-01

We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over the contiguous United States (CONUS) for the period 2002-2012. This comparison effort includes satellite multi-sensor data sets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation data sets are compared with surface observations from the Global Historical Climatology Network-Daily (GHCN-D) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates show satisfactory agreement with GHCN-D for all products over CONUS (±6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western USA for higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in. day-1) over the Pacific Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.

  7. Integrating hydrodynamic models and COSMO-SkyMed derived products for flood damage assessment

    Science.gov (United States)

    Giuffra, Flavio; Boni, Giorgio; Pulvirenti, Luca; Pierdicca, Nazzareno; Rudari, Roberto; Fiorini, Mattia

    2015-04-01

observe the temporal evolution of the event (e.g. the water receding). In this paper, the first outcomes of a study aiming at combining COSMO-SkyMed derived flood maps with hydrodynamic models are presented. The study is carried out within the framework of the EO-based CHange detection for Operational Flood Management (ECHO-FM) project, funded by the Italian Space Agency (ASI) as part of the research activities agreed in the cooperation between ASI and the Japan Aerospace Exploration Agency (JAXA). The flood that hit the region of Shkodër, in Albania, in January 2010, is considered as the test case. The work focuses on the utility of a dense temporal series of SAR data, such as that available through CSK for this case study, used in combination with a hydrodynamic model to monitor over a long time (in the order of 3 weeks) the natural drainage of the Shkodër floodplain. It is shown that by matching the outputs of the model to SAR observations, the hydrodynamic inconsistencies in CSK estimates can be corrected.

  8. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
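
    The trapezoidal variant can be sketched for a one-parameter linear ODE, dx/dt = -theta*x. Here the observation noise is kept small and the raw observations stand in for the penalized-spline state estimates used in the paper; the true parameter and grid are invented:

```python
import math
import random

random.seed(2)

theta_true, h = 0.5, 0.1
ts = [h * i for i in range(51)]                       # grid on [0, 5]
x_obs = [math.exp(-theta_true * t) + random.gauss(0, 0.001) for t in ts]

# Trapezoidal discretization of dx/dt = -theta*x:
#   x_{i+1} - x_i = -(theta*h/2) * (x_i + x_{i+1})
# so regress (x_i - x_{i+1}) on (h/2)*(x_i + x_{i+1}) through the origin.
z = [(h / 2.0) * (x_obs[i] + x_obs[i + 1]) for i in range(50)]
d = [x_obs[i] - x_obs[i + 1] for i in range(50)]
theta_hat = (sum(zi * di for zi, di in zip(z, d))
             / sum(zi * zi for zi in z))
```

    The estimating equation is linear in the parameter, so no ODE solver is run during estimation; that is the computational saving the abstract refers to.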

  9. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. For estimation of such a model we compare two adaptive estimators, a nonparametric kernel estimator and a nearest neighbour regression estimator, with the ordinary least squares estimator, and show the attractive performance of the adaptive estimators. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
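
    A minimal sketch of an adaptive estimator of this flavour, assuming a simple linear model whose error variance is estimated by kernel smoothing of squared OLS residuals; the data, bandwidth, and variance function are invented for illustration and this is not the authors' exact estimator:

```python
import math
import random

random.seed(7)

def fit_ls(x, y, w=None):
    # (Weighted) least squares for y = a + b*x.
    if w is None:
        w = [1.0] * len(x)
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b

def adaptive_weights(x, resid2, h=0.3):
    # Nadaraya-Watson smooth of squared residuals estimates the
    # variance function; the adaptive weights are its inverse.
    weights = []
    for xi in x:
        k = [math.exp(-0.5 * ((xi - xj) / h) ** 2) for xj in x]
        v = sum(ki * r for ki, r in zip(k, resid2)) / sum(k)
        weights.append(1.0 / max(v, 1e-8))
    return weights

b_ols, b_adaptive = [], []
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(150)]
    y = [1.0 + 2.0 * xi + random.gauss(0, 0.2 + 2.0 * abs(xi)) for xi in x]
    a1, b1 = fit_ls(x, y)                       # ordinary least squares
    resid2 = [(yi - a1 - b1 * xi) ** 2 for xi, yi in zip(x, y)]
    a2, b2 = fit_ls(x, y, adaptive_weights(x, resid2))
    b_ols.append(b1)
    b_adaptive.append(b2)

def spread(v):
    m = sum(v) / len(v)
    return (sum((t - m) ** 2 for t in v) / len(v)) ** 0.5
```

    Across the replications, the adaptively weighted slope varies less than the OLS slope, which is the efficiency gain the abstract's comparisons measure through standard errors.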

  10. Assimilation of Remotely Sensed Soil Moisture Profiles into a Crop Modeling Framework for Reliable Yield Estimations

    Science.gov (United States)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2017-12-01

Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally been focused on the use of data derived from the microwave bands and results have often shown that improvements to model simulations have been limited due to the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave deduced surface estimates. In this study, root zone soil moisture estimates from the TIR based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology Transfer, DSSAT) via the ensemble Kalman Filter. 
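
    A heavily simplified sketch of the profile idea: a monotone profile is pinned to a surface boundary value (SMAP-like) and forced to match a prescribed column mean (ALEXI-like). The exponential functional form and all numbers are illustrative assumptions, not the POME derivation used in the study:

```python
import math

def entropy_style_profile(theta_s, theta_bar, depth, n=50):
    # Illustrative monotone profile theta(z) = theta_s * exp(lam * z)
    # whose column mean equals theta_bar; lam is found by bisection.
    # This assumed form stands in for the POME-derived profile.
    def col_mean(lam):
        if abs(lam) < 1e-12:
            return theta_s
        return theta_s * (math.exp(lam * depth) - 1.0) / (lam * depth)
    lo, hi = -50.0 / depth, 50.0 / depth      # bracket for lam
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if col_mean(mid) < theta_bar:         # col_mean increases in lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    zs = [depth * i / (n - 1) for i in range(n)]
    return zs, [theta_s * math.exp(lam * z) for z in zs]

# Surface boundary 0.15 m3/m3, column mean 0.25 m3/m3, 1 m column.
zs, profile = entropy_style_profile(theta_s=0.15, theta_bar=0.25, depth=1.0)
```

    As in the study's setup, only the surface value and the column mean are required to pin down the whole vertical profile.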
The study was conducted over the Southeastern United States for the

  11. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrates the advantages of the cokriging model.

  12. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)] exp(-[(1/lambda) + (1/theta)]t) for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
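
    The availability formula in the abstract can be checked numerically. A short sketch, simulating failure-repair cycles and plugging the maximum likelihood estimates (the sample means, reading lambda and theta as mean parameters of the exponential models) into A(t); the true parameter values are invented:

```python
import math
import random

random.seed(3)

mttf, mttr = 100.0, 5.0     # true mean time-to-failure / time-to-repair
n = 5000
x = [random.expovariate(1.0 / mttf) for _ in range(n)]   # failure times
y = [random.expovariate(1.0 / mttr) for _ in range(n)]   # repair times

lam_hat = sum(x) / n        # ML estimate of the mean parameter lambda
theta_hat = sum(y) / n      # ML estimate of the mean parameter theta

def availability(t):
    # Instantaneous availability from the abstract:
    #   A(t) = lam/(lam+th) + [th/(lam+th)] * exp(-(1/lam + 1/th) t)
    a_inf = lam_hat / (lam_hat + theta_hat)
    decay = math.exp(-((1.0 / lam_hat) + (1.0 / theta_hat)) * t)
    return a_inf + (theta_hat / (lam_hat + theta_hat)) * decay
```

    A(0) equals 1 (the plant starts operational) and A(t) decays to the steady-state availability lambda/(lambda+theta), matching the two limiting cases stated in the abstract.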

  13. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Directory of Open Access Journals (Sweden)

    Troy W Grovenburg

Full Text Available New-hoof growth is regarded as the most reliable metric for predicting age of newborn ungulates, but variation in estimated age among hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation on survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24-hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown given the potential inaccuracies among equations used to estimate age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i
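
    The sensitivity to equation choice can be illustrated with two hypothetical linear hoof-growth equations; the slopes and intercepts below are invented, not taken from the literature:

```python
# Two hypothetical linear hoof-growth equations (hoof length in mm
# mapped to age in days), illustrating how the choice of equation
# shifts the estimated age-at-capture for the same animal.
def age_eq_a(hoof_mm):
    # hypothetical: 25 mm at birth, growing 0.40 mm/day
    return (hoof_mm - 25.0) / 0.40

def age_eq_b(hoof_mm):
    # hypothetical: 27 mm at birth, growing 0.32 mm/day
    return (hoof_mm - 27.0) / 0.32

hoof = 31.0                              # one measured neonate
ages = [age_eq_a(hoof), age_eq_b(hoof)]
spread_days = max(ages) - min(ages)      # disagreement between equations
```

    Even modest differences in assumed growth rate and hoof length at birth shift the estimated age by several days, which is why daily survival intervals are too fine when age-at-capture is itself an estimate.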

  14. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the treatment-effect estimator under equal cluster sizes to that under unequal cluster sizes. We discuss the exchangeable structure, which is commonly used in CRTs, and derive simpler formulas of RE for continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to offset the efficiency loss. Additionally, we propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
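
    Under a standard simplification, a cluster of size n_i contributes information n_i / (1 + (n_i - 1)ρ) to a between-cluster treatment contrast with exchangeable correlation, and the RE described above can be computed directly. This is a common textbook form, not necessarily the paper's exact derivation:

```python
def cluster_info(n_i, rho):
    # Effective information contributed by one cluster of size n_i
    # under an exchangeable working correlation with parameter rho.
    return n_i / (1.0 + (n_i - 1.0) * rho)

def relative_efficiency(sizes, rho):
    # RE = Var(equal cluster sizes) / Var(unequal cluster sizes),
    # holding the number of clusters and the mean size fixed.
    k = len(sizes)
    nbar = sum(sizes) / k
    var_unequal = 1.0 / sum(cluster_info(n, rho) for n in sizes)
    var_equal = 1.0 / (k * cluster_info(nbar, rho))
    return var_equal / var_unequal

re_equal = relative_efficiency([10] * 20, rho=0.05)     # 1 by construction
re_mixed = relative_efficiency([2, 18] * 10, rho=0.05)  # < 1: efficiency loss
```

    Because the information function is concave in cluster size (for ρ > 0), spreading the same total sample over unequal clusters always loses efficiency, which motivates the paper's adjusted sample size.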

  15. Estimating the Cross-Shelf Export of Riverine Materials: Part 1. General Relationships From an Idealized Numerical Model

    Science.gov (United States)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.

  16. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  17. A multilevel model for cardiovascular disease prevalence in the US and its application to micro area prevalence estimates

    Directory of Open Access Journals (Sweden)

    Congdon Peter

    2009-01-01

    Full Text Available Abstract Background Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. Methods A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. Results To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Conclusion Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states and for

  18. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity, and a beta-binomial model for the analysis of failure probability. The parameters of these three models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes describing the failure behaviour of a set of components with a Weibull intensity function; the method of maximum likelihood is used to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model, an application of marked point processes, for modelling common cause failures in a system. The parameters of the binomial failure rate model are also estimated with the maximum likelihood method
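
    For the simplest model in this record, the exponential lifetime distribution with right-censored observations, the maximum likelihood estimate has a well-known closed form (number of observed failures divided by total time on test). A minimal sketch with made-up data, not code from the report:

    ```python
    import numpy as np

    def exponential_mle(times, observed):
        """MLE of the exponential failure rate from possibly right-censored data.

        times    : observed lifetimes or censoring times
        observed : boolean array, True where a failure was actually observed
        """
        times = np.asarray(times, dtype=float)
        observed = np.asarray(observed, dtype=bool)
        # Closed form for the exponential model:
        # lambda_hat = (# failures) / (total time on test)
        return observed.sum() / times.sum()

    # Example: 6 units on test, two censored at t = 5.0
    rate = exponential_mle([1.2, 0.7, 5.0, 2.3, 5.0, 0.9],
                           [True, True, False, True, False, True])
    ```

    For the other distributions listed (Weibull, lognormal, gamma, mixtures) no closed form exists and the likelihood must be maximized numerically.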

  19. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult

  20. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  1. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
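
    As a point of reference for what KLIEP-style methods improve on, the naive baseline fits the two densities separately and takes their ratio. A sketch with single Gaussians (a one-component "mixture"; all numbers invented), not the GM-KLIEP algorithm itself:

    ```python
    import numpy as np

    def gaussian_pdf(x, mu, var):
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    def naive_importance(x, num_sample, den_sample):
        """Estimate w(x) = p_nu(x) / p_de(x) by fitting one Gaussian to each
        sample separately.  KLIEP-style methods instead model the ratio w(x)
        directly, avoiding the error amplification of the division step."""
        mu_n, var_n = num_sample.mean(), num_sample.var()
        mu_d, var_d = den_sample.mean(), den_sample.var()
        return gaussian_pdf(x, mu_n, var_n) / gaussian_pdf(x, mu_d, var_d)

    rng = np.random.default_rng(0)
    w = naive_importance(0.0, rng.normal(0, 1, 5000), rng.normal(1, 1, 5000))
    # True ratio at x = 0 is N(0|0,1)/N(0|1,1) = exp(0.5), about 1.65
    ```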

  2. The Arbitrage Pricing Model: A Pedagogic Derivation and a Spreadsheet-Based Illustration

    Directory of Open Access Journals (Sweden)

    Clarence C. Y. Kwan

    2016-05-01

    Full Text Available This paper derives, from a pedagogic perspective, the Arbitrage Pricing Model, which is an important asset pricing model in modern finance. The derivation is based on the idea that, if a self-financed investment has no risk exposures, the payoff from the investment can only be zero. Microsoft Excel plays an important pedagogic role in this paper. The Excel illustration not only helps students recognize more fully the various nuances in the model derivation, but also serves as a good starting point for students to explore on their own the relevance of the noise issue in the model derivation.
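
    The paper's spreadsheet illustration is not reproduced here, but the core no-arbitrage argument it describes can be checked numerically: if expected returns are linear in factor loadings, any self-financed portfolio with zero factor exposure earns zero expected payoff. A sketch with invented loadings and premia:

    ```python
    import numpy as np

    # Hypothetical numbers for 4 assets and 2 factors (not from the paper).
    B = np.array([[1.0, 0.5],
                  [0.8, 1.2],
                  [0.3, 0.9],
                  [1.1, 0.2]])          # factor loadings
    rf = 0.02                           # risk-free rate
    lam = np.array([0.05, 0.03])        # factor risk premia
    mu = rf + B @ lam                   # the APT pricing relation

    # Recover the risk premia from expected excess returns by least squares.
    lam_hat, *_ = np.linalg.lstsq(B, mu - rf, rcond=None)

    # A self-financed portfolio (weights sum to zero) with no factor exposure
    # (B.T @ w = 0) must have zero expected payoff: take weights from the
    # null space of the stacked constraint matrix and check.
    A = np.vstack([np.ones(4), B.T])    # 3 constraints on 4 weights
    w = np.linalg.svd(A)[2][-1]         # null-space direction
    payoff = w @ mu                     # zero up to rounding
    ```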

  3. Determining Optimal New Generation Satellite Derived Metrics for Accurate C3 and C4 Grass Species Aboveground Biomass Estimation in South Africa

    Directory of Open Access Journals (Sweden)

    Cletah Shoko

    2018-04-01

    Full Text Available While satellite data has proved to be a powerful tool in estimating C3 and C4 grass species Aboveground Biomass (AGB, finding an appropriate sensor that can accurately characterize the inherent variations remains a challenge. This limitation has hampered the remote sensing community from continuously and precisely monitoring their productivity. This study assessed the potential of a Sentinel 2 MultiSpectral Instrument, Landsat 8 Operational Land Imager, and WorldView-2 sensors, with improved earth imaging characteristics, in estimating C3 and C4 grasses AGB in the Cathedral Peak, South Africa. Overall, all sensors have shown considerable potential in estimating species AGB; with the use of different combinations of the derived spectral bands and vegetation indices producing better accuracies. However, WorldView-2 derived variables yielded better predictive accuracies (R2 ranging between 0.71 and 0.83; RMSEs between 6.92% and 9.84%, followed by Sentinel 2, with R2 between 0.60 and 0.79; and an RMSE 7.66% and 14.66%. Comparatively, Landsat 8 yielded weaker estimates, with R2 ranging between 0.52 and 0.71 and high RMSEs ranging between 9.07% and 19.88%. In addition, spectral bands located within the red edge (e.g., centered at 0.705 and 0.745 µm for Sentinel 2, SWIR, and NIR, as well as the derived indices, were found to be very important in predicting C3 and C4 AGB from the three sensors. The competence of these bands, especially of the free-available Landsat 8 and Sentinel 2 dataset, was also confirmed from the fusion of the datasets. Most importantly, the three sensors managed to capture and show the spatial variations in AGB for the target C3 and C4 grassland area. This work therefore provides a new horizon and a fundamental step towards C3 and C4 grass productivity monitoring for carbon accounting, forage mapping, and modelling the influence of environmental changes on their productivity.

  4. Piecewise Loglinear Estimation of Efficient Production Surfaces

    OpenAIRE

    Rajiv D. Banker; Ajay Maindiratta

    1986-01-01

    Linear programming formulations for piecewise loglinear estimation of efficient production surfaces are derived from a set of basic properties postulated for the underlying production possibility sets. Unlike the piecewise linear model of Banker, Charnes, and Cooper (Banker R. D., A. Charnes, W. W. Cooper. 1984. Models for the estimation of technical and scale inefficiencies in data envelopment analysis. Management Sci. 30 (September) 1078--1092.), this approach permits the identification of ...

  5. Co-estimation of state-of-charge, capacity and resistance for lithium-ion batteries based on a high-fidelity electrochemical model

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhang, Lei; Zhu, Jianguo; Wang, Guoxiu; Jiang, Jiuchun

    2016-01-01

    Highlights: • The numerical solution for an electrochemical model is presented. • Trinal PI observers are used to concurrently estimate SOC, capacity and resistance. • An iteration-approaching method is incorporated to enhance estimation performance. • The robustness against aging and temperature variations is experimentally verified. - Abstract: Lithium-ion batteries have been widely used as enabling energy storage in many industrial fields. Accurate modeling and state estimation play fundamental roles in ensuring the safe, reliable and efficient operation of lithium-ion battery systems. A physics-based electrochemical model (EM) is highly desirable for its inherent ability to push batteries to operate at their physical limits. In state-of-charge (SOC) estimation, continuous capacity fade and resistance deterioration make erroneous estimation results more likely. In this paper, trinal proportional-integral (PI) observers with a reduced physics-based EM are proposed to simultaneously estimate SOC, capacity and resistance for lithium-ion batteries. First, a numerical solution for the employed model is derived. PI observers are then developed to realize the co-estimation of battery SOC, capacity and resistance. The moving-window ampere-hour counting technique and the iteration-approaching method are also incorporated to improve estimation accuracy. The robustness of the proposed approach against erroneous initial values, different battery cell aging levels and ambient temperatures is systematically evaluated, and the experimental results verify the effectiveness of the proposed method.
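
    The paper's trinal observers run on a reduced electrochemical model; as a rough illustration of the underlying idea only, the sketch below runs a single PI observer against a toy linear open-circuit-voltage battery. All constants and gains here are invented for the example, not taken from the paper:

    ```python
    import numpy as np

    dt, Q = 1.0, 3600.0                  # time step (s), capacity (A*s, i.e. 1 Ah)
    Kp, Ki = 0.5, 0.01                   # illustrative PI gains
    ocv = lambda soc: 3.0 + 1.2 * soc    # toy linear OCV curve (V)

    soc_true, soc_hat, integ = 0.8, 0.5, 0.0   # observer starts with a wrong SOC
    for _ in range(600):
        I = 1.0                          # 1 A constant discharge
        soc_true -= I * dt / Q           # "plant": exact Coulomb counting
        v_meas = ocv(soc_true)           # measured terminal voltage
        e = v_meas - ocv(soc_hat)        # voltage innovation
        integ += e * dt
        # Observer: Coulomb counting corrected by the PI voltage-error term
        soc_hat += (-I / Q + Kp * e + Ki * integ) * dt

    # soc_hat converges toward soc_true despite the bad initial guess
    ```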

  6. Modeling and estimation of a low degree geopotential model from terrestrial gravity data

    Science.gov (United States)

    Pavlis, Nikolaos K.

    1988-01-01

    The development of appropriate modeling and adjustment procedures for the estimation of harmonic coefficients of the geopotential from surface gravity data was studied, in order to provide an optimum way of utilizing the terrestrial gravity information in combination solutions currently developed at NASA/Goddard Space Flight Center for use in the TOPEX/POSEIDON mission. The mathematical modeling was based on the fundamental boundary condition of the linearized Molodensky boundary value problem. Atmospheric and ellipsoidal corrections were applied to the surface anomalies. Terrestrial gravity solutions were found to be in good agreement with the satellite ones over areas which are gravimetrically well surveyed, such as North America or Australia. However, systematic differences between the terrestrial-only models and GEMT1 were found over extended regions in Africa, the Soviet Union, and China. In Africa, gravity anomaly differences on the order of 20 mGal and undulation differences on the order of 15 meters occur over regions extending 2000 km in diameter. Comparisons of the GEMT1-implied undulations with 32 well-distributed Doppler-derived undulations gave an RMS difference of 2.6 m, while the corresponding comparison with undulations implied by the terrestrial solution gave an RMS difference on the order of 15 m, which implies that the terrestrial data in that region are substantially in error.

  7. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Comparison of sex education models on knowledge and change of attitudes toward sexual practices among higher-education students in the Caguas region, Puerto Rico

    Science.gov (United States)

    Juan, Vallejo Ramos L.

    As an alternative to the Sexual Education Traditional Model (SETM) used in the state schools of Puerto Rico, the Health Beliefs Model (HBM) facilitates a curricular design that improves the ability of students to respond to peer pressure through attitudes that encourage sexual behaviors with a lower risk of spreading Sexually Transmitted Diseases (STD). In addition, it provides activities to increase self-esteem, communication and decision making. This investigation compared the SETM and the HBM with respect to increases in knowledge and changes in attitudes of high risk for the spread of STD, using a validated questionnaire (United States Agency for International Development, USAID) named "Endesa 2007", adapted to Puerto Rico by Dr. Marta Collazo, administered as a pretest to a convenience sample of students between 17 and 19 years of age from 2 state schools of San Lorenzo. Then, 10 hours of training on STD and condom-use lessons were administered to half of the students using the SETM; the other half received additional lessons using the HBM. Finally, both groups took the questionnaire again as a posttest. On average, the sample of students did not reach the basic levels of knowledge and attitudes towards STD in the pretest. This reflects 2 possible implications for the SETM: first, that the way STD content is implemented as part of the Sexual Education curriculum is inefficient; second, that the acquired information or attitudes may not persist. On the posttest, the HBM increased knowledge of STD by 0.41 points (average) over the SETM. There was no significant difference between the two models in attitudes, implying that both models are equally effective.
The findings suggest that the HBM is more effective in increasing knowledge of STD, but equally effective as the SETM in attitude change for the

  9. Estimating shallow groundwater recharge in the headwaters of the Liverpool Plains using SWAT

    OpenAIRE

    Sun, H.; Cornish, P.S.

    2005-01-01

    Metadata only record A physically based catchment model (SWAT) was used for recharge estimation in the headwaters of the Liverpool Plains in NSW, Australia. The study used water balance modelling at the catchment scale to derive parameters for long-term recharge estimation. The derived parameters were further assessed at a subcatchment scale. Modelling results suggest that recharge occurs only in wet years, and is dominated by a few significant years or periods. The results were matched by...

  10. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating three or more parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
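
    The James-Stein result invoked here is easy to state concretely: the raw multivariate mean estimate is shrunk toward zero by a data-dependent factor, and for three or more dimensions this dominates the raw estimate in total squared-error risk. A minimal sketch (positive-part variant, invented numbers):

    ```python
    import numpy as np

    def james_stein(x, sigma2=1.0):
        """James-Stein shrinkage of a p-dimensional normal mean estimate (p >= 3).

        Shrinks the raw observation x toward zero by 1 - (p-2)*sigma2/||x||^2,
        clipped at zero (the positive-part variant)."""
        x = np.asarray(x, dtype=float)
        p = x.size
        shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
        return max(shrink, 0.0) * x

    theta_hat = james_stein([2.0, 1.0, 0.5, -1.5])
    ```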

  11. Global seasonal strain and stress models derived from GRACE loading, and their impact on seismicity

    Science.gov (United States)

    Chanard, K.; Fleitout, L.; Calais, E.; Craig, T. J.; Rebischung, P.; Avouac, J. P.

    2017-12-01

    Loading by continental water, the atmosphere and the oceans deforms the Earth at various spatio-temporal scales, inducing crustal and mantle stress perturbations that may play a role in earthquake triggering. Deformation of the Earth by this surface loading is observed in GNSS position time series. While various models predict the vertical observations well, explaining horizontal displacements remains challenging. We model the elastic deformation induced by loading derived from GRACE for coefficients of degree 2 and higher. We estimate the degree-1 deformation field by comparison between predictions of our model and IGS-repro2 solutions at a globally distributed network of 700 GNSS sites, separating the horizontal and vertical components to avoid biases between components. The misfit between model and data is reduced compared to previous studies, particularly on the horizontal component. The associated geocenter motion time series are consistent with results derived from other datasets. We also discuss the impact on our results of systematic errors in GNSS geodetic products, in particular of the draconitic error. We then compute stress tensor time series induced by GRACE loads and discuss the potential link between large-scale seasonal mass redistributions and seismicity. Within the crust, we estimate hydrologically induced stresses in the intraplate New Madrid Seismic Zone, where secular stressing rates are unmeasurably low. We show that a significant variation in the rate of micro-earthquakes at annual and multi-annual timescales coincides with stresses induced by hydrological loading in the upper Mississippi embayment, with no significant phase lag, directly modulating regional seismicity. We also investigate pressure variations in the mantle transition zone and discuss potential correlations between the statistically significant observed seasonality of deep-focus earthquakes, most likely due to mineralogical transformations, and surface hydrological loading.

  12. Comparison of human adipose-derived stem cells and bone marrow-derived stem cells in a myocardial infarction model

    DEFF Research Database (Denmark)

    Rasmussen, Jeppe; Frøbert, Ole; Holst-Hansen, Claus

    2014-01-01

    Background: Treatment of myocardial infarction with bone marrow-derived mesenchymal stem cells, and recently also adipose-derived stem cells, has shown promising results. In contrast to clinical trials, which use autologous bone marrow-derived cells from the ischemic patient, animal myocardial infarction models often use young donors and young, often immune-compromised, recipient animals. Our objective was to compare bone marrow-derived mesenchymal stem cells with adipose-derived stem cells from an elderly ischemic patient in the treatment of myocardial infarction, using a fully grown non-immune-compromised rat model. Methods: Mesenchymal stem cells were isolated from adipose tissue and bone marrow and compared with respect to surface markers and proliferative capability. To compare the regenerative potential of the two stem cell populations, male Sprague-Dawley rats were...

  13. A note on modeling of tumor regression for estimation of radiobiological parameters

    International Nuclear Information System (INIS)

    Zhong, Hualiang; Chetty, Indrin

    2014-01-01

    Purpose: Accurate calculation of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on derived parameters. In this study, the authors have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for estimation of radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time T_d, half-life of dead cells T_r, and cell survival fraction SF_D under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from two existing models: Chvetsov's model (C-model) and Lim's model (L-model). The C-model and L-model were optimized with the parameter T_d fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43 ± 0.08, and the half-life of dead cells averaged over the six patients is 17.5 ± 3.2 days. The parameters T_r and SF_D optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the T_d-fixed C-model, and by 32.1% and 112.3% from those optimized with the T_d-fixed L-model, respectively. Conclusions: The Z-model was analytically constructed from the differential equations of cell populations that describe changes in the number of different tumor cells during the course of radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The generated model and its optimization method may help develop high-quality treatment regimens for individual patients
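
    The abstract does not reproduce the Z-model equations, but the role of the three named parameters can be illustrated with a generic two-compartment birth-death sketch: viable cells regrow with doubling time T_d, a daily dose kills a fraction 1 - SF_D of them, and dead cells clear with half-life T_r. This is an invented illustration, not the authors' model:

    ```python
    import numpy as np

    Td, Tr, SF_D = 4.0, 17.5, 0.43      # days, days, survival per fraction
    growth = np.log(2) / Td             # viable-cell growth rate
    clear = np.log(2) / Tr              # dead-cell clearance rate

    viable, dead = 1.0, 0.0             # normalized cell counts
    volume = []
    for day in range(30):
        viable *= np.exp(growth)        # one day of regrowth
        dead *= np.exp(-clear)          # one day of dead-cell clearance
        killed = viable * (1 - SF_D)    # daily treatment fraction
        viable -= killed
        dead += killed
        volume.append(viable + dead)    # total cells as a proxy for volume

    # Total cell number regresses over the treatment course, with the slow
    # clearance of dead cells (T_r) delaying the observable shrinkage.
    ```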

  14. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. To estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables were registered from that simulation to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while offering greater flexibility than the model programmed in PSIM software.

  15. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  16. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
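
    The sandwich estimator discussed here has the generic M-estimation form A^{-1} B A^{-1} (bread, meat, bread), and its robustness to model misspecification is not specific to CDMs. A minimal sketch in the familiar least-squares setting (the HC0 heteroskedasticity-robust covariance, with synthetic data), purely to illustrate the form:

    ```python
    import numpy as np

    def sandwich_cov(X, y):
        """OLS coefficients with an HC0 sandwich covariance: bread @ meat @ bread.

        bread = (X'X)^{-1} plays the role of the inverse information A^{-1};
        meat  = X' diag(residual^2) X estimates the score variance B."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        bread = np.linalg.inv(X.T @ X)
        meat = X.T @ (X * resid[:, None] ** 2)
        return beta, bread @ meat @ bread

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = 1.0 + 2.0 * X[:, 1] + rng.normal(size=200)
    beta, cov = sandwich_cov(X, y)      # cov stays consistent even if the
                                        # error-variance model is wrong
    ```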

  17. Effects of topographic data quality on estimates of shallow slope stability using different regolith depth models

    Science.gov (United States)

    Baum, Rex L.

    2017-01-01

    Thickness of colluvium or regolith overlying bedrock or other consolidated materials is a major factor in determining stability of unconsolidated earth materials on steep slopes. Many efforts to model spatially distributed slope stability, for example to assess susceptibility to shallow landslides, have relied on estimates of constant thickness, constant depth, or simple models of thickness (or depth) based on slope and other topographic variables. Assumptions of constant depth or thickness rarely give satisfactory results. Geomorphologists have devised a number of different models to represent the spatial variability of regolith depth and applied them to various settings. I have applied some of these models that can be implemented numerically to different study areas with different types of terrain and tested the results against available depth measurements and landslide inventories. The areas include crystalline rocks of the Colorado Front Range, and gently dipping sedimentary rocks of the Oregon Coast Range. Model performance varies with model, terrain type, and with quality of the input topographic data. Steps in contour-derived 10-m digital elevation models (DEMs) introduce significant errors into the predicted distribution of regolith and landslides. Scan lines, facets, and other artifacts further degrade DEMs and model predictions. Resampling to a lower grid-cell resolution can mitigate effects of facets in lidar DEMs of areas where dense forest severely limits ground returns. Due to its higher accuracy and ability to penetrate vegetation, lidar-derived topography produces more realistic distributions of cover and potential landslides than conventional photogrammetrically derived topographic data.

  18. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  19. A Fresh Start for Flood Estimation in Ungauged Basins

    Science.gov (United States)

    Woods, R. A.

    2017-12-01

    The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values, when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers are active in developing or applying the derived distribution approach, and as a result slower progress is being made. 
A change is needed: surely we have learned enough about hydrology in the last 40 years that we can make a practical hydrological advance on our methods for

  20. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  1. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
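The Dk statistic described in the abstract (average self-relationship minus the average of all self- and across-relationships) is easy to compute from a relationship matrix. A minimal sketch, using a made-up toy relationship matrix and variance component (the exact averaging convention in the paper may differ):

```python
import numpy as np

def dk_statistic(K):
    """Dk = mean self-relationship (diagonal) minus mean overall relationship."""
    K = np.asarray(K, dtype=float)
    return np.mean(np.diag(K)) - np.mean(K)

# Toy relationship matrix: two half-sibs plus one unrelated individual
K = np.array([[1.0, 0.25, 0.0],
              [0.25, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
sigma2_hat = 2.0                           # variance component from the mixed model
sigma2_ref = sigma2_hat * dk_statistic(K)  # variance referred to this population
```

Scaling the estimated variance component by Dk is what makes estimates from pedigree, identity-by-state, and kernel relationships comparable.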

  2. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models such as the different discount rate model and its cost estimation results. To do so, an analysis of the nuclear fuel cycle cost of three options (direct disposal (once through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)) from the viewpoint of economic sense, focusing on the cost estimation model, was conducted using a dynamic model. From an analysis of the fuel cycle cost estimation results, it was found that some cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the priority of the nuclear fuel cycle option from the viewpoint of economics. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling based on the most likely value using a probabilistic cost estimation except for reactor costs were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively. Namely, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the economic sense in the generation cost between the two options (direct disposal vs. Pyro-SFR recycling) can be changed because of the high reactor cost of an SFR.
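The gap between the "same discount rate" and "different discount rate" models comes down to how each cost stage is discounted. A minimal present-value sketch with invented cash flows and rates (these are not KAERI's figures):

```python
# Present-value cost of a cash-flow stream under a single discount rate
# versus stage-specific rates (all values hypothetical).
def pv(costs, rates):
    """costs[t] incurred at year t, discounted at the per-stage rate rates[t]."""
    return sum(c / (1.0 + r) ** t for t, (c, r) in enumerate(zip(costs, rates)))

costs = [100.0, 50.0, 50.0, 200.0]          # e.g. fabrication, storage, disposal
same = pv(costs, [0.05] * 4)                # traditional same-discount-rate model
diff = pv(costs, [0.05, 0.05, 0.03, 0.03])  # lower rate for back-end stages
```

Using a lower rate for long-lived back-end costs raises their present value, which is the kind of cost gap the abstract reports between the two models.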

  3. The APT model as reduced-rank regression

    NARCIS (Netherlands)

    Bekker, P.A.; Dobbelstein, P.; Wansbeek, T.J.

    Integrating the two steps of an arbitrage pricing theory (APT) model leads to a reduced-rank regression (RRR) model. So the results on RRR can be used to estimate APT models, making estimation very simple. We give a succinct derivation of estimation of RRR, derive the asymptotic variance of RRR

  4. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
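A simple way to see the equidispersion assumption fail is the Pearson chi-square divided by residual degrees of freedom, which should be near 1 for a correctly specified Poisson model. This is an illustrative diagnostic on synthetic counts, not the regression-based score test the paper uses:

```python
import numpy as np

# Pearson dispersion check: chi-square / residual d.o.f. ~ 1 under equidispersion.
def pearson_dispersion(y, mu, n_params):
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    return np.sum((y - mu) ** 2 / mu) / (len(y) - n_params)

rng = np.random.default_rng(0)
mu = np.full(500, 5.0)
y_pois = rng.poisson(mu)                     # equidispersed counts
y_nb = rng.negative_binomial(2, 2 / 7, 500)  # mean 5, variance 17.5 (overdispersed)
d_pois = pearson_dispersion(y_pois, mu, 1)   # close to 1
d_nb = pearson_dispersion(y_nb, mu, 1)       # well above 1
```

A dispersion statistic well above 1 is the signal that quasi-likelihood, robust standard errors, or a negative binomial model should be considered.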

  5. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    Science.gov (United States)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrieval, stratospheric column estimates derived from a data assimilation approach and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years

  6. A fractal derivative constitutive model for three stages in granite creep

    Directory of Open Access Journals (Sweden)

    R. Wang

Full Text Available In this paper, by replacing the Newtonian dashpot with the fractal dashpot and considering the damage effect, a new constitutive model is proposed in terms of a time-fractal derivative to describe the full creep regions of granite. The analytic solutions of the fractal derivative creep constitutive equation are derived via scaling transform. The conventional triaxial compression creep tests are performed on an MTS 815 rock mechanics test system to verify the efficiency of the new model. The granite specimen is taken from the Beishan site, the most promising area for China's high-level radioactive waste repository. It is shown that the proposed fractal model can characterize the creep behavior of granite, especially in the accelerating stage, which the classical models cannot predict. The parametric sensitivity analysis is also conducted to investigate the effects of model parameters on the creep strain of granite. Keywords: Beishan granite, Fractal derivative, Damage evolution, Scaling transformation
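The simplest fractal-dashpot element is the Scott-Blair model, whose constant-stress creep strain is a power law in time; it is sketched below as a stand-in for the fractal ingredient of the paper's model (the damage evolution and the paper's actual parameters are not reproduced here):

```python
from math import gamma

# Scott-Blair (fractional/fractal dashpot) creep under constant stress:
# eps(t) = sigma * t**alpha / (eta * Gamma(1 + alpha)).
# alpha = 1 recovers the Newtonian dashpot, eps = sigma * t / eta.
def scott_blair_strain(sigma, eta, alpha, t):
    return sigma * t ** alpha / (eta * gamma(1.0 + alpha))

eps_newton = scott_blair_strain(10.0, 100.0, 1.0, 5.0)   # Newtonian limit
eps_fractal = scott_blair_strain(10.0, 100.0, 0.5, 5.0)  # sub-linear creep
```

Varying alpha between 0 and 1 interpolates between elastic-like and viscous-like behavior, which is what lets such elements capture non-linear creep stages.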

  7. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
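The core idea of filtering with stochastic parameter estimation can be illustrated by augmenting the state with an unknown parameter and filtering the augmented system. The toy below is a plain Kalman filter on a linear model with an unknown additive bias, a simple analogue for illustration, not the SPEKF algorithm itself:

```python
import numpy as np

# Joint state/parameter filtering by state augmentation:
# x[t+1] = a*x[t] + b + noise, with b unknown; filter the pair (x, b).
rng = np.random.default_rng(3)
a, b_true, q, r = 0.9, 1.5, 0.1, 0.5
F = np.array([[a, 1.0], [0.0, 1.0]])   # dynamics of the augmented state (x, b)
H = np.array([[1.0, 0.0]])             # we observe x only
Q = np.diag([q, 1e-6])                 # tiny parameter noise keeps b adaptable

x = np.array([0.0, 0.0])               # filter mean for (x, b)
P = np.eye(2) * 10.0
x_truth = 0.0
for _ in range(500):
    x_truth = a * x_truth + b_true + np.sqrt(q) * rng.standard_normal()
    y = x_truth + np.sqrt(r) * rng.standard_normal()
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = (H @ P @ H.T)[0, 0] + r        # update
    K = (P @ H.T).ravel() / S
    x = x + K * (y - (H @ x)[0])
    P = P - np.outer(K, (H @ P).ravel())
b_est = x[1]                           # estimated bias; true value is 1.5
```

SPEKF goes further by providing exact formulas for the mean and covariance of the augmented system, which this linear toy does not need.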

  8. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  9. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
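The relationship in the abstract reduces to a subtraction of fluxes; a minimal sketch with illustrative numbers (the values and units are assumptions, not crop-specific estimates):

```python
# Daily net carbon gain = canopy gross photosynthesis - canopy dark respiration,
# both in, e.g., mol CO2 m^-2 d^-1 (illustrative values).
def daily_net_gain(p_gross, r_dark):
    return p_gross - r_dark

net = daily_net_gain(1.20, 0.35)
```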

  10. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  11. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  12. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  13. V and V-based remaining fault estimation model for safety–critical software of a nuclear power plant

    International Nuclear Information System (INIS)

    Eom, Heung-seop; Park, Gee-yong; Jang, Seung-cheol; Son, Han Seong; Kang, Hyun Gook

    2013-01-01

    Highlights: ► A software fault estimation model based on Bayesian Nets and V and V. ► Use of quantified data derived from qualitative V and V results. ► Faults insertion and elimination process was modeled in the context of probability. ► Systematically estimates the expected number of remaining faults. -- Abstract: Quantitative software reliability measurement approaches have some limitations in demonstrating the proper level of reliability in cases of safety–critical software. One of the more promising alternatives is the use of software development quality information. Particularly in the nuclear industry, regulatory bodies in most countries use both probabilistic and deterministic measures for ensuring the reliability of safety-grade digital computers in NPPs. The point of deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety–critical software. In addition software Verification and Validation (V and V) play an important role in this process. In this light, we propose a V and V-based fault estimation method using Bayesian Nets to estimate the remaining faults for safety–critical software after the software development life cycle is completed. By modeling the fault insertion and elimination processes during the whole development phases, the proposed method systematically estimates the expected number of remaining faults.

  14. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    Science.gov (United States)

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and
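The parameters the abstract names fit the standard target-cell model of HIV dynamics (uninfected cells T, infected cells I, virions V). A forward-Euler sketch with a time-varying infection rate; all parameter values are illustrative, not the fitted patient estimates:

```python
import numpy as np

# Basic target-cell HIV dynamics with the quantities named in the abstract:
# proliferation rate lam and death rate rho of uninfected cells, burst size N,
# and a time-varying infection rate k(t). Simple Euler integration.
def simulate(days=100.0, dt=0.01, lam=10.0, rho=0.1, delta=0.5,
             N=100.0, c=3.0, T0=100.0, I0=0.0, V0=1e-3):
    k = lambda t: 1e-3 * (1.0 + 0.5 * np.sin(t / 10.0))  # time-varying rate
    T, I, V = T0, I0, V0
    for step in range(int(days / dt)):
        t = step * dt
        dT = lam - rho * T - k(t) * T * V   # uninfected cells
        dI = k(t) * T * V - delta * I       # infected cells
        dV = N * delta * I - c * V          # free virions
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    return T, I, V

T, I, V = simulate()
```

Estimating k(t) and the constant rates from noisy clinical data, rather than forward-simulating them, is precisely the inverse problem the MSSB and SNLS methods address.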

  15. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    by Shih and Louis (Biometrics vol. 51, pp. 1384-1399, 1995b) and Glidden (Lifetime Data Analysis vol. 6, pp. 141-156, 2000). Because register based family studies often involve very large cohorts a method for analysing a sampled cohort is also derived together with the asymptotic properties...... of the estimators. The proposed methods are studied in simulations and the estimators are found to be highly efficient. Finally, the methods are applied to a study of mortality in twins....

  16. Modeling and Forecasting Average Temperature for Weather Derivative Pricing

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2015-01-01

    Full Text Available The main purpose of this paper is to present a feasible model for the daily average temperature on the area of Zhengzhou and apply it to weather derivatives pricing. We start by exploring the background of weather derivatives market and then use the 62 years of daily historical data to apply the mean-reverting Ornstein-Uhlenbeck process to describe the evolution of the temperature. Finally, Monte Carlo simulations are used to price heating degree day (HDD call option for this city, and the slow convergence of the price of the HDD call can be found through taking 100,000 simulations. The methods of the research will provide a frame work for modeling temperature and pricing weather derivatives in other similar places in China.
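The pricing pipeline in the abstract (mean-reverting temperature, HDD accumulation, Monte Carlo payoff) can be sketched in a few lines. Parameters, strike, and tick size below are invented for illustration, not the fitted Zhengzhou values:

```python
import numpy as np

# Monte Carlo price of an HDD call on a mean-reverting (Ornstein-Uhlenbeck)
# daily temperature, discretised with daily Euler steps.
rng = np.random.default_rng(42)
kappa, theta, sigma = 0.3, 2.0, 3.0   # reversion speed, long-run mean (C), vol
n_paths, n_days = 20000, 30
temp = np.full(n_paths, theta)
hdd = np.zeros(n_paths)
for _ in range(n_days):
    temp = temp + kappa * (theta - temp) + sigma * rng.standard_normal(n_paths)
    hdd += np.maximum(18.0 - temp, 0.0)   # daily HDD against an 18 C base

strike, tick = 450.0, 1.0
price = tick * np.mean(np.maximum(hdd - strike, 0.0))  # undiscounted payoff
```

The slow Monte Carlo convergence the paper notes shows up here too: the price estimate stabilises only as the path count grows large.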

  17. Evaluation of the Effectiveness of Nutritional Education based on Health Belief Model on Self-Esteem and BMI of Overweight and at Risk of Overweight Adolescent Girls

    Directory of Open Access Journals (Sweden)

    Leili Rabiei

    2017-08-01

Full Text Available Background Due to significant increases in the prevalence of overweight and obesity in adolescents in developed countries, much attention has been focused on this issue. This study aimed to determine the effectiveness of nutritional education based on the Health Belief Model (HBM) on self-esteem and body mass index (BMI) of overweight and at risk of overweight adolescent girls. Materials and Methods: The study subjects consist of 140 female students recruited from two high schools, who were randomly allocated to the intervention (n=70) and control (n=70) groups. The data collection instrument included sections on socio-demographic status, transportation method, physical status, and knowledge and attitudes of the students towards nutrition, which was designed according to HBM. As the intervention, a model-based educational program was implemented through six 60-minute sessions, focusing on the overweight and at-risk students. Results were compared at baseline and three months after the intervention to find the possible impacts. Results: Average score of model structures and self-esteem of students in both groups had no significant difference at baseline, but immediately after the intervention and 3 months after treatment, the mean component scores were significantly higher in intervention group than controls (P

  18. Estimating HIV incidence among adults in Kenya and Uganda: a systematic comparison of multiple methods.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    2011-03-01

Full Text Available Several approaches have been used for measuring HIV incidence in large areas, yet each presents specific challenges in incidence estimation. We present a comparison of incidence estimates for Kenya and Uganda using multiple methods: (1) Epidemic Projections Package (EPP) and Spectrum models fitted to HIV prevalence from antenatal clinics (ANC) and national population-based surveys (NPS) in Kenya (2003, 2007) and Uganda (2004/2005); (2) a survey-derived model to infer age-specific incidence between two sequential NPS; (3) an assay-derived measurement in NPS using the BED IgG capture enzyme immunoassay, adjusted for misclassification using a locally derived false-recent rate (FRR) for the assay; (4) community cohorts in Uganda; (5) prevalence trends in young ANC attendees. EPP/Spectrum-derived and survey-derived modeled estimates were similar: 0.67 [uncertainty range: 0.60, 0.74] and 0.6 [confidence interval (CI): 0.4, 0.9], respectively, for Uganda (2005), and 0.72 [uncertainty range: 0.70, 0.74] and 0.7 [CI 0.3, 1.1], respectively, for Kenya (2007). Using a local FRR, assay-derived incidence estimates were 0.3 [CI 0.0, 0.9] for Uganda (2004/2005) and 0.6 [CI 0, 1.3] for Kenya (2007). Incidence trends were similar for all methods for both Uganda and Kenya. Triangulation of methods is recommended to determine best-supported estimates of incidence to guide programs. Assay-derived incidence estimates are sensitive to the level of the assay's FRR, and uncertainty around high FRRs can significantly impact the validity of the estimate. Systematic evaluations of new and existing incidence assays are needed to study the level, distribution, and determinants of the FRR to guide whether incidence assays can produce reliable estimates of national HIV incidence.

  19. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests to calibrate a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the

  20. An Inverse Modeling Approach to Estimating Phytoplankton Pigment Concentrations from Phytoplankton Absorption Spectra

    Science.gov (United States)

    Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.

    2011-01-01

    Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r(sup 2) from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r(sup 2) from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r(sup 2) greater than 0.5) for 7 pigments and very good correlations (r(sup 2) greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
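The two linear inverse calculations described in the abstract are both ordinary least-squares problems. A synthetic sketch (real work would also constrain the recovered spectra to be non-negative, and the dimensions here are arbitrary):

```python
import numpy as np

# Step 1: recover pigment-specific absorption spectra S from measured
# spectra A and HPLC pigment concentrations C, using the model A ≈ C @ S.
# Step 2: reuse S to estimate pigment concentrations for a new spectrum.
rng = np.random.default_rng(1)
n_samples, n_pigments, n_wavelengths = 50, 3, 40
S_true = np.abs(rng.standard_normal((n_pigments, n_wavelengths)))
C = np.abs(rng.standard_normal((n_samples, n_pigments)))
A = C @ S_true + 1e-3 * rng.standard_normal((n_samples, n_wavelengths))

S_hat, *_ = np.linalg.lstsq(C, A, rcond=None)            # step 1: spectra
c_true = np.array([0.5, 1.0, 2.0])
a_new = c_true @ S_true                                  # a "new" measured spectrum
c_hat, *_ = np.linalg.lstsq(S_hat.T, a_new, rcond=None)  # step 2: pigments
```

With low noise the second inversion recovers the pigment concentrations closely, which is the property that makes the approach attractive for hyperspectral retrievals.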

  1. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.

  2. Use of Hedonic Prices to Estimate Capitalization Rate

    OpenAIRE

    Gaetano Lisi

    2015-01-01

    In this paper, a model of income capitalization is developed where hedonic prices play a key role in estimating the going-in capitalization rate. Precisely, the hedonic functions for rental and selling prices are introduced into a basic model of income capitalization. From the modified model, it is possible to derive a direct relationship between hedonic prices and capitalization rate. An advantage of the proposed approach is that estimation of the capitalization rate can be made without cons...
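The direct relationship the abstract derives reduces, in the simplest income-capitalization setting, to the ratio of hedonic rent to hedonic price. The linear forms and coefficients below are hypothetical stand-ins, not the paper's estimated functions:

```python
# Going-in capitalization rate as the ratio of the hedonic annual rent
# to the hedonic selling price (all coefficients invented for illustration).
def hedonic_rent(sqm, rooms):
    return 12.0 * (50.0 + 8.0 * sqm + 300.0 * rooms)   # annualised monthly rent

def hedonic_price(sqm, rooms):
    return 20000.0 + 1500.0 * sqm + 30000.0 * rooms

def cap_rate(sqm, rooms):
    return hedonic_rent(sqm, rooms) / hedonic_price(sqm, rooms)

r = cap_rate(80.0, 3.0)   # cap rate for an 80 m^2, 3-room dwelling
```

The point of the approach is that both hedonic functions can be estimated from market data, so the cap rate follows without observing matched sale/rental pairs.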

  3. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
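Miner's rule, one ingredient named in the abstract, is a linear damage sum; a minimal sketch in which the Rainflow counting that produces the cycle counts is assumed to have been done already (all numbers hypothetical):

```python
# Linear damage accumulation with Miner's rule; failure is declared
# when the accumulated damage reaches unity.
def miners_damage(cycle_counts, cycles_to_failure):
    """cycle_counts[i] applied cycles at stress level i; N_f from an S-N curve."""
    return sum(n / Nf for n, Nf in zip(cycle_counts, cycles_to_failure))

# Hypothetical rainflow output: applied vs. allowable cycles per stress level
D = miners_damage([1e4, 5e3, 100.0], [1e6, 1e5, 1e4])
failed = D >= 1.0
```

Repeating this accumulation over simulated load histories yields the fatigue-life samples to which Weibull parameters can then be fitted.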

  4. Estimating varying coefficients for partial differential equation models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

  5. Comparison of Satellite Rainfall Estimates and Rain Gauge Measurements in Italy, and Impact on Landslide Modeling

    Directory of Open Access Journals (Sweden)

    Mauro Rossi

    2017-12-01

    Full Text Available Landslides can be triggered by intense or prolonged rainfall. Rain gauge measurements are commonly used to predict landslides even if satellite rainfall estimates are available. Recent research focuses on the comparison of satellite estimates and gauge measurements. The rain gauge data from the Italian network (collected in the system database “Verifica Rischio Frana”, VRF) are compared with the National Aeronautics and Space Administration (NASA) Tropical Rainfall Measuring Mission (TRMM) products. For this purpose, we couple point gauge and satellite rainfall estimates at individual grid cells, evaluating the correlation between gauge and satellite data in different morpho-climatological conditions. We then analyze the statistical distributions of both rainfall data types and the rainfall events derived from them. Results show that satellite data underestimate ground data, with the largest differences in mountainous areas. Power-law models are more appropriate to correlate gauge and satellite data. The gauge- and satellite-based products exhibit different statistical distributions, and the rainfall events derived from them differ. In conclusion, satellite rainfall cannot be directly compared with ground data, and local investigation is required to account for specific morpho-climatological settings. Results suggest that satellite data can be used for forecasting landslides only after performing a local scaling between satellite and ground data.
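
    The local scaling suggested in the conclusion can be sketched as a power-law fit between paired gauge and satellite values; all numbers below are invented pairs for illustration, not the VRF/TRMM data:

```python
import math

# Hypothetical paired samples: satellite estimate (mm) vs. gauge measurement (mm).
satellite = [2.0, 5.0, 10.0, 20.0, 40.0]
gauge     = [3.1, 7.2, 13.5, 26.0, 49.0]

# Fit gauge = a * satellite**b by ordinary least squares in log-log space.
xs = [math.log(s) for s in satellite]
ys = [math.log(g) for g in gauge]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def scale_to_gauge(s):
    """Locally rescale a satellite rainfall estimate to the gauge reference."""
    return a * s ** b
```

    With these toy values the fitted exponent is slightly below one, i.e. the (synthetic) satellite product underestimates more strongly at low intensities.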

  6. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    Science.gov (United States)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice, we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
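
    A minimal sketch of the power-law depth-weighting function discussed above (the exponent value and depths are illustrative, not taken from the paper):

```python
# Depth weighting counteracts the kernel decay with depth in potential-field
# inversion. The abstract's claim is that the same exponent beta serves the
# field and its k-th derivatives; beta = 3.0 here is merely illustrative.

def depth_weight(z, beta, z0=0.0):
    """Power-law depth weight w(z) = (z + z0) ** (-beta / 2)."""
    return (z + z0) ** (-beta / 2.0)

depths = [1.0, 2.0, 4.0, 8.0]   # depths of model cells (arbitrary units)
beta = 3.0                       # illustrative depth-weighting exponent
weights = [depth_weight(z, beta) for z in depths]
```

    Dividing model cells by these weights (or multiplying kernel columns) compensates the natural decay of sensitivity with depth, so sources are not forced toward the surface.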

  7. On the assimilation of satellite derived soil moisture in numerical weather prediction models

    Science.gov (United States)

    Drusch, M.

    2006-12-01

    Satellite-derived surface soil moisture data sets are readily available and have been used successfully in hydrological applications. In many operational numerical weather prediction systems the initial soil moisture conditions are analysed from the modelled background and 2 m temperature and relative humidity. This approach has proven effective in improving surface latent and sensible heat fluxes and consequently the forecast on large geographical domains. However, since soil moisture is not always related to screen level variables, model errors and uncertainties in the forcing data can accumulate in root zone soil moisture. Remotely sensed surface soil moisture is directly linked to the model's uppermost soil layer and therefore is a stronger constraint for the soil moisture analysis. Three data assimilation experiments with the Integrated Forecast System (IFS) of the European Centre for Medium-range Weather Forecasts (ECMWF) have been performed for the two-month period of June and July 2002: a control run based on the operational soil moisture analysis, an open-loop run with freely evolving soil moisture, and an experimental run incorporating bias-corrected TMI (TRMM Microwave Imager) derived soil moisture over the southern United States through a nudging scheme using 6-hourly departures. Apart from the soil moisture analysis, the system setup reflects the operational forecast configuration, including the atmospheric 4D-Var analysis. Soil moisture analysed in the nudging experiment is the most accurate estimate when compared against in-situ observations from the Oklahoma Mesonet. The corresponding forecast for 2 m temperature and relative humidity is almost as accurate as in the control experiment. Furthermore, it is shown that the soil moisture analysis influences local weather parameters including the planetary boundary layer height and cloud coverage.
The transferability of the results to other satellite derived soil moisture data sets will be discussed.
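
    A minimal sketch of the nudging idea, assuming a simple additive bias and a fixed gain; this is an illustration of the general scheme, not the IFS analysis:

```python
# At each 6-hourly analysis step, the model's top-layer soil moisture is
# relaxed toward a bias-corrected satellite retrieval by adding a fraction
# of the observation-minus-model departure. All values are hypothetical.

def nudge(model_sm, satellite_sm, bias, gain=0.2):
    """One nudging update using the bias-corrected departure."""
    departure = (satellite_sm - bias) - model_sm
    return model_sm + gain * departure

sm = 0.20  # model volumetric soil moisture (m3/m3)
for obs in [0.30, 0.29, 0.31, 0.30]:   # 6-hourly satellite retrievals
    sm = nudge(sm, obs, bias=0.02)
```

    Over repeated cycles the model state drifts toward the bias-corrected observations without ever jumping to them outright, which is what makes nudging a weak (and robust) constraint.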

  8. Some remarks on the small-distance derivative model

    International Nuclear Information System (INIS)

    Jannussis, A.

    1985-01-01

    In the present work, new expressions for the derivatives at small distances are investigated according to the Gonzales-Diaz model. This model is noncanonical; it is a particular case of the Lie-admissible formulation and has applications to distance and time scales comparable with the Planck dimensions.

  9. Are individual based models a suitable approach to estimate population vulnerability? - a case study

    Directory of Open Access Journals (Sweden)

    Eva Maria Griebeler

    2011-04-01

    Full Text Available European populations of the Large Blue Butterfly Maculinea arion have experienced severe declines in the last decades, especially in the northern part of the species range. This endangered lycaenid butterfly needs two resources for development: flower buds of specific plants (Thymus spp., Origanum vulgare), on which young caterpillars briefly feed, and red ants of the genus Myrmica, whose nests support caterpillars during a prolonged final instar. I present an analytically solvable deterministic model to estimate the vulnerability of populations of M. arion. Results obtained from the sensitivity analysis of this mathematical model (MM) are contrasted with the respective results that had been derived from a spatially explicit individual based model (IBM) for this butterfly. I demonstrate that details of landscape configuration that are neglected by the MM but are easily taken into consideration by the IBM result in a different degree of intraspecific competition of caterpillars on flower buds and within host ant nests. The resulting differences in caterpillar mortalities lead to erroneous estimates of the extinction risk of a butterfly population living in a habitat with low food plant coverage and low abundance of host ant nests. This observation favors the use of an individual based modeling approach over the deterministic approach, at least for the management of this threatened butterfly.

  10. Estimating the global incidence of traumatic spinal cord injury.

    Science.gov (United States)

    Fitzharris, M; Cripps, R A; Lee, B B

    2014-02-01

    Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using a known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). World Health Organization's regional results are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, the estimation of TSCI obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
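
    The population-modelling step can be sketched as a through-the-origin regression of case counts on population at risk; the study values below are invented for illustration and are not the 31 published studies used in the paper:

```python
# Regress observed TSCI case counts on population at risk (single independent
# variable), then apply the fitted rate to a larger population. Numbers are
# hypothetical and only chosen to land near the paper's headline figure.

studies = [  # (population at risk, observed TSCI cases per annum)
    (5_000_000, 110),
    (20_000_000, 470),
    (60_000_000, 1_380),
    (100_000_000, 2_300),
]

# Least-squares slope for the through-the-origin model: cases = rate * population.
num = sum(p * c for p, c in studies)
den = sum(p * p for p, _ in studies)
rate = num / den                         # cases per person per year

cases_per_million = rate * 1_000_000
global_estimate = rate * 6_600_000_000   # approx. 2007 world population
```

    The fitted rate times the world population gives the kind of global extrapolation the abstract describes (about 23 cases per million persons).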

  11. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
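
    A sketch of fitting an exponential learning curve to period-wise accident rates by least squares on a log scale (the rates are illustrative, not the actuarial data; the paper itself uses maximum likelihood and the method of moments rather than this simple fit):

```python
import math

# Model the accident occurrence rate as r(t) = r0 * exp(-k * t) and recover
# r0, k from a line fit to log-rates versus time. All values are invented.

periods = [0, 5, 10, 15]                # years since industry start
rates   = [0.04, 0.012, 0.004, 0.0012]  # accidents per reactor-year (illustrative)

logs = [math.log(r) for r in rates]
n = len(periods)
mt = sum(periods) / n
ml = sum(logs) / n
k = -sum((t - mt) * (l - ml) for t, l in zip(periods, logs)) / \
    sum((t - mt) ** 2 for t in periods)
r0 = math.exp(ml + k * mt)              # fitted initial rate
```

    Here k is the learning rate: the larger it is, the faster the fitted accident frequency declines.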

  12. Learning-curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.

  13. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  14. Multinomial N-mixture models improve the applicability of electrofishing for developing population estimates of stream-dwelling Smallmouth Bass

    Science.gov (United States)

    Mollenhauer, Robert; Brewer, Shannon K.

    2017-01-01

    Failure to account for variable detection across survey conditions constrains progressive stream ecology and can lead to erroneous stream fish management and conservation decisions. In addition to variable detection’s confounding long-term stream fish population trends, reliable abundance estimates across a wide range of survey conditions are fundamental to establishing species–environment relationships. Despite major advancements in accounting for variable detection when surveying animal populations, these approaches remain largely ignored by stream fish scientists, and CPUE remains the most common metric used by researchers and managers. One notable advancement for addressing the challenges of variable detection is the multinomial N-mixture model. Multinomial N-mixture models use a flexible hierarchical framework to model the detection process across sites as a function of covariates; they also accommodate common fisheries survey methods, such as removal and capture–recapture. Effective monitoring of stream-dwelling Smallmouth Bass Micropterus dolomieu populations has long been challenging; therefore, our objective was to examine the use of multinomial N-mixture models to improve the applicability of electrofishing for estimating absolute abundance. We sampled Smallmouth Bass populations by using tow-barge electrofishing across a range of environmental conditions in streams of the Ozark Highlands ecoregion. Using an information-theoretic approach, we identified effort, water clarity, wetted channel width, and water depth as covariates that were related to variable Smallmouth Bass electrofishing detection. Smallmouth Bass abundance estimates derived from our top model consistently agreed with baseline estimates obtained via snorkel surveys. Additionally, confidence intervals from the multinomial N-mixture models were consistently more precise than those of unbiased Petersen capture–recapture estimates due to the dependency among data sets in the
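
    As background to the multinomial N-mixture approach, the classical two-pass removal estimator that it generalizes can be sketched as follows (the counts are hypothetical; the hierarchical model additionally lets detection vary with covariates such as effort, water clarity, width, and depth):

```python
# Two-pass removal estimator (Seber-Le Cren form): abundance and per-pass
# capture probability from the decline in catch between passes (requires
# c1 > c2). Counts are illustrative.

def two_pass_removal(c1, c2):
    """Return (estimated abundance, estimated per-pass capture probability)."""
    n_hat = c1 * c1 / (c1 - c2)   # estimated absolute abundance
    p_hat = 1.0 - c2 / c1         # estimated per-pass capture probability
    return n_hat, p_hat

# 60 fish removed on the first electrofishing pass, 20 on the second
n_hat, p_hat = two_pass_removal(60, 20)
```

    The multinomial N-mixture model replaces this closed-form estimate with a likelihood over the multinomial pass counts, modelled jointly across sites.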

  15. On the equivalence between the Thirring model and a derivative coupling model

    International Nuclear Information System (INIS)

    Gomes, M.; Silva, A.J. da.

    1986-07-01

    The equivalence between the Thirring model and the fermionic sector of the theory of a Dirac field interacting via derivative coupling with two boson fields is analysed. For a certain choice of the parameters, the two models have the same fermionic Green functions. (Author) [pt

  16. Enhanced Wnt signaling improves bone mass and strength, but not brittleness, in the Col1a1(+/mov13) mouse model of type I Osteogenesis Imperfecta.

    Science.gov (United States)

    Jacobsen, Christina M; Schwartz, Marissa A; Roberts, Heather J; Lim, Kyung-Eun; Spevak, Lyudmila; Boskey, Adele L; Zurakowski, David; Robling, Alexander G; Warman, Matthew L

    2016-09-01

    Osteogenesis Imperfecta (OI) comprises a group of genetic skeletal fragility disorders. The mildest form of OI, Osteogenesis Imperfecta type I, is frequently caused by haploinsufficiency mutations in COL1A1, the gene encoding the α1(I) chain of type 1 collagen. Children with OI type I have a 95-fold higher fracture rate compared to unaffected children. Therapies for OI type I in the pediatric population are limited to anti-catabolic agents. In adults with osteoporosis, anabolic therapies that enhance Wnt signaling in bone improve bone mass, and ongoing clinical trials are determining if these therapies also reduce fracture risk. We performed a proof-of-principle experiment in mice to determine whether enhancing Wnt signaling in bone could benefit children with OI type I. We crossed a mouse model of OI type I (Col1a1(+/Mov13)) with a high bone mass (HBM) mouse (Lrp5(+/p.A214V)) that has increased bone strength from enhanced Wnt signaling. Offspring that inherited the OI and HBM alleles had higher bone mass and strength than mice that inherited the OI allele alone. However, OI+HBM and OI mice still had bones with lower ductility compared to wild-type mice. We conclude that enhancing Wnt signaling does not make OI bone normal, but does improve bone properties that could reduce fracture risk. Therefore, agents that enhance Wnt signaling are likely to benefit children and adults with OI type 1. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Foundational workplace safety and health competencies for the emerging workforce☆

    Science.gov (United States)

    Okun, Andrea H.; Guerin, Rebecca J.; Schulte, Paul A.

    2016-01-01

    Introduction Young workers (aged 15–24) suffer disproportionately from workplace injuries, with a nonfatal injury rate estimated to be two times higher than among workers age 25 or over. These workers make up approximately 9% of the U.S. workforce and studies have shown that nearly 80% of high school students work at some point during high school. Although young worker injuries are a pressing public health problem, the critical knowledge and skills needed to prepare youth for safe and healthy work are missing from most frameworks used to prepare the emerging U.S. workforce. Methods A framework of foundational workplace safety and health knowledge and skills (the NIOSH 8 Core Competencies) was developed based on the Health Belief Model (HBM). Results The proposed NIOSH Core Competencies utilize the HBM to provide a framework for foundational workplace safety and health knowledge and skills. An examination of how these competencies and the HBM apply to actions that workers take to protect themselves is provided. The social and physical environments that influence these actions are also discussed. Conclusions The NIOSH 8 Core Competencies, grounded in one of the most widely used health behavior theories, fill a critical gap in preparing the emerging U.S. workforce to be cognizant of workplace risks. Practical applications Integration of the NIOSH 8 Core Competencies into school curricula is one way to ensure that every young person has the foundational workplace safety and health knowledge and skills to participate in, and benefit from, safe and healthy work. National Safety Council and Elsevier Ltd. All rights reserved. PMID:27846998

  18. Derivative interactions and perturbative UV contributions in N Higgs doublet models

    Energy Technology Data Exchange (ETDEWEB)

    Kikuta, Yohei [KEK Theory Center, KEK, Tsukuba (Japan); The Graduate University for Advanced Studies, Department of Particle and Nuclear Physics, Tsukuba (Japan); Yamamoto, Yasuhiro [Universidad de Granada, Deportamento de Fisica Teorica y del Cosmos, Facultad de Ciencias and CAFPE, Granada (Spain)

    2016-05-15

    We study the Higgs derivative interactions in models including an arbitrary number of Higgs doublets. These interactions are generated in two ways: one is higher-order corrections in composite Higgs models, and the other is the integration of heavy scalars and vectors. In the latter case, three-point couplings between the Higgs doublets and these heavy states are the sources of the derivative interactions, and their representations are constrained by the requirement that they couple to the doublets. We explicitly calculate all derivative interactions generated by integrating out. Their degrees of freedom and the conditions to impose the custodial symmetry are discussed. We also study the vector boson scattering processes in a couple of two Higgs doublet models to see experimental signals of the derivative interactions. They are differently affected by each heavy field. (orig.)

  19. Risk estimation using probability machines

    Science.gov (United States)

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
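
    The counterfactual reading of a "probability machine" can be illustrated with a toy nonparametric learner; a k-nearest-neighbour classifier stands in for the random forest here purely for illustration, and all data are invented:

```python
# Estimate conditional probabilities P(y=1 | x) without a parametric model,
# then read off an effect size as a counterfactual difference in predicted
# probabilities at two covariate values.

def knn_prob(train, x, k=3):
    """P(y=1 | x) from the k nearest neighbours (1-D feature)."""
    nearest = sorted(train, key=lambda t: abs(t[0] - x))[:k]
    return sum(y for _, y in nearest) / k

# Toy training data: (feature value, binary outcome)
train = [(0.1, 0), (0.2, 0), (0.3, 0), (0.6, 1), (0.7, 1), (0.9, 1)]

p_low  = knn_prob(train, 0.2)    # predicted risk at x = 0.2
p_high = knn_prob(train, 0.8)    # predicted risk at x = 0.8
risk_difference = p_high - p_low # nonparametric analogue of an effect size
```

    A random forest probability machine plays the same role in the paper, with the advantage of handling many predictors and interactions without a specified model structure.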

  20. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
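
    For a first-order reaction (n = 1), the parameter-estimation exercise reduces to a line fit on a log scale; the concentration data below are synthetic:

```python
import math

# First-order kinetics dC/dt = -k*C has the solution C(t) = C0*exp(-k*t),
# so ln C is linear in t and k is the negative slope. Data are synthetic,
# generated to be consistent with k close to 0.5.

times = [0.0, 1.0, 2.0, 3.0, 4.0]
conc  = [1.00, 0.61, 0.37, 0.22, 0.14]   # synthetic concentrations

logs = [math.log(c) for c in conc]
n = len(times)
mt, ml = sum(times) / n, sum(logs) / n
k = -sum((t - mt) * (l - ml) for t, l in zip(times, logs)) / \
    sum((t - mt) ** 2 for t in times)
c0 = math.exp(ml + k * mt)               # fitted initial concentration
```

    For n = 0 or n = 2 the integrated rate laws are linear in C or 1/C respectively, so the same least-squares machinery applies after the appropriate transformation.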

  1. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several of the available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of width of lanes and distance from noise meter to lanes, these data were input in several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues to apply such models to urban roads are highlighted.

  2. Genetic analysis of high bone mass cases from the BARCOS cohort of Spanish postmenopausal women.

    Directory of Open Access Journals (Sweden)

    Patricia Sarrión

    Full Text Available The aims of the study were to establish the prevalence of high bone mass (HBM) in a cohort of Spanish postmenopausal women (BARCOS) and to assess the contribution of LRP5 and DKK1 mutations and of common bone mineral density (BMD) variants to an HBM phenotype. Furthermore, we describe the expression of several osteoblast-specific and Wnt-pathway genes in primary osteoblasts from two HBM cases. Overall, 0.6% of individuals (10/1600) displayed Z-scores in the HBM range (sum Z-score > 4). While no mutation in the relevant exons of LRP5 was detected, a rare missense change in DKK1 was found (p.Y74F), which cosegregated with the phenotype in a small pedigree. Fifty-five BMD SNPs from Estrada et al. [NatGenet 44:491-501, 2012] were genotyped in the HBM cases to obtain risk scores for each individual. In this small group of samples, Z-scores were found inversely related to risk scores, suggestive of a polygenic etiology. There was a single exception, which may be explained by a rare penetrant genetic variant, counterbalancing the additive effect of the risk alleles. The expression analysis in primary osteoblasts from two HBM cases and five controls suggested that IL6R, DLX3, TWIST1 and PPARG are negatively related to Z-score. One HBM case presented with high levels of RUNX2, while the other displayed very low SOX6. In conclusion, we provide evidence of a lack of LRP5 mutations and of a putative HBM-causing mutation in DKK1. Additionally, we present SNP genotyping and expression results that suggest additive effects of several genes for HBM.
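
    The SNP risk scores mentioned above are additive allele-count scores; a sketch with hypothetical SNP names and effect sizes (not the Estrada et al. panel):

```python
# Weighted genetic risk score: each SNP contributes its risk-allele dosage
# (0, 1 or 2) times a per-allele effect size. SNP identifiers and weights
# below are hypothetical placeholders.

def risk_score(genotypes, weights):
    """Sum over SNPs of dosage * effect size."""
    return sum(weights[snp] * dosage for snp, dosage in genotypes.items())

weights = {"rs_a": 0.05, "rs_b": 0.03, "rs_c": 0.08}   # hypothetical effects
person  = {"rs_a": 2, "rs_b": 1, "rs_c": 0}            # risk-allele dosages
score = risk_score(person, weights)
```

    Under a polygenic model, individuals with high Z-scores would be expected to carry high scores; the abstract's interesting finding is the inverse relationship in this HBM group.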

  3. A generalized linear model for estimating spectrotemporal receptive fields from responses to natural sounds.

    Directory of Open Access Journals (Sweden)

    Ana Calabrese

    2011-01-01

    Full Text Available In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
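
    A generative sketch of the GLM structure described above, with made-up filter values (not fitted STRFs) and a Bernoulli approximation to Poisson spiking per time bin:

```python
import math
import random

# The conditional intensity combines a stimulus filter and a post-spike
# filter; spikes are drawn per bin. Filters, baseline and stimulus are
# all illustrative stand-ins for fitted quantities.

random.seed(0)
T = 200
stim = [random.gauss(0.0, 1.0) for _ in range(T)]   # 1-D stimulus stand-in
k = [0.6, 0.3, 0.1]   # stimulus filter (most recent bin first)
h = [-2.0, -0.5]      # post-spike filter (refractory-like suppression)
b = -2.0              # baseline log-rate

spikes = []
for t in range(T):
    drive = b
    drive += sum(k[i] * stim[t - i] for i in range(len(k)) if t - i >= 0)
    drive += sum(h[j] * (1 if t - 1 - j >= 0 and spikes[t - 1 - j] else 0)
                 for j in range(len(h)))
    rate = math.exp(drive)        # conditional intensity for this bin
    p = 1.0 - math.exp(-rate)     # Bernoulli approximation of a Poisson bin
    spikes.append(1 if random.random() < p else 0)

n_spikes = sum(spikes)
```

    Fitting reverses this process: the filters k, h and baseline b are chosen to maximize the (penalized) likelihood of observed spike trains; the sketch only shows the forward model.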

  4. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
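
    The dimensionality-reduction step can be illustrated with a one-level Haar DWT; the study's wavelet choice and decomposition depth may differ, and the series values below are invented:

```python
import math

# A one-level Haar DWT splits a rainfall series into approximation
# coefficients (kept as the low-dimensional representation to estimate)
# and detail coefficients. Series length must be even here.

def haar_level1(x):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

rain = [0.0, 0.0, 4.0, 8.0, 2.0, 0.0, 0.0, 0.0]   # mm per time step
approx, detail = haar_level1(rain)                 # 8 values -> 4 + 4
```

    The transform is orthonormal, so the total energy of the series is preserved across the two coefficient sets; estimating only the approximation coefficients halves the number of rainfall unknowns per decomposition level.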

  5. Can Airborne Laser Scanning (ALS) and Forest Estimates Derived from Satellite Images Be Used to Predict Abundance and Species Richness of Birds and Beetles in Boreal Forest?

    Directory of Open Access Journals (Sweden)

    Eva Lindberg

    2015-04-01

    Full Text Available In managed landscapes, conservation planning requires effective methods to identify high-biodiversity areas. The objective of this study was to evaluate the potential of airborne laser scanning (ALS) and forest estimates derived from satellite images extracted at two spatial scales for predicting the stand-scale abundance and species richness of birds and beetles in a managed boreal forest landscape. Multiple regression models based on forest data from a 50-m radius (i.e., corresponding to a homogeneous forest stand) had better explanatory power than those based on a 200-m radius (i.e., including also parts of adjacent stands). Bird abundance and species richness were best explained by the ALS variables “maximum vegetation height” and “vegetation cover between 0.5 and 3 m” (both positive). Flying beetle abundance and species richness, as well as epigaeic (i.e., ground-living) beetle richness, were best explained by a model including the ALS variable “maximum vegetation height” (positive) and the satellite-derived variable “proportion of pine” (negative). Epigaeic beetle abundance was best explained by “maximum vegetation height” at 50 m (positive) and “stem volume” at 200 m (positive). Our results show that forest estimates derived from satellite images and ALS data provide complementary information for explaining forest biodiversity patterns. We conclude that these types of remote sensing data may provide an efficient tool for conservation planning in managed boreal landscapes.

  6. Oscillometric blood pressure estimation by combining nonparametric bootstrap with Gaussian mixture model.

    Science.gov (United States)

    Lee, Soojeong; Rajan, Sreeraman; Jeon, Gwanggil; Chang, Joon-Hyuk; Dajani, Hilmi R; Groza, Voicu Z

    2017-06-01

    Blood pressure (BP) is one of the most important vital indicators and plays a key role in determining the cardiovascular activity of patients. This paper proposes a hybrid approach consisting of nonparametric bootstrap (NPB) and machine learning techniques to obtain the characteristic ratios (CR) used in the blood pressure estimation algorithm, to improve the accuracy of systolic blood pressure (SBP) and diastolic blood pressure (DBP) estimates and to obtain confidence intervals (CI). The NPB technique is used to circumvent the requirement for a large sample set when obtaining the CI. A mixture of Gaussian densities is assumed for the CRs, and a Gaussian mixture model (GMM) is chosen to estimate the SBP and DBP ratios. The K-means clustering technique is used to obtain the mixture order of the Gaussian densities. The proposed approach achieves grade "A" under the British Society of Hypertension testing protocol and is superior to the conventional approach based on the maximum amplitude algorithm (MAA), which uses fixed CRs. The proposed approach also yields a lower mean error (ME) and standard deviation of the error (SDE) in the estimates when compared to the conventional MAA method. In addition, CIs obtained through the proposed hybrid approach are narrower, with a lower SDE. Combining the NPB technique with the GMM provides a methodology to derive individualized characteristic ratios. The results show that the proposed approach enhances the accuracy of SBP and DBP estimation and provides narrower confidence intervals for the estimates.
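    The NPB idea can be illustrated with a percentile bootstrap confidence interval for a characteristic ratio. The ratio values below are invented, and the sketch uses the sample mean rather than the paper's GMM machinery:

```python
import random
import statistics

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=1):
    """Nonparametric percentile bootstrap CI for a statistic of the sample."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]  # resample with replacement
        reps.append(stat(resample))
    reps.sort()
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical systolic characteristic ratios from a small set of recordings.
ratios = [0.52, 0.49, 0.55, 0.47, 0.51, 0.53, 0.50, 0.48]
low, high = bootstrap_ci(ratios, statistics.mean)
print(low, high)
```

    The small sample is exactly the situation where resampling, rather than a large-sample normal approximation, is needed to get an interval at all.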

  7. Potential of ALOS2 and NDVI to Estimate Forest Above-Ground Biomass, and Comparison with Lidar-Derived Estimates

    Directory of Open Access Journals (Sweden)

    Gaia Vaglio Laurin

    2016-12-01

    Full Text Available Remote sensing supports carbon estimation, allowing the upscaling of field measurements to large extents. Lidar is considered the premier instrument for estimating above-ground biomass, but data are expensive and collected on demand, with limited spatial and temporal coverage. Data from the previous JERS and ALOS SAR satellites were extensively employed to model forest biomass, with the literature suggesting signal saturation at low-to-moderate biomass values and an influence of plot size on estimate accuracy. The ALOS2 continuity mission, launched in May 2014, produces data with improved features with respect to the former ALOS, such as increased spatial resolution and reduced revisit time. We used ALOS2 backscatter data, also testing its integration with additional features (SAR textures and NDVI from Landsat 8 data), together with ground truth, to model and map above-ground biomass in two mixed forest sites: Tahoe (California) and Asiago (Alps). While texture was useful to improve model performance, the best model was obtained using joined SAR and NDVI (R2 equal to 0.66). In this model only a slight saturation was observed, at higher levels than usually reported in the literature for SAR; the trend requires further investigation, but the model confirmed the complementarity of optical and SAR data types. For comparison purposes, we also generated a biomass map for Asiago using lidar data and considered a previous lidar-based study for Tahoe; in these areas, the observed R2 were 0.92 for Tahoe and 0.75 for Asiago. The quantitative comparison of the carbon stocks obtained with the two methods allows discussion of sensor suitability. The range of local variation captured by lidar is higher than that captured by SAR and NDVI, with the latter showing overestimation. However, this overestimation is very limited for one of the study areas, suggesting that when the purpose is the overall quantification of the stored carbon, especially in areas with high carbon

  8. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
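    A minimal GLUE loop looks roughly like the following. The runoff model, likelihood measure (Nash-Sutcliffe efficiency), behavioral threshold, and data are all illustrative, not those of the study:

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def model(rain, k):
    """Toy linear-reservoir runoff model (illustrative only)."""
    q, out = 0.0, []
    for r in rain:
        q = q + k * (r - q)
        out.append(q)
    return out

rng = random.Random(0)
rain = [0, 5, 12, 3, 0, 0, 8, 2, 0, 0]
obs = model(rain, 0.35)                   # synthetic "observations"

# GLUE: sample parameters, keep the behavioral ones, weight by likelihood.
behavioral = []
for _ in range(5000):
    k = rng.uniform(0.05, 0.95)
    score = nse(obs, model(rain, k))
    if score > 0.9:                       # behavioral threshold
        behavioral.append((score, k))
print(len(behavioral) > 0)
```

    Prediction bounds then come from the likelihood-weighted spread of the behavioral simulations rather than from a single best-fit parameter set.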

  9. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: •SOC and capacity are dually estimated with an online-adapted battery model. •Model identification and dual state estimation are fully decoupled. •Multiple timescales are used to improve estimation accuracy and stability. •The proposed method is verified with lab-scale experiments. •The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve model accuracy and stability. Specifically, the model parameters are adapted online with vector-type recursive least squares (VRLS) to address their different rates of variation. Based on the online-adapted battery model, a Kalman filter (KF)-based SOC estimator and an RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across battery chemistries. The proposed method is also compared with other existing methods on computational cost to reveal its superiority for practical application.
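    The online identification step can be sketched with recursive least squares on a toy battery model V = OCV - R0·I. The model, the numbers, and the forgetting factor are illustrative; the paper uses a vector-type RLS on a richer model:

```python
# RLS with forgetting factor identifying [OCV, R0] from V = OCV - R0 * I.
# All values are illustrative, not from the paper.

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step: theta (params), P (covariance), phi (regressor), y (output)."""
    Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(2))
    K = [p / denom for p in Pphi]                       # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(2))  # prediction error
    theta = [theta[i] + K[i] * err for i in range(2)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

ocv_true, r0_true = 3.7, 0.05
theta = [0.0, 0.0]                        # [OCV, R0] initial guess
P = [[1000.0, 0.0], [0.0, 1000.0]]        # large initial covariance
currents = [0.5, 1.0, -0.5, 2.0, 1.5, -1.0, 0.8, 1.2] * 5
for i_k in currents:
    v_k = ocv_true - r0_true * i_k        # noise-free terminal voltage
    theta, P = rls_update(theta, P, [1.0, -i_k], v_k)
print(theta)  # converges toward [3.7, 0.05]
```

    A separate, slower loop would then feed the identified model into the SOC and capacity estimators, which is the decoupling the highlights describe.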

  10. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining their finite-sample properties, namely the sum of biases, sum of absolute biases, sum of variances, and sum of the mean squared errors of the estimated model parameters. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...
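    The comparison criteria (bias, variance, MSE) can be estimated for any pair of estimators by Monte Carlo. The sketch below compares the sample mean and median on i.i.d. normal data; it is a generic illustration, not the autocorrelated linear model of the study:

```python
import random

def compare_estimators(n=30, reps=4000, mu=5.0, sigma=2.0, seed=7):
    """Monte Carlo bias, variance and MSE of the sample mean vs. median."""
    rng = random.Random(seed)
    results = {}
    for name, est in [("mean", lambda x: sum(x) / len(x)),
                      ("median", lambda x: sorted(x)[len(x) // 2])]:
        vals = []
        for _ in range(reps):
            sample = [rng.gauss(mu, sigma) for _ in range(n)]
            vals.append(est(sample))
        avg = sum(vals) / reps
        var = sum((v - avg) ** 2 for v in vals) / reps
        bias = avg - mu
        results[name] = {"bias": bias, "var": var, "mse": var + bias ** 2}
    return results

res = compare_estimators()
print(res["mean"]["mse"] < res["median"]["mse"])  # mean is more efficient here
```

    The same loop, with the data-generating process replaced by an autocorrelated linear model and the estimators replaced by the candidates under study, reproduces the kind of comparison the abstract reports.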

  11. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys based on a rotating panel design have been carried out over time in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik, BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed to estimate parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. This study estimates the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model in estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.

  12. Currents, HF Radio-derived, Monterey Bay, Normal Model, Zonal, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The data is the zonal component of ocean surface currents derived from High Frequency Radio-derived measurements, with missing values filled in by a normal model....

  13. Default Bayesian Estimation of the Fundamental Frequency

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real....... Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance

  14. Collective animal behavior from Bayesian estimation and probability matching.

    Directory of Open Access Journals (Sweden)

    Alfonso Pérez-Escudero

    2011-11-01

    Full Text Available Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probability matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters, one that each animal assigns to the other animals and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows a better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
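    A toy version of the two-stage model can be written down directly. The posterior form below (a reliability s assigned to other animals, non-social evidence a) follows the general shape described in the abstract, but the specific expression and numbers are illustrative:

```python
import random

def posterior_x(n_x, n_y, s=2.5, a=1.0):
    """Assumed posterior P(x is best) after seeing n_x and n_y animals choose
    x and y; s is the reliability of social information, a the non-social
    evidence for x (illustrative functional form)."""
    return 1.0 / (1.0 + (1.0 / a) * s ** (n_y - n_x))

def choose(n_x, n_y, rng):
    """Probability matching: pick x with probability equal to its posterior."""
    return "x" if rng.random() < posterior_x(n_x, n_y) else "y"

rng = random.Random(42)
picks = [choose(n_x=3, n_y=1, rng=rng) for _ in range(10000)]
frac_x = picks.count("x") / len(picks)
print(frac_x)  # close to posterior_x(3, 1)
```

    Probability matching is what makes the group's choice frequencies track the posterior instead of everyone deterministically picking the single most probable option.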

  15. Top-down NOX Emissions of European Cities Derived from Modelled and Spaceborne Tropospheric NO2 Columns

    Science.gov (United States)

    Verstraeten, W. W.; Boersma, K. F.; Douros, J.; Williams, J. E.; Eskes, H.; Delcloo, A. W.

    2017-12-01

    High near-surface concentrations of nitrogen oxides (NOX = NO + NO2) harm humans and ecosystems and play a key role in tropospheric chemistry. NO2 is an important precursor of tropospheric ozone (O3), which in turn affects the production of the hydroxyl radical controlling the chemical lifetime of key atmospheric pollutants and reactive greenhouse gases. Combustion from industrial, traffic and household activities in large and densely populated urban areas results in high NOX emissions. Accurate mapping of these emissions is essential but difficult, since reported emission factors may differ from real-time emissions by an order of magnitude. Modelled NO2 levels and lifetimes also carry large uncertainties, and overestimation of the chemical lifetime may mask missing NOX chemistry in current chemistry transport models (CTMs). Simultaneously estimating both the NO2 lifetime and the concentrations by applying the Exponentially Modified Gaussian (EMG) method to tropospheric NO2 column line densities should improve surface NOX emission estimates. Here we evaluate whether the EMG methodology, applied to tropospheric NO2 columns simulated by the LOTOS-EUROS (Long Term Ozone Simulation-European Ozone Simulation) CTM, can reproduce the NOX emissions used as model input. First we process the modelled tropospheric NO2 columns for the period April-September 2013 for 21 selected European urban areas under windy conditions (wind speeds from ECMWF, averaged between the surface and 500 m, > 2 m s-1), as well as the accompanying OMI (Ozone Monitoring Instrument) data, providing real-time observation-based estimates of midday NO2 columns. Then we compare the top-down derived surface NOX emissions with the 2011 MACC-III emission inventory, used in the CTM as input to simulate the NO2 columns. For cities where NOX emissions can be assumed to originate from one large source, good agreement is found between the top-down derived

  16. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. Hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty, owing to its plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  17. Global Validation of MODIS Atmospheric Profile-Derived Near-Surface Air Temperature and Dew Point Estimates

    Science.gov (United States)

    Famiglietti, C.; Fisher, J.; Halverson, G. H.

    2017-12-01

    This study validates a method of remote sensing near-surface meteorology that vertically interpolates MODIS atmospheric profiles to surface pressure level. The extraction of air temperature and dew point observations at a two-meter reference height from 2001 to 2014 yields global moderate- to fine-resolution near-surface temperature distributions that are compared to geographically and temporally corresponding measurements from 114 ground meteorological stations distributed worldwide. This analysis is the first robust, large-scale validation of the MODIS-derived near-surface air temperature and dew point estimates, both of which serve as key inputs in models of energy, water, and carbon exchange between the land surface and the atmosphere. Results show strong linear correlations between remotely sensed and in-situ near-surface air temperature measurements (R2 = 0.89), as well as between dew point observations (R2 = 0.77). Performance is relatively uniform across climate zones. The extension of mean climate-wise percent errors to the entire remote sensing dataset allows for the determination of MODIS air temperature and dew point uncertainties on a global scale.
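    Validation statistics of the kind reported here are squared linear correlations between satellite-derived and station values. A minimal sketch with invented temperatures:

```python
# Squared Pearson correlation between station and satellite-derived values
# (made-up data; the study's R^2 came from 114 stations over 2001-2014).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

station_t = [14.2, 21.0, 3.5, 28.1, 9.9, 17.4]   # in-situ air temperature (C)
modis_t = [13.1, 22.4, 2.9, 27.0, 11.2, 16.8]    # satellite-derived (C)
r2 = pearson_r(station_t, modis_t) ** 2
print(round(r2, 2))
```

    Repeating this per climate zone, as the study does, checks whether the agreement is uniform or driven by a few regions.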

  18. Traveltime approximations and parameter estimation for orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-30

    Building anisotropy models is necessary for seismic modeling and imaging. However, anisotropy estimation is challenging due to the trade-off between inhomogeneity and anisotropy. Luckily, we can estimate the anisotropy parameters if we relate them analytically to traveltimes. Using perturbation theory, we have developed traveltime approximations for orthorhombic media as explicit functions of the anellipticity parameters η1, η2, and Δχ in inhomogeneous background media. The parameter Δχ is related to Tsvankin-Thomsen notation and ensures easier computation of traveltimes in the background model. Specifically, our expansion assumes an inhomogeneous ellipsoidal anisotropic background model, which can be obtained from well information and stacking velocity analysis. We have used the Shanks transform to enhance the accuracy of the formulas. A homogeneous-medium simplification of the traveltime expansion provided a nonhyperbolic moveout description of the traveltime that was more accurate than other derived approximations. Moreover, the formulation provides a computationally efficient tool to solve the eikonal equation of an orthorhombic medium, without any constraints on the background model complexity. Although the expansion is based on the factorized representation of the perturbation parameters, smooth variations of these parameters (represented as effective values) provide reasonable results. Thus, this formulation provides a mechanism to estimate the three effective parameters η1, η2, and Δχ. We have also derived Dix-type formulas for orthorhombic media to convert the effective parameters to their interval values.

  19. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

    Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with $k$ classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately $k$/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.

  20. Use of Bayesian Estimates to determine the Volatility Parameter Input in the Black-Scholes and Binomial Option Pricing Models

    Directory of Open Access Journals (Sweden)

    Shu Wing Ho

    2011-12-01

    Full Text Available The valuation of options and many other derivative instruments requires an estimate of ex-ante, or forward-looking, volatility. This paper adopts a Bayesian approach to estimating stock price volatility. We find evidence that, overall, Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put option prices than historical volatility estimates sourced from IVolatility.com (“IVolatility”). Our evidence suggests that the Bayesian approach to estimating volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
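    Once a volatility estimate is available (Bayesian or historical), it enters the Black-Scholes formula as the sigma input. A minimal sketch with illustrative numbers:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call; sigma is the volatility input
    (e.g., a Bayesian posterior-mean estimate)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative: the same option priced with two different volatility estimates.
p_hist = black_scholes_call(S=100, K=100, T=0.5, r=0.02, sigma=0.30)
p_bayes = black_scholes_call(S=100, K=100, T=0.5, r=0.02, sigma=0.25)
print(p_hist > p_bayes)  # call value increases with volatility
```

    The gap between the two prices shows directly why the accuracy of the volatility estimate matters for derivative pricing.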

  1. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
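    The link between a landscape and trait distributions can be sketched for the simplest case: for a diffusion dX = -V'(X)dt + σdW, the stationary solution of the Fokker-Planck equation is p(x) ∝ exp(-2V(x)/σ²). The double-well potential below is a toy two-peak landscape, not a fitted FPK model:

```python
import math

def stationary_density(V, xs, sigma=1.0):
    """Normalized stationary FPK density p(x) ∝ exp(-2 V(x) / sigma^2) on a grid."""
    w = [math.exp(-2.0 * V(x) / sigma ** 2) for x in xs]
    dx = xs[1] - xs[0]
    z = sum(w) * dx                        # normalizing constant (Riemann sum)
    return [wi / z for wi in w]

# Toy double-well "macroevolutionary landscape" with peaks near x = -1 and x = 1.
xs = [-3.0 + 0.01 * i for i in range(601)]
p = stationary_density(lambda x: (x * x - 1.0) ** 2, xs)
total = sum(p) * 0.01
print(round(total, 3))  # integrates to ~1.0
```

    Fitting the FPK model proper means inferring the shape of V (and the diffusion rate) from comparative data on a phylogeny, rather than assuming it as done here.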

  2. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.

  3. Estimating emissions from grout pouring operations

    International Nuclear Information System (INIS)

    Ballinger, M.Y.; Hendrickson, D.W.

    1993-08-01

    Grouting is a method for disposal of low-level radioactive waste in which a contaminated solution is mixed into a slurry, poured into a large storage vault, and then dried, fixing the contaminants within a stable solid matrix. A model (RELEASE) has been developed to estimate the quantity of aerosol created during the pouring process. Information and equations derived from spill experiments were used in the model to determine release fractions. This paper discusses the derivation of the release fraction equation used in the code and the model used to account for gravity settling of particles in the vault. The input and results for a base-case application are shown

  4. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often, competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently available only in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates of invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  5. Evaluating the influence of spatial resolution of Landsat predictors on the accuracy of biomass models for large-area estimation across the eastern USA

    Science.gov (United States)

    Deo, Ram K.; Domke, Grant M.; Russell, Matthew B.; Woodall, Christopher W.; Andersen, Hans-Erik

    2018-05-01

    Aboveground biomass (AGB) estimates for regional-scale forest planning have become cost-effective with free access to satellite data from sensors such as Landsat and MODIS. However, the accuracy of AGB predictions based on passive optical data depends on the spatial resolution and spatial extent of the target area, as fine-resolution (small-pixel) data are associated with smaller coverage and longer repeat cycles than coarse-resolution data. This study evaluated the influence of various spatial resolutions of Landsat-derived predictors on the accuracy of regional AGB models at three sites in the eastern USA: Maine, Pennsylvania-New Jersey, and South Carolina. We combined national forest inventory data with Landsat-derived predictors at spatial resolutions ranging from 30–1000 m to understand the optimal spatial resolution of optical data for large-area (regional) AGB estimation. Ten generic models were developed using data collected in 2014, 2015 and 2016, and the predictions were evaluated (i) at the county level against the estimates of the USFS Forest Inventory and Analysis Program, which relied on the EVALIDator tool and national forest inventory data from the 2009–2013 cycle, and (ii) within a large number of strips (~1 km wide) predicted via LiDAR metrics at 30 m spatial resolution. The county-level estimates by the EVALIDator and Landsat models were highly related (R2 > 0.66), although the R2 varied significantly across sites and predictor resolutions. The mean and standard deviation of county-level estimates followed increasing and decreasing trends, respectively, with models of coarser resolution. The Landsat-based total AGB estimates were larger than the LiDAR-based total estimates within the strips; however, the mean AGB predictions by LiDAR were mostly within one standard deviation of the mean predictions obtained from the Landsat-based model at any of the resolutions. We conclude that satellite data at resolutions up to 1000 m provide

  6. Readability, Suitability and Health Content Assessment of Cancer Screening Announcements in Municipal Newspapers in Japan.

    Science.gov (United States)

    Okuhara, Tsuyoshi; Ishikawa, Hirono; Okada, Hiroko; Kiuchi, Takahiro

    2015-01-01

    The objective of this study was to assess the readability, suitability, and health content of cancer screening information in municipal newspapers in Japan. The Suitability Assessment of Materials (SAM) instrument and the Health Belief Model (HBM) framework were used to assess municipal newspapers published in central Tokyo (23 wards) from January to December 2013. The mean domain SAM scores for content, literacy demand, and layout/typography were rated superior. The SAM scores for interaction with readers, indication of models of desirable actions, and elaboration to enhance readers' self-efficacy were low. According to the HBM coding, messages about medical/clinical severity, social severity, social benefits, and barriers of fear were scarce. The articles were generally well written and suitable. However, learning stimulation/motivation was scarce and the HBM constructs were not fully addressed. Articles could be improved to motivate readers to obtain cancer screening by increasing interaction with readers, introducing models of desirable actions and devices to raise readers' self-efficacy, and addressing perceived barriers of fear (pain and time constraints), perceived severity, and social benefits and losses.

  7. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro

  8. Variance estimation for generalized Cavalieri estimators

    OpenAIRE

    Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen

    2011-01-01

    The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.

  9. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
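The fixed-point update commonly used for this maximum likelihood problem can be sketched in plain NumPy as follows. This is a minimal illustration, not the PyEMMA implementation, and the count matrix is invented for the example; the key idea is that keeping the unnormalized edge weights symmetric enforces detailed balance by construction.

```python
import numpy as np

def reversible_mle(C, n_iter=1000):
    """Self-consistent iteration for the reversible transition-matrix MLE.

    C holds observed transition counts. Each update keeps the unnormalized
    edge-weight matrix X symmetric, so detailed balance holds exactly.
    """
    C = np.asarray(C, dtype=float)
    c = C.sum(axis=1)                  # total counts leaving each state
    X = C + C.T                        # symmetric starting guess
    for _ in range(n_iter):
        x = X.sum(axis=1)
        # update: x_ij <- (c_ij + c_ji) / (c_i/x_i + c_j/x_j)
        X = (C + C.T) / (c[:, None] / x[:, None] + c[None, :] / x[None, :])
    x = X.sum(axis=1)
    T = X / x[:, None]                 # row-stochastic transition matrix
    pi = x / x.sum()                   # stationary distribution of T
    return T, pi

# Toy three-state count matrix (illustrative numbers only)
C = np.array([[90, 10,  0],
              [ 9, 80, 11],
              [ 0, 10, 90]])
T, pi = reversible_mle(C)
```

Because X stays symmetric, the flux matrix pi_i T_ij equals its transpose, i.e. the returned matrix satisfies detailed balance with respect to the returned stationary vector.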

  10. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for the estimation of the amount of solar radiation reaching a particular point on the Earth. In this article, global solar radiation estimation models are taken into account. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed, making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
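Multicollinearity of the kind discussed here is often screened with variance inflation factors (VIFs). The NumPy sketch below uses synthetic predictors invented for illustration, not the paper's data; it simply regresses each column on the others and reports VIF_j = 1 / (1 − R_j²).

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on all remaining columns (an intercept is added internally).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Two nearly collinear predictors plus one independent predictor
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = a + 0.01 * rng.normal(size=200)   # almost a copy of a
c = rng.normal(size=200)              # unrelated
v = vif(np.column_stack([a, b, c]))   # v[0], v[1] huge; v[2] near 1
```

A common rule of thumb flags predictors with VIF above 5 or 10; in this synthetic case the first two columns are flagged and the third is not.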

  11. Analytic Investigation Into Effect of Population Heterogeneity on Parameter Ratio Estimates

    International Nuclear Information System (INIS)

    Schinkel, Colleen; Carlone, Marco; Warkentin, Brad; Fallone, B. Gino

    2007-01-01

    Purpose: A homogeneous tumor control probability (TCP) model has previously been used to estimate the α/β ratio for prostate cancer from clinical dose-response data. For the ratio to be meaningful, it must be assumed that parameter ratios are not sensitive to the type of tumor control model used. We investigated the validity of this assumption by deriving analytic relationships between the α/β estimates from a homogeneous TCP model, ignoring interpatient heterogeneity, and those of the corresponding heterogeneous (population-averaged) model that incorporated heterogeneity. Methods and Materials: The homogeneous and heterogeneous TCP models can both be written in terms of the geometric parameters D₅₀ and γ₅₀. We show that the functional forms of these models are similar. This similarity was used to develop an expression relating the homogeneous and heterogeneous estimates for the α/β ratio. The expression was verified numerically by generating pseudo-data from a TCP curve with known parameters and then using the homogeneous and heterogeneous TCP models to estimate the α/β ratio for the pseudo-data. Results: When the dominant form of interpatient heterogeneity is that of radiosensitivity, the homogeneous and heterogeneous α/β estimates differ. This indicates that the presence of this heterogeneity affects the value of the α/β ratio derived from analysis of TCP curves. Conclusions: The α/β ratio estimated from clinical dose-response data is model dependent: a heterogeneous TCP model that accounts for heterogeneity in radiosensitivity will produce a greater α/β estimate than that resulting from a homogeneous TCP model.
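For context, one common empirical (logistic) TCP parameterization in terms of the geometric parameters is shown below. This is a sketch of the standard form; the paper's exact functional forms may differ.

```latex
\mathrm{TCP}(D) \;=\; \frac{1}{1 + \left( D_{50}/D \right)^{4\gamma_{50}}}
```

where \(D_{50}\) is the dose yielding 50% control probability and \(\gamma_{50}\) is the normalized slope of the dose-response curve at \(D_{50}\).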

  12. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  13. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  14. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  15. Microscopic Derivation of the Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Frank, Rupert; Hainzl, Christian; Seiringer, Robert

    2014-01-01

    We present a summary of our recent rigorous derivation of the celebrated Ginzburg-Landau (GL) theory, starting from the microscopic Bardeen-Cooper-Schrieffer (BCS) model. Close to the critical temperature, GL arises as an effective theory on the macroscopic scale. The relevant scaling limit...

  16. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    The objective was to estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated by using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
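The divergence between the two measures at high prevalence can be seen directly from a 2×2 table. The counts below are invented purely for illustration, not taken from the study:

```python
def prevalence_ratio(a, b, c, d):
    """a/b: cases/non-cases among exposed; c/d: cases/non-cases among unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Common outcome: prevalences 0.60 vs 0.40 -- OR drifts well above PR
pr_common = prevalence_ratio(60, 40, 40, 60)   # 1.5
or_common = odds_ratio(60, 40, 40, 60)         # 2.25

# Rare outcome: prevalences 0.02 vs 0.01 -- OR approximates PR closely
pr_rare = prevalence_ratio(2, 98, 1, 99)
or_rare = odds_ratio(2, 98, 1, 99)
```

This is the rare-disease assumption in miniature: when the outcome is uncommon, the odds ratio is a good stand-in for the prevalence ratio, but when the outcome is common it overstates the association.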

  17. Investigating ESD sensitivity in electrostatic SiGe MEMS

    International Nuclear Information System (INIS)

    Sangameswaran, Sandeep; De Coster, Jeroen; Linten, Dimitri; Scholz, Mirko; Thijs, Steven; Groeseneken, Guido; De Wolf, Ingrid

    2010-01-01

    The sensitivity of electrostatically actuated SiGe microelectromechanical systems to electrostatic discharge events has been investigated in this paper. Torsional micromirrors and RF microelectromechanical systems (MEMS) actuators have been used as two case studies to perform this study. On-wafer electrostatic discharge (ESD) measurement methods, such as the human body model (HBM) and machine model (MM), are discussed. The impact of HBM ESD zap tests on the functionality and behavior of MEMS is explained and the ESD failure levels of MEMS have been verified by failure analysis. It is demonstrated that electrostatic MEMS devices have a high sensitivity to ESD and that it is essential to protect them.

  18. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    This paper presents an uncertainty analysis of component reliability models for independent failures. The current approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analysis are introduced and used in accordance with predisposed requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical estimation of the posterior, which can be approximated by an appropriate probability distribution, in this paper a lognormal distribution, proved to be the most appropriate uncertainty analysis. (author)

  19. Unsteady Vibration Aerodynamic Modeling and Evaluation of Dynamic Derivatives Using Computational Fluid Dynamics

    Directory of Open Access Journals (Sweden)

    Xu Liu

    2015-01-01

    Full Text Available Unsteady aerodynamic system modeling is widely used to solve the dynamic stability problems encountered in aircraft design. In this paper, a single degree-of-freedom (SDF) vibration model and a forced simple harmonic motion (SHM) model for dynamic derivative prediction are developed on the basis of a modified Etkin model. In light of the characteristics of the SDF time-domain solution, the free vibration identification methods for dynamic stability parameters are extended and applied to the time-domain numerical simulation of blunted cone calibration model examples. The dynamic stability parameters identified numerically deviate by no more than 0.15% from those obtained by experimental simulation, confirming the correctness of the SDF vibration model. The acceleration derivatives, rotary derivatives, and combination derivatives of the Army-Navy Spinner Rocket are numerically identified by using the unsteady N-S equations and solving different SHM patterns. Comparison with the experimental results of the Army Ballistic Research Laboratories confirmed the correctness of the SHM model and the dynamic derivative identification. The forced-SHM calculation results are better than those from the engineering approximation of slender-body theory. The SDF vibration model and SHM model for dynamic stability parameters provide a solution to the dynamic stability problems encountered in aircraft design.

  20. Derivation of a northern-hemispheric biomass map for use in global carbon cycle models

    Science.gov (United States)

    Thurner, Martin; Beer, Christian; Santoro, Maurizio; Carvalhais, Nuno; Wutzler, Thomas; Schepaschenko, Dmitry; Shvidenko, Anatoly; Kompter, Elisabeth; Levick, Shaun; Schmullius, Christiane

    2013-04-01

    Quantifying the state and the change of the World's forests is crucial because of their ecological, social and economic value. Concerning their ecological importance, forests provide important feedbacks on the global carbon, energy and water cycles. In addition to their influence on albedo and evapotranspiration, they have the potential to sequester atmospheric carbon dioxide and thus to mitigate global warming. The current state and inter-annual variability of forest carbon stocks remain relatively unexplored, but remote sensing can serve to overcome this shortcoming. While for the tropics wall-to-wall estimates of above-ground biomass have been recently published, up to now there was a lack of similar products covering boreal and temperate forests. Recently, estimates of forest growing stock volume (GSV) were derived from ENVISAT ASAR C-band data for latitudes above 30° N. Utilizing a wood density and a biomass compartment database, a forest carbon density map covering North-America, Europe and Asia with 0.01° resolution could be derived out of this dataset. Allometric functions between stem, branches, root and foliage biomass were fitted and applied for different leaf types (broadleaf, needleleaf deciduous, needleleaf evergreen forest). Additionally, this method enabled uncertainty estimation of the resulting carbon density map. Intercomparisons with inventory-based biomass products in Russia, Europe and the USA proved the high accuracy of this approach at a regional scale (r2 = 0.70 - 0.90). Based on the final biomass map, the forest carbon stocks and densities (excluding understorey vegetation) for three biomes were estimated across three continents. While 40.7 ± 15.7 Gt of carbon were found to be stored in boreal forests, temperate broadleaf/mixed forests and temperate conifer forests contain 24.5 ± 9.4 Gt(C) and 14.5 ± 4.8 Gt(C), respectively. In terms of carbon density, most of the carbon per area is stored in temperate conifer (62.1 ± 20.7 Mg

  1. Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations

    Directory of Open Access Journals (Sweden)

    Lenny Suardi

    2014-08-01

    Full Text Available This paper applies maximum likelihood (ML) approaches to implementing the structural model of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and Longstaff & Schwartz (LS) models, are used in determining these prices, yields, yield spreads, and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as the option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond prices data, and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are implemented on a sample of 24 bond prices in Indonesian corporations during the period 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond prices data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The results show that both models, on average, underestimate the bond prices and overestimate the yields and yield spreads.

  2. Using the Health Belief Model to Explain Mothers' and Fathers' Intention to Participate in Universal Parenting Programs.

    Science.gov (United States)

    Salari, Raziye; Filus, Ania

    2017-01-01

    Using the Health Belief Model (HBM) as a theoretical framework, we studied factors related to parental intention to participate in parenting programs and examined the moderating effects of parent gender on these factors. Participants were a community sample of 290 mothers and 290 fathers of 5- to 10-year-old children. Parents completed a set of questionnaires assessing child emotional and behavioral difficulties and the HBM constructs concerning perceived program benefits and barriers, perceived child problem susceptibility and severity, and perceived self-efficacy. The hypothesized model was evaluated using structural equation modeling. The results showed that, for both mothers and fathers, perceived program benefits were associated with higher intention to participate in parenting programs. In addition, higher intention to participate was associated with lower perceived barriers only in the sample of mothers and with higher perceived self-efficacy only in the sample of fathers. No significant relations were found between intention to participate and perceived child problem susceptibility and severity. Mediation analyses indicated that, for both mothers and fathers, child emotional and behavioral problems had an indirect effect on parents' intention to participate by increasing the level of perceived benefits of the program. As a whole, the proposed model explained about 45 % of the variance in parental intention to participate. The current study suggests that mothers and fathers may be motivated by different factors when making their decision to participate in a parenting program. This finding can inform future parent engagement strategies intended to increase both mothers' and fathers' participation rates in parenting programs.

  3. Estimates for the mixed derivatives of the Green functions on homogeneous manifolds of negative curvature

    Directory of Open Access Journals (Sweden)

    Roman Urban

    2004-12-01

    Full Text Available We consider the Green functions for second-order left-invariant differential operators on homogeneous manifolds of negative curvature, being a semi-direct product of a nilpotent Lie group $N$ and $A=\mathbb{R}^+$. We obtain estimates for mixed derivatives of the Green functions both in the coercive and non-coercive case. The current paper completes the previous results obtained by the author in a series of papers [14,15,16,19].

  4. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad; Hoteit, Ibrahim

    2014-01-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.

  5. Complex step-based low-rank extended Kalman filtering for state-parameter estimation in subsurface transport models

    KAUST Repository

    El Gharamti, Mohamad

    2014-02-01

    The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
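The complex-step method (CSM) used here rests on a one-line identity: for a real-analytic function, f'(x) ≈ Im(f(x + ih))/h, with no subtractive cancellation. The generic sketch below is not the authors' filter code; the test function is chosen arbitrarily for illustration.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """First derivative of f at x via the complex-step method.

    Unlike finite differences, there is no subtraction of nearly equal
    numbers, so the step h can be made tiny without round-off error,
    giving derivatives accurate to machine precision.
    """
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))   # analytic derivative
approx = complex_step_derivative(f, x0)
```

The price, as the abstract notes, is that the model code must be "complexified", i.e. able to propagate complex arguments through every operation.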

  6. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is applied to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative effect of rubber price on stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
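A two-component normal mixture of the kind fitted here is usually estimated with the EM algorithm. The compact NumPy sketch below runs on synthetic data, not the paper's price series, and the initialization strategy is a simple choice for illustration:

```python
import numpy as np

def norm_pdf(x, mu, s):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def em_two_gaussians(x, n_iter=200):
    """EM maximum likelihood fit of a two-component normal mixture."""
    x = np.asarray(x, dtype=float)
    # crude initialization from the sample
    w = 0.5
    mu1, mu2 = x.min(), x.max()
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        p1 = w * norm_pdf(x, mu1, s1)
        p2 = (1.0 - w) * norm_pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2)

# Synthetic data: equal mixture of N(-3, 1) and N(3, 1)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
w, (mu1, s1), (mu2, s2) = em_two_gaussians(x)
```

Each iteration cannot decrease the observed-data likelihood, which is the property that makes EM a convenient route to the ML estimates the abstract relies on.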

  7. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
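A typical analogy-based CER is a power law in a physical parameter, fitted in log space from historical programs. The numbers below are invented purely for illustration (they are not from any real cost database):

```python
import numpy as np

# Hypothetical historical data generated from cost = a * mass^b
# with a = 12 and b = 0.8, standing in for past programs.
mass = np.array([100.0, 250.0, 500.0, 1200.0, 3000.0])   # kg
cost = 12.0 * mass ** 0.8                                 # cost units

# Fit the CER by ordinary least squares in log-log space:
# log(cost) = log(a) + b * log(mass)
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)

def predict_cost(m):
    """Apply the fitted cost-estimating relationship to a new mass."""
    return a * m ** b
```

In practice the residual scatter of such a fit, and how far the new system sits from the historical analogues (and from their organizational culture, as the abstract stresses), determine how much the CER prediction can be trusted.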

  8. Normal Mode Derived Models of the Physical Properties of Earth's Outer Core

    Science.gov (United States)

    Irving, J. C. E.; Cottaar, S.; Lekic, V.; Wu, W.

    2017-12-01

    Earth's outer core, the largest reservoir of metal in our planet, is composed of an iron alloy of uncertain composition. Its dynamical behaviour is responsible for the generation of Earth's magnetic field, with convection driven both by thermal and chemical buoyancy fluxes. Existing models of the seismic velocity and density of the outer core exhibit some variation, and there are only a small number of models which aim to represent the outer core's density. It is therefore important that we develop a better understanding of the physical properties of the outer core. Though most of the outer core is likely to be well mixed, it is possible that the uppermost outer core is stably stratified: it may be enriched in light elements released during the growth of the solid, iron-enriched, inner core; by elements dissolved from the mantle into the outer core; or by exsolution of compounds previously dissolved in the liquid metal which will eventually be swept into the mantle. The stratified layer may host MAC or Rossby waves, and it could impede communication between the chemically differentiated mantle and outer core, including screening out some of the geodynamo's signal. We use normal mode center frequencies to estimate the physical properties of the outer core in a Bayesian framework. We estimate the mineral physical parameters needed to best produce velocity and density models of the outer core which are consistent with the normal mode observations. We require that our models satisfy realistic physical constraints. We create models of the outer core with and without a distinct uppermost layer and assess the importance of this region. Our normal mode-derived models are compared with observations of body waves which travel through the outer core. In particular, we consider SmKS waves, which are especially sensitive to the uppermost outer core and are therefore an important way to understand the robustness of our models.

  9. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment...

  10. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  11. Estimates for mild solutions to semilinear Cauchy problems

    Directory of Open Access Journals (Sweden)

    Kresimir Burazin

    2014-09-01

    Full Text Available The existence (and uniqueness) results on mild solutions of abstract semilinear Cauchy problems in Banach spaces are well known. Following the results of Tartar (2008) and Burazin (2008) in the case of decoupled hyperbolic systems, we give an alternative proof, which enables us to derive an estimate on the mild solution and its time of existence. The nonlinear term in the equation is allowed to be time-dependent. We discuss the optimality of the derived estimate by testing it on three examples: the linear heat equation, the semilinear heat equation that models dynamic deflection of an elastic membrane, and the semilinear Schrödinger equation with time-dependent nonlinearity, which appear in the modelling of numerous physical phenomena.
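For context, the mild solution the estimate concerns is the standard Duhamel (integral) formulation: for a semilinear Cauchy problem with a generator A of a C_0-semigroup, a mild solution satisfies the fixed-point equation

```latex
% Semilinear Cauchy problem: u'(t) = A u(t) + f(t, u(t)), \quad u(0) = u_0,
% with A generating a C_0-semigroup (S(t))_{t \ge 0}. A mild solution satisfies
u(t) = S(t)\, u_0 + \int_0^t S(t - s)\, f\bigl(s, u(s)\bigr)\, \mathrm{d}s .
```

Estimates on the mild solution and its existence time are typically obtained from this formula via Gronwall-type arguments.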

  12. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased reliability and unavoidably leads to system failure. Continuous evaluation of State-of-Health (SoH) is therefore essential, at a minimum to achieve the lifetime specified by the manufacturer and, ideally, to extend it. A precondition for lifetime extension, however, is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, the related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both are able to model stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. The estimation of SoH here is conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate SoH estimation but involves a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as the model parameters are defined by a multi-objective optimization procedure, and its prediction accuracy does not depend strongly on the estimated SoH.
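The simplest instance of the SoH-to-RUL relation the record discusses is trend extrapolation: fit a degradation trend to the SoH history and extrapolate to a failure threshold. This is a generic sketch, not either of the paper's two approaches; the function name, linear trend, and threshold are illustrative assumptions.

```python
import numpy as np

def estimate_rul(times, soh, failure_threshold):
    """Extrapolate a linear fit of the State-of-Health history to the
    failure threshold and return the estimated Remaining Useful Lifetime."""
    slope, intercept = np.polyfit(times, soh, deg=1)
    if slope >= 0:
        return np.inf  # no degradation trend detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Synthetic SoH history: starts at 1.0, degrades by 0.02 per cycle
t = np.arange(0, 10)
soh = 1.0 - 0.02 * t
rul = estimate_rul(t, soh, failure_threshold=0.7)
```

Here the threshold is crossed at cycle 15 and the last observation is at cycle 9, so the estimated RUL is 6 cycles.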

  13. On a derivation of the Salam-Weinberg model

    International Nuclear Information System (INIS)

    Squires, E.J.

    1979-01-01

    It is shown how the graded Lie-algebra structure of a recent derivation of the Salam-Weinberg model might arise from the form of allowed transformations on the lepton Lagrangian in a 6-dimensional space. The possibility that the model might allow two identically coupled leptonic sectors, and others in which the chiralities are reversed, is discussed. (Auth.)

  14. Surface Soil Moisture Memory Estimated from Models and SMAP Observations

    Science.gov (United States)

    He, Q.; Mccoll, K. A.; Li, C.; Lu, H.; Akbar, R.; Pan, M.; Entekhabi, D.

    2017-12-01

    Soil moisture memory (SMM), loosely defined as the time taken by soil to forget an anomaly, has been shown to be important in land-atmosphere interaction. Several metrics exist for the SMM timescale: for example, a timescale based on the autocorrelation of the soil moisture time series, a timescale that ignores the time ordering of the series, and a timescale that considers only soil moisture increments. Recently, a new timescale based on the 'Water Cycle Fraction' (McColl et al., 2017), which accounts for the impact of precipitation on soil moisture memory, has been proposed but not yet fully evaluated globally. In this study, we compared the surface SMM derived from SMAP observations with that from land surface model simulations (i.e., the SMAP Nature Run (NR) provided by the Goddard Earth Observing System, version 5) (Reichle et al., 2014). Three timescale metrics were used to quantify the surface SMM: T0, based on the autocorrelation of the soil moisture time series; deT0, based on the autocorrelation of the detrended series; and tHalf, based on the Water Cycle Fraction. The comparisons indicate that: (1) there are large gaps between the T0 derived from SMAP and that from NR; (2) the gaps narrow for deT0, in which the seasonality of surface soil moisture is removed with a moving-average filter; and (3) the tHalf estimated from SMAP is much closer to that from NR. The results demonstrate that surface SMM can vary dramatically among metrics, and that the memory derived from a land surface model differs from that derived from SMAP observations. tHalf, which accounts for the impact of precipitation, may be a good choice for quantifying surface SMM and has high potential in studies of land-atmosphere interactions. References: McColl, K.A., S.H. Alemohammad, R. Akbar, A.G. Konings, S. Yueh, D. Entekhabi. The Global Distribution and Dynamics of Surface Soil Moisture, Nature Geoscience, 2017. Reichle, R., L. Qing, D.L. Gabrielle, A. Joe. 
The "SMAP_Nature_v03" Data

  15. One loop beta functions and fixed points in higher derivative sigma models

    International Nuclear Information System (INIS)

    Percacci, Roberto; Zanusso, Omar

    2010-01-01

    We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.
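The qualitative picture in the record (four-derivative couplings flowing to zero, the two-derivative coupling flowing to a nonzero limit) can be illustrated with toy one-loop flows; the beta functions below are illustrative caricatures, not the paper's actual results.

```python
# Toy renormalization-group flows (illustrative only):
#   dg/dt   = -a * g^2                  -> g -> 0        (asymptotic freedom)
#   dlam/dt = -b * lam * (lam - lam_*)  -> lam -> lam_*  (nontrivial fixed point)
a, b, lam_star = 1.0, 1.0, 0.5
g, lam = 1.0, 0.1
dt = 1e-3
for _ in range(200_000):  # Euler-integrate the flow up to t = 200
    g += dt * (-a * g * g)
    lam += dt * (-b * lam * (lam - lam_star))
```

The asymptotically free coupling decays like 1/(a t), while the other coupling is attracted to its fixed-point value lam_*, mirroring the "asymptotically safe" scenario the record mentions.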

  16. River Discharge Estimation by Using Altimetry Data and Simplified Flood Routing Modeling

    Directory of Open Access Journals (Sweden)

    Tommaso Moramarco

    2013-08-01

    Full Text Available A methodology is proposed to estimate the discharge along rivers, even poorly gauged ones, taking advantage of water level measurements derived from satellite altimetry. The procedure is based on the application of the Rating Curve Model (RCM), a simple method allowing for the estimation of the flow conditions in a river section using only water levels recorded at that site and the discharges observed at another upstream section. Altimetry data from the European Remote-Sensing Satellite 2 (ERS-2) and the Environmental Satellite (ENVISAT) are used to provide the time series of water levels needed for the application of RCM. In order to evaluate the usefulness of the approach, the results are compared with those obtained by applying an empirical formula that allows discharge estimation from remotely sensed hydraulic information. To test the proposed procedure, a 236 km reach of the Po River is investigated, for which five in situ stations and four satellite tracks are available. Results show that RCM represents the discharge appropriately, and its performance is better than that of the empirical formula, although the latter does not require upstream hydrometric data. Given its simple formal structure, the proposed approach can be conveniently utilized at ungauged sites where only a survey of the cross-section is needed.
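The RCM itself relates levels at one site to discharges upstream; a simpler, classical relative is the power-law rating curve Q = a (h - h0)^b fitted to paired stage-discharge data at a single section. The sketch below fits that generic curve in log space (it is not the RCM; h0 is assumed known, and all values are synthetic):

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0):
    """Fit Q = a * (h - h0)^b by linear regression in log space
    (h0, the stage of zero flow, is assumed known for simplicity)."""
    X = np.log(np.asarray(stage) - h0)
    Y = np.log(np.asarray(discharge))
    b, log_a = np.polyfit(X, Y, deg=1)
    return np.exp(log_a), b

# Synthetic stage-discharge pairs generated from a = 2.0, b = 1.5, h0 = 0.3
rng = np.random.default_rng(0)
h = np.linspace(0.5, 3.0, 40)
q = 2.0 * (h - 0.3) ** 1.5 * np.exp(0.02 * rng.standard_normal(h.size))
a_hat, b_hat = fit_rating_curve(h, q, h0=0.3)
```

With satellite altimetry supplying the stage series, a fitted curve of this kind converts water levels into discharge estimates at otherwise ungauged sections.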

  17. Control of anode supported SOFCs (solid oxide fuel cells): Part I. mathematical modeling and state estimation within one cell

    International Nuclear Information System (INIS)

    Amedi, Hamid Reza; Bazooyar, Bahamin; Pishvaie, Mahmoud Reza

    2015-01-01

    In this paper, a 3-dimensional mathematical model for one cell of an anode-supported SOFC (solid oxide fuel cell) is presented. The model is derived from the partial differential equations representing the conservation laws for ionic and electronic charges, mass, energy, and momentum. The model is implemented to fully characterize the steady-state operation of the cell with a countercurrent flow pattern of fuel and air. The model is also used to compare countercurrent with concurrent flow patterns in terms of thermal stress (temperature distribution) and quality of operation (current density). Results reveal that the steady-state cell performance curve and simulation outputs qualitatively match experimental data from the literature. Results also demonstrate that the countercurrent flow pattern leads to a more even distribution of temperature and a more uniform current density along the cell, and is thus more enduring and superior to the concurrent flow pattern. Afterward, the full 3-dimensional model is used for state estimation in place of a real cell. To estimate the states, the model is simplified to a 1-dimensional model along the flow streams. This simplified model includes uncertainty (because of the simplifying model assumptions), noise, and disturbance (because of measurements). The behaviors of the extended and ensemble Kalman filters as observers are evaluated in terms of estimating the states and filtering the noise. Results demonstrate that, like the extended Kalman filter, the ensemble Kalman filter properly estimates the states with an ensemble of 20 members. - Highlights: • A 3-dimensional model for one cell of an SOFC (solid oxide fuel cell) is presented. • Higher voltages and thermal stress in countercurrent than concurrent flow pattern. • State estimation of the cell is examined by ensemble and extended Kalman filters. • Ensemble Kalman filter with 20 members is as good as the extended Kalman filter.
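The ensemble Kalman filter used above works by propagating a cloud of state samples and updating each member against a perturbed observation, with the gain computed from the ensemble covariance. A minimal scalar sketch with 20 members, as in the record (the toy dynamics and noise levels are illustrative, not the SOFC model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar linear toy system: x_{k+1} = F x_k + w,  y_k = x_k + v
F, Q, R = 0.95, 0.01, 0.04
N_ens = 20                                  # 20 ensemble members
x_true = 5.0
ens = rng.normal(0.0, 1.0, N_ens)           # deliberately poor initial ensemble

for _ in range(100):
    # Truth and measurement
    x_true = F * x_true + rng.normal(0, np.sqrt(Q))
    y = x_true + rng.normal(0, np.sqrt(R))
    # Forecast: propagate each member with process noise
    ens = F * ens + rng.normal(0, np.sqrt(Q), N_ens)
    # Analysis: Kalman gain from the ensemble variance
    P = np.var(ens, ddof=1)
    K = P / (P + R)
    # Update each member against a perturbed observation
    ens = ens + K * (y + rng.normal(0, np.sqrt(R), N_ens) - ens)

x_hat = ens.mean()
```

Unlike the extended Kalman filter, no Jacobian of the model is needed; the covariance information comes entirely from the sample spread, which is why the approach scales to the discretized 1-dimensional SOFC model.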

  18. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to their properties when the sample size is infinite, although ...
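The classical comparison in this setting is ordinary least squares against a feasible GLS procedure such as Cochrane-Orcutt quasi-differencing. A minimal sketch with an AR(1) error and an autoregressive regressor (the specific coefficients and the one-step Cochrane-Orcutt variant are illustrative assumptions, not the record's five estimators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model: y = b0 + b1 * x + u, with AR(1) errors u_t = rho * u_{t-1} + e_t
# and an autoregressive regressor x_t = 0.8 * x_{t-1} + eps_t
n, rho, b0, b1 = 500, 0.8, 1.0, 2.0
x = np.zeros(n); u = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
    u[t] = rho * u[t - 1] + rng.standard_normal()
y = b0 + b1 * x + u

# OLS (consistent here, but inefficient under autocorrelation)
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# One Cochrane-Orcutt step: estimate rho from the OLS residuals,
# then run OLS on the quasi-differenced data
r = y - X @ beta_ols
rho_hat = np.dot(r[:-1], r[1:]) / np.dot(r[:-1], r[:-1])
Xs = X[1:] - rho_hat * X[:-1]
ys = y[1:] - rho_hat * y[:-1]
beta_co = np.linalg.lstsq(Xs, ys, rcond=None)[0]
```

Quasi-differencing whitens the error term, so the second regression satisfies the classical assumptions more closely and yields smaller-variance slope estimates.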

  19. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

    Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have tended to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
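The rectangular-pulse idea is easy to sketch: storms are represented as pulses of random duration and constant intensity whose superposition is aggregated to a discrete rainfall series. The sketch below is a stripped-down Poisson rectangular-pulse simulator, without the storm/cell clustering that distinguishes the actual BLRP model; all parameter values are illustrative.

```python
import numpy as np

def simulate_rect_pulses(T_hours, rate, mean_dur, mean_int, dt, rng):
    """Superpose rectangular pulses with Poisson arrivals and exponential
    durations/intensities, aggregated to dt-hour rainfall totals (mm)."""
    n_bins = int(T_hours / dt)
    rain = np.zeros(n_bins)
    n_pulses = rng.poisson(rate * T_hours)
    starts = rng.uniform(0, T_hours, n_pulses)
    durs = rng.exponential(mean_dur, n_pulses)
    ints = rng.exponential(mean_int, n_pulses)   # mm/h
    for s, d, i in zip(starts, durs, ints):
        e = min(s + d, T_hours)
        a, b = int(s / dt), min(int(e / dt), n_bins - 1)
        for k in range(a, b + 1):
            lo, hi = k * dt, (k + 1) * dt
            rain[k] += i * max(0.0, min(e, hi) - max(s, lo))
    return rain

rng = np.random.default_rng(0)
rain = simulate_rect_pulses(T_hours=24 * 365, rate=0.1, mean_dur=2.0,
                            mean_int=1.5, dt=1.0, rng=rng)
```

The mean hourly depth is approximately rate x mean_dur x mean_int = 0.3 mm/h; censored calibration would fit such a model only to the heavy portion of the observed record.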

  20. Calibration of the Diameter Distribution Derived from the Area-based Approach with Individual Tree-based Diameter Estimates Using the Airborne Laser Scanning

    Science.gov (United States)

    Xu, Q.; Hou, Z.; Maltamo, M.; Tokola, T.

    2015-12-01

    Diameter distributions of trees are important indicators of current forest stand structure and future dynamics. A new method is proposed in this study to combine the diameter distribution derived from the area-based approach (ABA) with the diameter distribution derived from individual tree detection (ITD) in order to obtain more accurate forest stand attributes. Since dominant trees can be reliably detected and measured from the Lidar data via ITD, the focus of the study is to retrieve the suppressed trees (trees missed by ITD) from the ABA. Replacement and histogram matching were employed at the plot level to retrieve the suppressed trees. A cut point was detected from the ITD-derived diameter distribution of each sample plot to distinguish dominant trees from suppressed trees. The results showed that the calibrated diameter distributions were more accurate in terms of the error index and the estimates of the entire growing stock. Compared with the better performer of the ABA and the ITD, the calibrated diameter distributions decreased the relative RMSE of the estimated entire growing stock, saw log, and pulpwood fractions by 2.81, 3.05, and 7.73 percentage points, respectively. Calibration improved the estimation of the pulpwood fraction significantly, resulting in a negligible bias in the estimated entire growing stock.
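The replacement strategy above can be sketched as a simple histogram splice: below the cut point, use the ABA-predicted diameter classes (which include suppressed trees); above it, keep the ITD detections. The diameter classes, counts, and cut point below are hypothetical, and the actual method involved plot-level matching details this sketch omits.

```python
import numpy as np

def calibrate_diameters(aba_counts, itd_counts, bins, cut_point):
    """Combine an ABA-predicted diameter histogram (reliable for small,
    suppressed trees) with ITD detections (reliable above the cut point)."""
    below = np.asarray(bins)[:-1] < cut_point   # classes of suppressed trees
    return np.where(below, aba_counts, itd_counts)

# Hypothetical 2-cm diameter classes from 2 to 40 cm (stems per plot)
bins = np.arange(2, 42, 2)
aba = np.array([5, 9, 12, 14, 12, 10, 8, 6, 5, 4, 3, 2, 2, 1, 1, 1, 0, 0, 0])
itd = np.array([0, 0, 1, 3, 6, 9, 8, 6, 5, 4, 3, 2, 2, 1, 1, 1, 0, 0, 0])
calibrated = calibrate_diameters(aba, itd, bins, cut_point=14.0)
```

Note how the ITD histogram is empty in the smallest classes (undetected suppressed trees); the splice restores that portion from the ABA prediction.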