WorldWideScience

Sample records for include developing estimates

  1. Including estimates of the future in today's financial statements

    OpenAIRE

    Mary Barth

    2006-01-01

    This paper explains why the question is how, not if, today's financial statements should include estimates of the future. Including such estimates is not new, but their use is increasing. This increase results primarily because standard setters believe asset and liability measures that reflect current economic conditions and up-to-date expectations of the future will result in more useful information for making economic decisions, which is the objective of financial reporting. This is why sta...

  2. Age estimation in the living: Transition analysis on developing third molars.

    Science.gov (United States)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels; Willems, Guy; Boldsen, Jesper

    2015-12-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimation. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate, including a 95% prediction interval. The aim of this study was to evaluate the performance of TA in the living on a full set of third molar scores. A cross-sectional sample of 854 panoramic radiographs, homogeneously distributed by sex and age (15.0-24.0 years), was randomly split in two: a reference sample for obtaining age estimates, including a 95% prediction interval, according to TA; and a validation sample to test the age estimates against actual age. The mean inaccuracy of the age estimates was 1.82 years (±1.35) in males and 1.81 years (±1.44) in females. The mean bias was 0.55 years (±2.20) in males and 0.31 years (±2.30) in females. Of the actual ages in the validation sample, 93.7% of the males and 95.9% of the females fell within the 95% prediction interval. Moreover, at a sensitivity and specificity of 0.824 and 0.937 in males and 0.814 and 0.827 in females, TA performs well in differentiating between being a minor and being an adult. Although its accuracy does not outperform other methods, TA provides unbiased age estimates which minimize the risk of wrongly estimating minors as adults. Furthermore, when corrected ad hoc, TA produces appropriate prediction intervals. As TA allows expansion with additional traits, i.e. stages of development of the left hand-wrist and the clavicle, it has great potential for future, more accurate and reproducible age estimates, including an estimated probability of having attained the legal age limit of 18 years.
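    The coverage check reported in this record (whether actual ages fall inside the 95% prediction intervals) can be sketched as follows. The interval construction here uses a simple normal approximation rather than the paper's full transition-analysis likelihood, and all numbers are hypothetical:

```python
def prediction_interval(mean_age, sd, z=1.96):
    # 95% interval under an assumed normal error model; TA itself derives
    # intervals from a likelihood over correlated third-molar stage scores.
    return (mean_age - z * sd, mean_age + z * sd)

def coverage(actual_ages, intervals):
    # Fraction of actual ages that fall inside their prediction intervals.
    hits = sum(lo <= age <= hi for age, (lo, hi) in zip(actual_ages, intervals))
    return hits / len(actual_ages)

# Hypothetical validation cases: (estimated mean age, sd) and actual ages.
estimates = [(18.2, 1.4), (20.1, 1.5), (16.8, 1.3)]
actuals = [19.0, 23.9, 17.1]
ivs = [prediction_interval(m, s) for m, s in estimates]
cov = coverage(actuals, ivs)   # 2 of the 3 actual ages are covered
```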

  3. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    Science.gov (United States)

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free mass (FFM) of the whole body and of body segments in children, including overweight individuals. The FFM and impedance (Z) values of the arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analysis, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)²/Z (the BI index) for the whole body and each of the 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R² = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, and no systematic error. Applying each equation derived in the model-development group to the cross-validation and overweight groups produced no significant differences between the measured and predicted FFM values and no systematic errors, with the exception that arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is thus useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application to estimating arm FFM in overweight individuals requires a certain modification.
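    The core of the method in this record is an ordinary least-squares fit of measured FFM against the BI index (length²/Z). A minimal sketch with hypothetical calibration pairs (not the study's data):

```python
def fit_ols(x, y):
    # Ordinary least-squares fit of y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def bi_index(length_cm, z_ohm):
    # BI index: segment length squared divided by impedance.
    return length_cm ** 2 / z_ohm

# Hypothetical (segment length cm, impedance ohm) pairs with DXA-measured
# FFM in kg.
idx = [bi_index(l, z) for l, z in [(60, 300), (65, 280), (70, 260), (75, 250)]]
ffm = [9.0, 11.0, 13.5, 15.5]
a, b = fit_ols(idx, ffm)
predicted = [a + b * i for i in idx]
```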

  4. COSTMODL - AN AUTOMATED SOFTWARE DEVELOPMENT COST ESTIMATION TOOL

    Science.gov (United States)

    Roush, G. B.

    1994-01-01

    The cost of developing computer software consumes an increasing portion of many organizations' budgets. As this trend continues, the capability to estimate the effort and schedule required to develop a candidate software product becomes increasingly important. COSTMODL is an automated software development estimation tool which fulfills this need. Tailoring COSTMODL to any organization's particular environment can yield a significant reduction in the risk of cost overruns and failed projects. This user-customization capability is unmatched by any other available estimation tool. COSTMODL accepts a description of a software product to be developed and computes estimates of the effort required to produce it, the calendar schedule required, and the distribution of effort and staffing as a function of the defined set of development life-cycle phases. This is accomplished by the five cost estimation algorithms incorporated into COSTMODL: the NASA-developed KISS model; the Basic, Intermediate, and Ada COCOMO models; and the Incremental Development model. This choice affords the user the ability to handle project complexities ranging from small, relatively simple projects to very large projects. Unique to COSTMODL are the ability to redefine the life-cycle phases of development and the capability to display a graphic representation of the optimum organizational structure required to develop the subject project, along with required staffing levels and skills. The program is menu-driven and mouse-sensitive, with an extensive context-sensitive help system that makes it possible for a new user to easily install and operate the program and to learn the fundamentals of cost estimation without prior training or separate documentation. The implementation of these functions, along with the customization feature, in one program makes COSTMODL unique within the industry. COSTMODL was written for IBM PC compatibles, and it requires Turbo Pascal 5.0 or later and Turbo
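    Of the five models named, Basic COCOMO is simple enough to sketch from its published form (Boehm's 1981 constants below); the NASA KISS and Incremental Development models are proprietary to the tool and are not reproduced here:

```python
# Basic COCOMO (Boehm, 1981): effort = a * KLOC^b person-months,
# schedule = c * effort^d calendar months.
COCOMO_BASIC = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COCOMO_BASIC[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

# A 32 KLOC organic-mode project: roughly 91 person-months over 14 months.
effort, months = basic_cocomo(32, "organic")
```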

  5. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagrams (RBD), Markov models, and Event Tree Analysis (ETA) have been widely used and accepted in some industries. However, there are limitations when we apply them to complicated robot systems that include software, such as intelligent reactor inspection robots. Therefore, an expert's judgment plays an important role in estimating the reliability of a complicated system in practice, because experts can deal with diverse evidence related to reliability and perform an inference based on it. The method proposed in this paper combines qualitative and quantitative evidence and performs inference like an expert. Furthermore, unlike human experts, it does so in a formal and quantitative way, through the use of Bayesian Nets (BNs)
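    The evidence-combination idea in this record can be sketched with a naive-Bayes stand-in for the paper's full Bayesian-network inference: each piece of evidence (a qualitative review finding, a test result) updates a prior failure probability. The independence assumption and all numbers are hypothetical:

```python
def combine_evidence(prior_fail, likelihoods):
    # Posterior failure probability after conditionally independent evidence
    # items; likelihoods holds (P(e | fail), P(e | ok)) pairs.
    p_fail, p_ok = prior_fail, 1.0 - prior_fail
    for p_e_fail, p_e_ok in likelihoods:
        p_fail *= p_e_fail
        p_ok *= p_e_ok
    return p_fail / (p_fail + p_ok)

# Hypothetical evidence: one design-review finding and one test outcome,
# both somewhat more probable if the system is failure-prone.
posterior = combine_evidence(0.10, [(0.8, 0.2), (0.6, 0.5)])
```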

  6. Dental Age Estimation (DAE): Data management for tooth development stages including the third molar. Appropriate censoring of Stage H, the final stage of tooth development.

    Science.gov (United States)

    Roberts, Graham J; McDonald, Fraser; Andiappan, Manoharan; Lucas, Victoria S

    2015-11-01

    The final stage of dental development of third molars is usually helpful in indicating whether or not a subject is aged over 18 years. A complication is that the final stage of development is unbounded at its upper limit, and investigators usually select an inappropriate upper age limit, or censor point, for this tooth development stage. The literature was searched for data sets appropriate for dental age estimation that provided the count (n), the mean (x¯), and the standard deviation (sd) for each of the tooth development stages. Demirjian Stages G and H were used for this study. Upper and lower limits of the Stage G and Stage H data were calculated by limiting the data to plus or minus three standard deviations from the mean. The upper border of Stage H was limited by appropriate censoring at the maximum value for Stage G. The maximum age at attainment for Stage H from published data ranged from 22.60 years to 34.50 years. These data were explored to demonstrate how censoring provides an estimate of the correct maximum age for the final stage, Stage H, of 21.64 years for UK Caucasians. This study shows that confining the data array of individual tooth development stages to ±3 sd provides a reliable and logical way of censoring the data for tooth development stages with a Normal distribution. For Stage H this is inappropriate, as it is unbounded at its upper limit; a censored data array for Stage H using percentile values is appropriate instead. This increases the reliability of using third molar Stage H alone to determine whether or not an individual is over 18 years old. For Stage H, individual ancestral groups should be censored using the same technique.
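    The censoring rule this record describes, ±3 sd limits for a Normally distributed stage, with the unbounded Stage H truncated at the Stage G maximum, can be sketched as follows. The summary statistics are hypothetical, not the paper's UK data:

```python
def stage_bounds(mean, sd, k=3.0):
    # ±k·sd limits for a stage whose age-at-attainment is roughly Normal.
    return mean - k * sd, mean + k * sd

# Hypothetical Stage G summary statistics.
g_mean, g_sd = 17.5, 1.4
g_lo, g_hi = stage_bounds(g_mean, g_sd)   # Stage G upper limit: 21.7 years

# Stage H is unbounded above, so censor its observed ages at the Stage G
# maximum before summarizing, as the paper proposes.
stage_h_ages = [18.2, 19.0, 20.5, 23.1, 34.5]
censored = [age for age in stage_h_ages if age <= g_hi]
```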

  7. Best estimate radiation heat transfer model developed for TRAC-BD1

    International Nuclear Information System (INIS)

    Spore, J.W.; Giles, M.M.; Shumway, R.W.

    1981-01-01

    A best estimate radiation heat transfer model for analysis of BWR fuel bundles has been developed and compared with 8 x 8 fuel bundle data. The model includes surface-to-surface and surface-to-two-phase fluid radiation heat transfer. A simple method of correcting for anisotropic reflection effects has been included in the model

  8. Estimates and implications of the costs of compliance with biosafety regulations in developing countries.

    Science.gov (United States)

    Falck-Zepeda, Jose; Yorobe, Jose; Husin, Bahagiawati Amir; Manalo, Abraham; Lokollo, Erna; Ramon, Godfrey; Zambrano, Patricia; Sutrisno

    2012-01-01

    Estimating the cost of compliance with biosafety regulations is important, as it helps developers focus their investments in product development. We provide estimates of the cost of compliance for a set of technologies in Indonesia, the Philippines and other countries. These costs vary from US$100,000 to US$1.7 million. They are estimates of regulatory costs only and do not include product development or deployment costs. Cost estimates need to be compared with the potential gains when the technology is introduced in these countries and with the gains in knowledge accumulated during the biosafety assessment process. Although the cost of compliance is important, time delays and uncertainty are even more important and may have an adverse impact on innovations reaching farmers.

  9. Development of realtime cognitive state estimator

    International Nuclear Information System (INIS)

    Takahashi, Makoto; Kitamura, Masashi; Yoshikawa, Hidekazu

    2004-01-01

    A real-time cognitive state estimator based on a set of physiological measures has been developed in order to provide valuable information on human behavior during interaction through a man-machine interface. An artificial neural network has been adopted to categorize the cognitive states, using the qualitative physiological data pattern as the input. Laboratory experiments, in which the subjects' cognitive states were intentionally controlled by the task presented, were performed to obtain training data sets for the neural network. The developed system has been shown to be capable of estimating cognitive state with high accuracy, and its real-time estimation capability has also been confirmed through data processing experiments. (author)

  10. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact make them integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite-element numerical modeling for the impedances has been developed based on the edge element method, whereas in the CSAMT case the efforts were focused on the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research has focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition
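    A robust M-estimation step of the kind this record describes can be sketched as iteratively reweighted least squares (IRLS) with Huber weights. This toy straight-line fit (synthetic data, not impedance tensors) shows an outlier being down-weighted:

```python
def huber_weight(r, k=1.345):
    # Huber weight: 1 for small residuals, k/|r| beyond the threshold.
    return 1.0 if abs(r) <= k else k / abs(r)

def robust_line(x, y, iters=50):
    # Straight-line fit by iteratively reweighted least squares.
    a, b = 0.0, 0.0
    for _ in range(iters):
        w = [huber_weight(yi - (a + b * xi)) for xi, yi in zip(x, y)]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / \
            sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        a = my - b * mx
    return a, b

# Clean trend y = 2x with one gross outlier standing in for abnormal data;
# plain OLS on these points would give a slope of 10.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 50.0]
a, b = robust_line(x, y)
```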

  11. Development of a kinetic model, including rate constant estimations, on iodine and caesium behaviour in the primary circuit of LWR's under accident conditions

    International Nuclear Information System (INIS)

    Alonso, A.; Buron, J.M.; Fernandez, S.

    1991-07-01

    In this report, a kinetic model has been developed with the aim of reproducing the chemical phenomena that take place in a flowing system containing steam, hydrogen, and iodine and caesium vapours. The work is divided into two parts. The first part consists of the estimation, through Activated Complex Theory, of the reaction rate constants for the chosen reactions, and the development of the kinetic model based on the concept of an ideal tubular chemical reactor. The second part deals with the application of the model to several cases, taken from the Phase B 'Scoping Calculations' of the Phebus-FP Project (sequence AB) and the SFD-ST and SFD1.1 experiments. The main conclusion of this work is that the assumption of instantaneous equilibrium could be inaccurate for estimating the iodine and caesium species distribution under severe accident conditions

  12. Development of Numerical Estimation in Young Children

    Science.gov (United States)

    Siegler, Robert S.; Booth, Julie L.

    2004-01-01

    Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…

  13. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using … The comparison of model prediction uncertainties with the reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column …, the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce the most critical uncertainties.

  14. Loss of life estimation – Review, developments and challenges

    Directory of Open Access Journals (Sweden)

    Jonkman, S.N. (Bas)

    2016-01-01

    This paper presents an overview and review of methods developed for loss of life estimation in flood risk assessment. These methods range from empirical to simulation based approaches that are used to support flood risk analyses and emergency management. Similarities and differences between the modelling approaches, input and output types and applications are discussed. Challenges to the field are summarized, including empirical data collection for validation and benchmarking and comparison studies.

  15. Handbook for quick cost estimates. A method for developing quick approximate estimates of costs for generic actions for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Ball, J.R.

    1986-04-01

    This document is a supplement to a "Handbook for Cost Estimating" (NUREG/CR-3971) and provides specific guidance for developing "quick" approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.

  16. Handbook for quick cost estimates. A method for developing quick approximate estimates of costs for generic actions for nuclear power plants

    International Nuclear Information System (INIS)

    Ball, J.R.

    1986-04-01

    This document is a supplement to a "Handbook for Cost Estimating" (NUREG/CR-3971) and provides specific guidance for developing "quick" approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.

  17. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools

    Science.gov (United States)

    Wang, Jianzhong Jay; Datta, Koushik; Landis, Michael R. (Technical Monitor)

    2002-01-01

    This paper describes the development of a life-cycle cost (LCC) estimating methodology for air traffic control Decision Support Tools (DSTs) under development by the National Aeronautics and Space Administration (NASA), using a combination of parametric, analogy, and expert opinion methods. There is no one standard methodology and technique that is used by NASA or by the Federal Aviation Administration (FAA) for LCC estimation of prospective Decision Support Tools. Some of the frequently used methodologies include bottom-up, analogy, top-down, parametric, expert judgement, and Parkinson's Law. The developed LCC estimating methodology can be visualized as a three-dimensional matrix where the three axes represent coverage, estimation, and timing. This paper focuses on the three characteristics of this methodology that correspond to the three axes.

  18. Technology Estimating: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    Science.gov (United States)

    Cole, Stuart K.; Reeves, John D.; Williams-Byrd, Julie A.; Greenberg, Marc; Comstock, Doug; Olds, John R.; Wallace, Jon; DePasquale, Dominic; Schaffer, Mark

    2013-01-01

    NASA is investing in new technologies spanning 14 primary technology roadmap areas as well as aeronautics. Understanding the cost of researching and developing these technologies, and the time it takes to increase their maturity, is important to the support of ongoing and future NASA missions. Overall, technology estimating can help guide technology investment strategies, improve evaluation of technology affordability, and aid decision support. This research summarizes the framework development of a Technology Estimating process in which four technology roadmap areas were selected for study. The framework includes definitions of terms, a discussion of narrowing the focus from the 14 NASA Technology Roadmap areas to four, and further refinement to include technologies in the TRL range of 2 to 6. Also included is a discussion of the 20 unique technology parameters that were initially identified, evaluated, and subsequently reduced for use in characterizing these technologies. A description of the data acquisition effort and the criteria established for data quality is provided. The findings include the gaps identified and a description of a spreadsheet-based estimating tool initiated as part of the Technology Estimating process.

  19. Estimating software development project size, using probabilistic ...

    African Journals Online (AJOL)

    Estimating software development project size, using probabilistic techniques. ... of managing the size of software development projects by Purchasers (Clients) and Vendors (Development ...

  20. Modeling of magnetic fields on a cylindrical surface and associated parameter estimation for development of a size sensor

    International Nuclear Information System (INIS)

    Zhang, Song; Rajamani, Rajesh

    2016-01-01

    This paper develops analytical sensing principles for estimation of circumferential size of a cylindrical surface using magnetic sensors. An electromagnet and magnetic sensors are used on a wearable band for measurement of leg size. In order to enable robust size estimation during rough real-world use of the wearable band, three estimation algorithms are developed based on models of the magnetic field variation over a cylindrical surface. The magnetic field models developed include those for a dipole and for a uniformly magnetized cylinder. The estimation algorithms used include a linear regression equation, an extended Kalman filter and an unscented Kalman filter. Experimental laboratory tests show that the size sensor in general performs accurately, yielding sub-millimeter estimation errors. The unscented Kalman filter yields the best performance that is robust to bias and misalignment errors. The size sensor developed herein can be used for monitoring swelling due to fluid accumulation in the lower leg and a number of other biomedical applications. (paper)
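    The dipole model this record mentions has a standard closed form, and the on-axis case inverts cleanly for distance, which is the essence of size sensing from field strength. The magnetic moment and geometry below are illustrative, not the paper's values:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def dipole_field(moment, r, theta):
    # Field magnitude of a magnetic dipole at distance r (m), polar angle theta.
    return (MU0 * moment / (4 * math.pi * r ** 3)) * math.sqrt(
        1 + 3 * math.cos(theta) ** 2)

def distance_from_axial_field(moment, b_measured):
    # Invert the on-axis (theta = 0) law B = mu0*m / (2*pi*r^3) for r.
    return (MU0 * moment / (2 * math.pi * b_measured)) ** (1.0 / 3.0)

# Round trip with hypothetical values.
moment = 0.5                               # A*m^2
r_true = 0.05                              # 5 cm magnet-to-sensor distance
b_axial = dipole_field(moment, r_true, 0.0)
r_est = distance_from_axial_field(moment, b_axial)
```

In the paper's setting these point measurements feed a Kalman filter rather than a direct inversion, which is what gives robustness to bias and misalignment.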

  1. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Directory of Open Access Journals (Sweden)

    Xudong Zhang

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.

  2. A hierarchical estimator development for estimation of tire-road friction coefficient.

    Science.gov (United States)

    Zhang, Xudong; Göhlich, Dietmar

    2017-01-01

    The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified "magic formula" tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method.
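    The GRNN at the core of the lower estimator in these two records is, in essence, a Gaussian-kernel-weighted average of stored training targets (the Nadaraya-Watson form). A minimal sketch with hypothetical slip-to-friction training pairs, not the paper's trained network:

```python
import math

def grnn_predict(x_train, y_train, x, sigma=0.5):
    # General regression neural network: Gaussian-kernel weighted average
    # of the training targets.
    w = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Hypothetical mapping from an input feature (e.g. slip ratio) to the
# friction coefficient.
slip = [0.02, 0.05, 0.10]
mu = [0.30, 0.60, 0.90]
estimate = grnn_predict(slip, mu, 0.05)
```

A real deployment would use a vector of inputs and a tuned smoothing parameter sigma; the appeal noted in the abstract is that no explicit tire model needs to be stored.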

  3. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    Science.gov (United States)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level (TRL) is less than 6. NASA's Technology Roadmap consists of 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TAs): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This report continues the technology estimating efforts of 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research also addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  4. Improved best estimate plus uncertainty methodology, including advanced validation concepts, to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, C.; Williams, B.; Hemez, F.; Atamturktur, S.H.; McClure, P.

    2011-01-01

    Research highlights: → The best estimate plus uncertainty methodology (BEPU) is one option in the licensing of nuclear reactors. → The challenges for extending the BEPU method for fuel qualification for an advanced reactor fuel are primarily driven by schedule, the need for data, and the sufficiency of the data. → In this paper we develop an extended BEPU methodology that can potentially be used to address these new challenges in the design and licensing of advanced nuclear reactors. → The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. → The methodology includes a formalism to quantify an adequate level of validation (predictive maturity) with respect to existing data, so that required new testing can be minimized, saving cost by demonstrating that further testing will not enhance the quality of the predictive tools. - Abstract: Many evolving nuclear energy technologies use advanced predictive multiscale, multiphysics modeling and simulation (M and S) capabilities to reduce the cost and schedule of design and licensing. Historically, the role of experiments has been as a primary tool for the design and understanding of nuclear system behavior, while M and S played the subordinate role of supporting experiments. In the new era of multiscale, multiphysics computational-based technology development, this role has been reversed. The experiments will still be needed, but they will be performed at different scales to calibrate and validate the models leading to predictive simulations for design and licensing. Minimizing the required number of validation experiments produces cost and time savings. The use of multiscale, multiphysics models introduces challenges in validating these predictive tools - traditional methodologies will have to be modified to address these challenges. This paper gives the basic aspects of a methodology that can potentially be used to address these new challenges in

  5. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is continuously replenished over time. These energy sources are divided into five principal renewable categories: the sun, the wind, flowing water, biomass and heat from within the earth. According to studies carried out by research institutes, about 25% of new generation will be produced by Renewable Energy Sources (RESs) in the near future. It is therefore necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) algorithm that includes RESs and several practical considerations. The proposed algorithm is based on a combination of the Nelder-Mead simplex search and Particle Swarm Optimization (PSO), called PSO-NM, and estimates load and RES output values with a Weighted Least-Squares (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs) and Under Load Tap Changer (ULTC) transformers, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison with other evolutionary optimization algorithms such as the original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO) and the Genetic Algorithm (GA) on a test system demonstrates that PSO-NM is highly effective and efficient for DSE problems. (author)
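As a rough illustration of the hybrid approach described above (not the paper's network model or implementation), the sketch below runs a crude particle swarm over a weighted least-squares objective and then polishes the best particle with SciPy's Nelder-Mead simplex; the measurement model and all values are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem: recover two unknown injections x from noisy
# measurements z = H @ x + noise, using a weighted least-squares objective.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
sigma = np.array([0.05, 0.05, 0.1])          # measurement std devs
z = H @ x_true + rng.normal(0.0, sigma)

def wls(x):
    r = (z - H @ x) / sigma                  # weighted residuals
    return float(r @ r)

# Stage 1: crude particle swarm search for a good starting point.
n_particles, iters = 20, 50
pos = rng.uniform(0.0, 5.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pval = np.array([wls(p) for p in pos])
gbest = pbest[pval.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([wls(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]

# Stage 2: Nelder-Mead simplex refinement from the swarm's best point.
res = minimize(wls, gbest, method="Nelder-Mead")
print(res.x)   # close to x_true
```

The swarm provides a derivative-free global search while the simplex step sharpens the final estimate, which is the usual division of labor in PSO-NM hybrids.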

  6. The Underground Economy and GDP Estimation in Developing ...

    African Journals Online (AJOL)

    The estimation of gross domestic product (GDP) in most developing countries carries a lot of meaning; most often it is very low. This could be true or false. The existence of an underground economy in these economies tends to undermine the estimation of GDP in developing economies, because the size of such an economy is ...

  7. Estimation of πd-Interactions in Organic Conductors Including Magnetic Anions

    Science.gov (United States)

    Mori, Takehiko; Katsuhara, Mao

    2002-03-01

    Magnetic interactions in organic conductors including magnetic anions, such as λ-(BETS)2FeCl4 and κ-(BETS)2FeX4 [X = Cl and Br], are estimated from intermolecular overlap integrals; the overlaps between anions afford Jdd, and those between anions and donors give Jπ d. From this, the most stable spin alignments are decided, and such quantities as the Néel and Weiss temperatures, as well as the magnitude of spin polarization on the π-molecules are evaluated on the basis of the mean-field theory of πd-systems. The calculation is extended to several other πd-conductors, which are classified depending on the relative magnitudes of the direct dd- and indirect πd-interactions.
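The mean-field quantities mentioned above follow standard textbook relations; as a hedged sketch (the paper's actual parametrization may differ), the Weiss temperature produced by an exchange coupling J between z neighbouring spins S, and a superexchange-type coupling estimated from intermolecular overlap, take the forms:

```latex
% Mean-field Weiss temperature for z neighbours of spin S coupled by J:
\theta = \frac{2 z J S(S+1)}{3 k_B}
% Superexchange-type coupling estimated from a transfer integral t_{\pi d}
% (proportional to the intermolecular overlap) and the pi-d level
% separation \Delta E_{\pi d}:
J_{\pi d} \propto \frac{|t_{\pi d}|^{2}}{\Delta E_{\pi d}}
```

Relations of this type are how overlap integrals between donors and anions translate into the Jdd and Jπd couplings that fix the Néel and Weiss temperatures in the abstract.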

  8. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.
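The modeling idea above, that an expression covariate sharpens an age model built on size alone, can be illustrated with a toy least-squares comparison (synthetic data invented for the example, not the paper's L. sericata measurements or its generalized additive models):

```python
import numpy as np

# Synthetic illustration: does adding an expression covariate to a
# size-based least-squares age model reduce the in-sample error?
rng = np.random.default_rng(2)
n = 200
age = rng.uniform(0.0, 10.0, n)               # development time (days)
size = 2.0 * age + rng.normal(0.0, 2.0, n)    # size: noisy proxy for age
expr = 0.5 * age + rng.normal(0.0, 0.5, n)    # gene expression level

def rmse(X):
    # Ordinary least squares of age on an intercept plus the columns of X.
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, age, rcond=None)
    return float(np.sqrt(np.mean((age - X1 @ beta) ** 2)))

print(rmse(size))                              # size only
print(rmse(np.column_stack([size, expr])))     # size + expression: lower
```

The second model's lower residual error mirrors the precision gain the authors report when expression data are added to stage and size.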

  9. Feasibility of including fugitive PM-10 emissions estimates in the EPA emissions trends report

    International Nuclear Information System (INIS)

    Barnard, W.; Carlson, P.

    1990-09-01

    The report describes the results of Part 2 of a two-part study. Part 2 was to evaluate the feasibility of developing regional emission trends for PM-10. Part 1 was to evaluate the feasibility of developing VOC emission trends on a regional and temporal basis. These studies are part of the effort underway to improve the national emission trends. Part 1 is presented in a separate report. The categories evaluated for the feasibility of developing regional emissions estimates were: unpaved roads, paved roads, wind erosion, agricultural tilling, construction activities, feedlots, burning, landfills, mining and quarrying, unpaved parking lots, unpaved airstrips and storage piles

  10. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    International Nuclear Information System (INIS)

    Unal, Cetin; Williams, Brian; McClure, Patrick; Nelson, Ralph A.

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was as the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models, leading to predictive simulations. The cost-saving goals of these programs require that the number of validation experiments be minimized. The use of multi-scale, multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extension of the modeling and simulation (M and S) capabilities for application to full-scale systems. The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for

  11. Improved best estimate plus uncertainty methodology including advanced validation concepts to license evolving nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Unal, Cetin [Los Alamos National Laboratory; Williams, Brian [Los Alamos National Laboratory; Mc Clure, Patrick [Los Alamos National Laboratory; Nelson, Ralph A [IDAHO NATIONAL LAB

    2010-01-01

    Many evolving nuclear energy programs plan to use advanced predictive multi-scale, multi-physics simulation and modeling capabilities to reduce cost and time from design through licensing. Historically, the role of experiments was as the primary tool for the design and understanding of nuclear system behavior, while modeling and simulation played the subordinate role of supporting experiments. In the new era of multi-scale, multi-physics computation-based technology development, experiments will still be needed, but they will be performed at different scales to calibrate and validate models, leading to predictive simulations. The cost-saving goals of these programs require that the number of validation experiments be minimized. The use of multi-scale, multi-physics models introduces complexities in the validation of predictive tools, and traditional methodologies will have to be modified to address these issues. This paper lays out the basic aspects of a methodology that can potentially be used to address these new challenges in the design and licensing of evolving nuclear technology programs. The main components of the proposed methodology are verification, validation, calibration, and uncertainty quantification. An enhanced calibration concept is introduced and is accomplished through data assimilation. The goal is to enable best-estimate prediction of system behavior in both normal and safety-related environments. Achieving this goal requires the additional steps of estimating the domain of validation and quantifying the uncertainties that allow results to be extended to areas of the validation domain not directly tested with experiments, which might include extension of the modeling and simulation (M&S) capabilities for application to full-scale systems.
The new methodology suggests a formalism to quantify an adequate level of validation (predictive maturity) with respect to required selective data so that required testing can be minimized for cost
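The calibration-plus-uncertainty-quantification loop described above can be illustrated with a deliberately minimal sketch (an assumed linear model y = a·x with Gaussian noise, as a toy stand-in for the data assimilation step; it is not the authors' formalism):

```python
import numpy as np

# Toy calibration + uncertainty quantification: fit one model parameter to
# "experimental" data, then predict with an uncertainty band at an untested
# condition (a caricature of extending the validation domain).
rng = np.random.default_rng(1)
x = np.linspace(0.5, 5.0, 20)          # experimental conditions
a_true, noise_sd = 3.0, 0.2
y = a_true * x + rng.normal(0.0, noise_sd, x.size)   # "experiments"

# Calibrate the parameter a by least squares, with its standard error.
a_hat = float(x @ y) / float(x @ x)
a_sd = noise_sd / np.sqrt(float(x @ x))

# Best-estimate prediction plus a ~95% band outside the tested range.
x_new = 6.0
pred = a_hat * x_new
band = 1.96 * np.sqrt((x_new * a_sd) ** 2 + noise_sd ** 2)
print(pred, band)
```

The growth of the band away from the calibration data is the quantitative counterpart of the abstract's point that extrapolation beyond the tested validation domain carries extra, quantifiable uncertainty.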

  12. Development of Cost Estimation Methodology of Decommissioning for PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il; Yoo, Yeon Jae; Lim, Yong Kyu; Chang, Hyeon Sik; Song, Geun Ho

    2013-01-01

    The permanent closure of a nuclear power plant must be conducted under strict laws and with thorough planning, including cost and schedule estimation, because the plant is heavily contaminated with radioactivity. In Korea, there are two types of nuclear power plant: the pressurized light water reactor (PWR) and the pressurized heavy water reactor (PHWR), known as the CANDU reactor. Moreover, 50% of the operating nuclear power plants in Korea are PWRs originally designed by CE (Combustion Engineering). There is experience with the decommissioning of Westinghouse-type PWRs, but little with CE-type PWRs. Therefore, the purpose of this paper is to develop a cost estimation methodology and to evaluate the technical level of decommissioning for application to CE-type PWRs, based on systems engineering. Through the study, the following conclusions are obtained: · Based on systems engineering, the decommissioning work can be classified into Set, Subset, Task, Subtask and Work cost units. · The Set and Task structures are grouped into 29 Sets and 15 Tasks, respectively. · The final result shows the cost and project schedule for project control and risk management. · The present results are preliminary and should be refined and improved based on modeling and cost data reflecting available technology and current costs such as labor and waste data
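A cost breakdown of the Set/Task/Work kind described above is naturally computed as a hierarchical roll-up; the sketch below shows the idea (structure names and figures are illustrative only, not the paper's data):

```python
# Hierarchical decommissioning cost roll-up: Sets contain Tasks, Tasks
# contain leaf Work costs; each level sums its children.
wbs = {
    "Reactor vessel removal": {                                   # Set
        "Segmentation": {"labor": 1.2e6, "equipment": 0.4e6},     # Task
        "Packaging": {"labor": 0.3e6, "waste": 0.9e6},
    },
    "Decontamination": {
        "Chemical decon": {"labor": 0.5e6, "materials": 0.2e6},
    },
}

def rollup(node):
    # Recursively sum leaf costs so every level reports its own total.
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

for set_name, tasks in wbs.items():
    print(set_name, rollup(tasks))
print("TOTAL", rollup(wbs))
```

Keeping the hierarchy explicit is what lets a tool report totals at the Set level for project control while still tracing every figure back to a Work item.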

  13. Estimating the cost of cervical cancer screening in five developing countries

    Directory of Open Access Journals (Sweden)

    Goldie Sue J

    2006-08-01

    Abstract Background: Cost-effectiveness analyses (CEAs) can provide useful information to policymakers concerned with the broad allocation of resources, as well as to local decision makers choosing between different options for reducing the burden of a single disease. For the latter, it is important to use country-specific data when possible and to represent cost differences between countries that might make one strategy more or less attractive than another locally. As part of a CEA of cervical cancer screening in five developing countries, we supplemented limited primary cost data by developing other estimation techniques for the direct medical and non-medical costs associated with alternative screening approaches using one of three initial screening tests: simple visual screening, HPV DNA testing, and cervical cytology. Here, we report estimation methods and results for three cost areas in which data were lacking. Methods: To supplement direct medical costs, including staff, supplies, and equipment depreciation using country-specific data, we used alternative techniques to quantify cervical cytology and HPV DNA laboratory sample processing costs. We used a detailed quantity-and-price approach whose face validity was compared to an adaptation of a US laboratory estimation methodology. This methodology was also used to project annual sample processing capacities for each laboratory type. The cost of sample transport from the clinic to the laboratory was estimated using spatial models. A plausible range for the cost of patient time spent seeking and receiving screening was estimated using formal sector employment and wages alone, as well as using both formal and informal sector participation and country-specific minimum wages. Data sources included primary data from country-specific studies, international databases, international prices, and expert opinion.
Costs were standardized to year 2000 international dollars using inflation adjustment and
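The quantity-and-price approach mentioned above reduces to multiplying per-sample input quantities by unit prices and amortizing equipment over its useful throughput; a minimal sketch (line items and figures are illustrative, not the study's country-specific data):

```python
# Quantity-and-price estimate of a per-sample laboratory processing cost.
items = [
    # (description, quantity per sample, unit price in I$)
    ("technician time, hours", 0.25, 8.00),
    ("reagents, tests", 1.00, 1.50),
    ("slides and consumables", 1.00, 0.40),
]
# Equipment depreciation spread over its lifetime sample throughput.
equipment_cost, lifetime_samples = 20000.0, 100000

per_sample = (sum(qty * price for _, qty, price in items)
              + equipment_cost / lifetime_samples)
print(round(per_sample, 2))   # I$ per processed sample
```

Projecting annual capacity is the same bookkeeping run in reverse: divide available staff hours by the hours consumed per sample.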

  14. Best-estimate analysis development for BWR systems

    International Nuclear Information System (INIS)

    Sutherland, W.A.; Alamgir, M.; Kalra, S.P.; Beckner, W.D.

    1986-01-01

    The Full Integral Simulation Test (FIST) Program is a three pronged approach to the development of best-estimate analysis capability for BWR systems. An experimental program in the FIST BWR system simulator facility extends the LOCA data base and adds operational transients data. An analytical method development program with the BWR-TRAC computer program extends the modeling of BWR specific components and major interfacing systems, and improves numerical techniques to reduce computer running time. A method qualification program tests TRAC-B against experiments run in the FIST facility and extends the results to reactor system applications. With the completion and integration of these three activities, the objective of a best-estimate analysis capability has been achieved. (author)

  15. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    Science.gov (United States)

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-09-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rain rate. The optical sensors used in that study are designed for operating the windscreen wipers and showed promising results for rainfall measurement purposes. Their measurement accuracy has been quantified in laboratory experiments. Taking those errors explicitly into account, the main objective of this study is to investigate the benefit of using RCs for estimating areal rainfall. For that, computer experiments are carried out in which radar rainfall is considered as the reference and the other sources of data, i.e., RCs and rain gauges, are extracted from the radar data. Comparing the quality of areal rainfall estimation by RCs with rain gauges and reference data helps to quantify the benefit of the RCs. The value of this additional source of data is assessed not only for areal rainfall estimation performance but also for use in hydrological modeling. Considering measurement errors derived from laboratory experiments, the results show that the RCs provide useful additional information for areal rainfall estimation as well as for hydrological modeling. Moreover, when larger uncertainties were assumed for the RCs, they remained useful up to a certain error level for areal rainfall estimation and discharge simulation.
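One simple way to see why many noisy RainCar readings can still help a few accurate gauges is inverse-variance weighting: each observation is weighted by the reciprocal of its error variance. The sketch below uses synthetic numbers (invented for the example, not the study's radar-based experiment):

```python
import numpy as np

# Combine a few accurate gauges with many noisy RainCar readings into one
# areal-mean estimate via inverse-variance weighting.
rng = np.random.default_rng(3)
true_rain = 5.0                                      # mm/h areal mean
gauges = true_rain + rng.normal(0.0, 0.2, 3)         # few but accurate
raincars = true_rain + rng.normal(0.0, 1.5, 100)     # many but noisy

obs = np.concatenate([gauges, raincars])
var = np.concatenate([np.full(3, 0.2**2), np.full(100, 1.5**2)])
weights = (1.0 / var) / np.sum(1.0 / var)            # inverse-variance weights
estimate = float(weights @ obs)
print(estimate)   # close to 5.0
```

The combined variance, 1/Σ(1/σᵢ²), is smaller than that of either subset alone, which is the statistical core of the paper's finding that RCs add value up to a certain error level.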

  16. Estimating the economic effects of cystic echinococcosis: Uruguay, a developing country with upper-middle income.

    Science.gov (United States)

    Torgerson, P R; Carmona, C; Bonifacino, R

    2000-10-01

    Cost-benefit analyses, run before the commencement of a programme to control a parasitic disease, should include estimates of the economic losses attributable to the disease. Uruguay, a middle-income, developing country, has a recent history of persistent problems with cystic echinococcosis, in both its human population and livestock. The economic effects in Uruguay of this disease, caused by the larval stage of the canine tapeworm Echinococcus granulosus, have now been evaluated. Data on the incidence of the disease, in humans and livestock, were used to construct cost estimates. The estimated minimum cost (U.S.$2.9 million/year) was based on the condemnation costs of infected offal together with the actual costs of the hospital treatment of the human cases. The estimate of the maximum cost (U.S.$22.1 million/year) also included the production losses resulting from lower livestock efficiency and the reduced income of individuals with morbidity attributable to the disease.

  17. Development of dose rate estimation system for FBR maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Iizawa, Katsuyuki [Japan Nuclear Cycle Development Inst., Tsuruga Head Office, International Cooperation and Technology Development Center, Tsuruga, Fukui (Japan); Takeuchi, Jun; Yoshikawa, Satoru [Hitachi Engineering Company, Ltd., Hitachi, Ibaraki (Japan); Urushihara, Hiroshi [Ibaraki Hitachi Information Service Co., Ltd., Omika, Ibaraki (Japan)

    2001-09-01

    During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)

  18. Development of dose rate estimation system for FBR maintenance

    International Nuclear Information System (INIS)

    Iizawa, Katsuyuki; Takeuchi, Jun; Yoshikawa, Satoru; Urushihara, Hiroshi

    2001-01-01

    During maintenance activities on the primary sodium cooling system of an FBR, personnel radiation exposure arises mainly from the presence of radioactive corrosion products (CP). A CP behavior analysis code, PSYCHE, and a radiation shielding calculation code, QAD-CG, have been developed and applied to investigate the possible reduction of radiation exposure of workers. In order to make these evaluation methods more accessible to plant engineers, the user interface of the codes has been improved and an integrated system, including visualization of the calculated gamma-ray radiation dose-rate map, has been developed. The system has been verified by evaluating the distribution of the radiation dose-rate within the Monju primary heat transport system cells from the estimated saturated CP deposition and distribution which would be present following about 20 cycles of full power operation. (author)
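Shielding codes of the QAD family are built on point-kernel integration; the single-kernel version is a textbook formula, sketched below with illustrative values (this is a generic calculation, not the PSYCHE/QAD-CG implementation):

```python
import math

# Point-kernel gamma dose rate behind a shield:
#   D = Gamma * A / r^2 * B * exp(-mu * t)
gamma_const = 3.3e-4   # assumed dose-rate constant, (mSv/h)*m^2/GBq
activity = 500.0       # deposited CP source activity, GBq (illustrative)
r = 2.0                # distance from source to worker, m
mu, t = 46.0, 0.10     # attenuation coefficient (1/m) and shield thickness (m)
buildup = 2.5          # assumed buildup factor for scattered photons

dose_rate = gamma_const * activity / r**2 * buildup * math.exp(-mu * t)
print(dose_rate)       # mSv/h at the work location
```

A full code sums this kernel over many source points representing the deposited CP distribution, which is what produces the dose-rate maps mentioned in the abstract.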

  19. Development of fragility functions to estimate homelessness after an earthquake

    Science.gov (United States)

    Brink, Susan A.; Daniell, James; Khazai, Bijan; Wenzel, Friedemann

    2014-05-01

    Immediately after an earthquake, many stakeholders need to make decisions about their response. These decisions often need to be made in a data-poor environment, as accurate information on the impact can take months or even years to be collected and publicized. Social fragility functions have been developed and applied to provide an estimate of the impact in terms of building damage, deaths and injuries in near real time. These rough estimates can help governments and response agencies determine what aid may be required, which can improve their emergency response and facilitate planning for longer term response. Due to building damage, lifeline outages, fear of aftershocks, or other causes, people may become displaced or homeless after an earthquake. Especially in cold and dangerous locations, the rapid provision of safe emergency shelter can be a lifesaving necessity. However, immediately after an event there is little information available about the number of homeless, their locations and whether they require public shelter to aid the response agencies in decision making. In this research, we analyze homelessness after historic earthquakes using the CATDAT Damaging Earthquakes Database. CATDAT includes information on the hazard as well as the physical and social impact of over 7200 damaging earthquakes from 1900-2013 (Daniell et al. 2011). We explore the relationship of both earthquake characteristics and area characteristics with homelessness after the earthquake. We consider modelled variables such as population density, HDI, year and measures of ground-motion intensity developed in Daniell (2014) over the period 1900-2013, as well as temperature. Starting from the methodology used for the PAGER fatality fragility curves developed by Jaiswal and Wald (2010), but applying regression through time with the socioeconomic parameters developed in Daniell et al. (2012) for "socioeconomic fragility functions", we develop a set of fragility curves that can be
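A common functional form for fragility curves of this kind is a lognormal CDF of the intensity measure; the sketch below fits one to synthetic points (invented for the example, not CATDAT data or the authors' regression):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Fragility curve: fraction of population made homeless, modeled as a
# lognormal CDF of ground-motion intensity.
def fragility(im, median, beta):
    return norm.cdf(np.log(im / median) / beta)

im = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.2])          # intensity (g)
frac = np.array([0.01, 0.05, 0.12, 0.35, 0.60, 0.80])  # homeless fraction

(median, beta), _ = curve_fit(fragility, im, frac, p0=[0.5, 0.5])
print(median, beta)   # fitted median intensity and log-standard deviation
```

In practice the median and dispersion would themselves be regressed on covariates such as HDI, population density and temperature, which is the role the socioeconomic parameters play in the abstract.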

  20. Development of computer program for estimating decommissioning cost - 59037

    International Nuclear Information System (INIS)

    Kim, Hak-Soo; Park, Jong-Kil

    2012-01-01

    Programs for estimating decommissioning cost have been developed for many different purposes and applications. The estimation of decommissioning cost requires a large amount of data, such as unit cost factors, plant areas and their inventories, waste treatment, etc. This makes it difficult to use manual calculation or typical spreadsheet software such as Microsoft Excel. Cost estimation for the eventual decommissioning of nuclear power plants is a prerequisite for safe, timely and cost-effective decommissioning. To estimate the decommissioning cost more accurately and systematically, KHNP (Korea Hydro and Nuclear Power Co. Ltd) developed a decommissioning cost estimating computer program called 'DeCAT-Pro', the Decommissioning Cost Assessment Tool - Professional (hereinafter 'DeCAT'). This program allows users to easily assess the decommissioning cost under various decommissioning options. It also provides detailed reporting of decommissioning funding requirements, as well as detailed project schedules, cash flow, staffing plans and levels, and waste volumes by waste classification and type. KHNP is planning to implement functions for estimating the plant inventory using 3-D technology and for classifying the conditions of radwaste disposal and transportation automatically. (authors)
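The unit-cost-factor data mentioned above feed a simple core calculation: multiply each inventory quantity by its unit cost factor and apply allowances. A minimal sketch of that kind of activity-level estimate (factors and inventory figures are illustrative, not DeCAT's data):

```python
# Unit-cost-factor (UCF) estimate: cost = quantity * UCF per activity,
# plus a contingency allowance on the base cost.
activities = [
    # (activity, inventory quantity, unit, UCF in $/unit)
    ("cut stainless piping", 1200.0, "m", 85.0),
    ("decontaminate concrete", 3500.0, "m2", 12.0),
    ("package waste", 640.0, "drum", 150.0),
]
contingency = 0.25   # assumed fractional allowance on the base cost

base = sum(qty * ucf for _, qty, _, ucf in activities)
total = base * (1.0 + contingency)
print(base, total)
```

A dedicated tool automates exactly this arithmetic over thousands of inventory items, which is why spreadsheets become unwieldy at plant scale.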

  1. Taking the Evolutionary Road to Developing an In-House Cost Estimate

    Science.gov (United States)

    Jacintho, David; Esker, Lind; Herman, Frank; Lavaque, Rodolfo; Regardie, Myma

    2011-01-01

    This slide presentation reviews the process and some of the problems and challenges of developing an In-House Cost Estimate (IHCE). Using the Space Network Ground Segment Sustainment (SGSS) project as an example, the presentation reviews the phases of developing a cost estimate within the project to estimate government and contractor project costs in support of a budget request.

  2. A REVIEW OF ESTIMATION OF SOFTWARE PRODUCTS DEVELOPMENT COSTS

    Directory of Open Access Journals (Sweden)

    Edin Osmanbegović

    2017-01-01

    In modern business and business-process management, the standardization of procedures allows the creation of added value, increasing an organization's competitiveness and business success. Estimating the budget for software development is crucial to the success of an IT project, because the inability to make a realistic assessment leads to inadequate project plans, customer dissatisfaction, poor software product quality, and reduced profits. To minimize such situations, accurate and reliable software cost estimation should be carried out at all stages of the project life cycle. Although hundreds of research articles focusing on the application of different software cost estimation methods have been published so far, there is no comprehensive review of the current situation or of research trends in software cost estimation. This paper aims to create a framework for estimating the cost of software product development by providing an overview of the most influential researchers, the most influential articles published in the WoS database, the keywords most used to search for articles, and a review of the estimation techniques used in software cost estimation.

  3. Western Saudi adolescent age estimation utilising third molar development.

    Science.gov (United States)

    Alshihri, Amin M; Kruger, Estie; Tennant, Marc

    2014-07-01

    The aim of this study was to establish reference data on third molar morphology/development for age estimation in Western Saudi adolescents between 14 and 23 years of age. The orthopantomograms of 130 individuals (males and females) were examined, and the stage of third molar development was evaluated. Mean ages, standard deviations, and percentile distributions are presented for each stage of development. The mean estimated age for all participants (n = 130) was 219.7 months, and this differed significantly (P < 0.05) from the mean chronological age (226.5 months). Deviations of predicted age from real age showed that 28.5% of all participants had their age estimated within 1 year (±12 months) of their chronological age. Most (43%) had their age underestimated by more than 12 months, and the remaining 28.5% had their age overestimated by more than 12 months. Left-right symmetry of third molar development was higher in the maxilla (92%) than in the mandible (82%). For all molars reaching stage "H", most individuals (males and females) were over 18 years of age. Males reach the developmental stages earlier than females. Third molar development can be reliably used to generate a mean age and an estimated age range for an individual of unknown chronological age. Further studies with larger populations are needed for better statistical results.
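Reference data of this kind are typically applied as a stage-to-age lookup with a prediction range; a minimal sketch (Demirjian-style stages, but the means and SDs below are placeholders, not the study's Western Saudi reference data):

```python
# Stage-to-age lookup: point estimate plus an approximate 95% range.
reference = {  # stage: (mean age in months, standard deviation in months)
    "E": (180, 14),
    "F": (196, 15),
    "G": (212, 16),
    "H": (232, 18),
}

def estimate_age(stage):
    # Mean +/- 2 SD gives a rough 95% prediction interval.
    mean, sd = reference[stage]
    return mean, (mean - 2 * sd, mean + 2 * sd)

mean, (low, high) = estimate_age("G")
print(mean, low, high)   # 212 180 244
```

Sex-specific tables would refine this, since males reach each stage earlier than females in the study.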

  4. Coastal erosion's influencing factors include development, dams, wells, and climate change

    International Nuclear Information System (INIS)

    Aubrey, D.G.

    1993-01-01

    The demographic flight to the coast, begun in early civilization, continues unabated worldwide according to latest studies. The percentage of population living on the coast is expected to remain relatively constant over the next few decades, but the total numbers will increase as the population increases. Recent coastal battering by hurricanes and extratropical storms poses questions about coastal habitability and the real economics of coastal development. Repair costs are borne by private individuals as well as the public in various direct and indirect ways. As these costs escalate, it is fitting to ask what the future portends for storm and coastal-flood damage. It is known that development pressures will continue to increase along the coast, but what will happen concurrently to natural-hazard threats to this infrastructure? Though much emphasis has been placed on sea-level rise, the broader issue is climate change in general. Here, the author considers climate change in both its natural and anthropogenic perspectives. Without becoming mired in the debate about the greenhouse effect and human influence on climatic shifts, some of the broad classes of natural hazards that might accompany climate change are examined. There are several categories of possible global-change effects on coastal erosion. In the early 1980's, an Environmental Protection Agency (EPA) report postulated increases in global sea level up to 4 meters during the next 100 years. Though balanced somewhat by other, lower estimates of sea-level rise, this higher extreme grabbed public attention. During the next decade, scientists attempted to concur on a more reasonable estimate of global sea-level rise due to climate change. Recent credible estimates suggest that approximately 10 to 20 percent of EPA's earlier maximum estimate is most reasonable

  5. Estimation of un-used land potential for biofuels development in China

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Yishui [Chinese Academy of Agricultural Engineering, Beijing 100026 (China); Maelardalen University, Vaesteraas SE-721 23 (Sweden); Zhao, Lixin; Meng, Haibo; Sun, Liying [Chinese Academy of Agricultural Engineering, Beijing 100026 (China); Yan, Jinyue [Maelardalen University, Vaesteraas SE-721 23 (Sweden); Royal Institute of Technology, SE-100 44 Stockholm (Sweden)

    2009-11-15

    This paper presents the current status of biofuel development in China and estimates the potential of un-used land for biofuel development. The potential of crops including cassava, sweet potato, sweet sorghum, sugarcane, sugar beet and Jerusalem artichoke was assessed and discussed for different regions, considering the geographical conditions and features of agricultural production. If reserve land resources are exploited, substitute planting is implemented and unit-area yields are improved, the potential production of ethanol fuel will be 22 million tons in 2020. The study also recommends the use of winter idle land for rapeseed plantation for biofuel production. The potential production of biodiesel from rapeseed and cottonseed can reach 3.59 million tons. (author)

  6. High-performance control of a three-phase voltage-source converter including feedforward compensation of the estimated load current

    International Nuclear Information System (INIS)

    Leon, Andres E.; Solsona, Jorge A.; Busada, Claudio; Chiacchiarini, Hector; Valla, Maria Ines

    2009-01-01

    In this paper a new control strategy for voltage-source converters (VSCs) is introduced. The proposed strategy consists of a nonlinear feedback controller based on feedback linearization, plus feedforward compensation of the estimated load current. In our proposal, an energy function and the direct-axis current are considered as outputs in order to avoid the internal dynamics. In this way, full linearization is obtained via nonlinear transformation and feedback. An estimate of the load current is fed forward to improve the performance of the whole system and to reduce the required capacitor size; this estimation allows a more rugged and cheaper implementation. The estimate is calculated using a nonlinear reduced-order observer. The proposal is validated through different tests, which include performance in the presence of switching-frequency effects, measurement filter delays, parameter uncertainties and disturbances in the input voltage.
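The role of the reduced-order observer can be illustrated with a deliberately simplified, generic design (not the paper's nonlinear observer): estimate an unknown constant load current d from the measured capacitor voltage v in the scalar model C·dv/dt = u − d.

```python
# Reduced-order disturbance observer for C * dv/dt = u - d:
#   estimate d_hat = z - l*v, with observer state z driven so that
#   the estimation error decays as exp(-(l/C) * t).
C, l, dt, T = 1e-3, 0.05, 1e-4, 0.2   # capacitance, gain, step, horizon
u, d = 2.5, 2.0                        # input current, true load current
v, zst = 0.0, 0.0                      # plant state and observer state
for _ in range(int(T / dt)):
    d_hat = zst - l * v                # load-current estimate
    zst += dt * (l / C) * (u - d_hat)  # observer dynamics
    v += dt * (u - d) / C              # plant dynamics (Euler step)
print(round(zst - l * v, 3))           # ≈ 2.0 after convergence
```

Feeding d_hat forward lets the controller cancel the load disturbance before it sags the capacitor voltage, which is why the observer permits a smaller capacitor.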

  7. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1994-01-01

    was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.

  8. Handbook for cost estimating. A method for developing estimates of costs for generic actions for nuclear power plants

    International Nuclear Information System (INIS)

    Ball, J.R.; Cohen, S.; Ziegler, E.Z.

    1984-10-01

    This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.
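
    The Handbook's final aggregation step, a constant-dollar lifetime cost discounted to present value and summed over all affected plants, reduces to simple discounting arithmetic. A minimal sketch with invented figures (the discount rate, fleet size, and costs below are not from the Handbook):

```python
# Illustrative sketch, not the Handbook's code-of-accounts: discount a stream
# of per-plant implementation and recurring costs to present value, then
# aggregate over the affected fleet plus any one-time generic cost.

def present_value(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 = today)."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Assumed figures: $2M up-front backfit plus $50k/yr maintenance for 30 years.
one_plant = [2.0e6] + [5.0e4] * 30
pv_per_plant = present_value(one_plant, rate=0.05)

n_affected_plants = 80          # assumed fleet size
generic_cost = 1.0e6            # assumed one-time industry-wide cost
total = n_affected_plants * pv_per_plant + generic_cost
print(f"{total / 1e6:.1f} M$")  # -> 222.5 M$ for these assumed inputs
```

    The per-plant annuity term is what steps (5) and (6) of such a method aggregate; changing the discount rate or plant lifetime moves the total substantially, which is why the rate is a policy input rather than a technical one.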

  9. Global warming potential estimates for the C1-C3 hydrochlorofluorocarbons (HCFCs) included in the Kigali Amendment to the Montreal Protocol

    Science.gov (United States)

    Papanastasiou, Dimitrios K.; Beltrone, Allison; Marshall, Paul; Burkholder, James B.

    2018-05-01

    Hydrochlorofluorocarbons (HCFCs) are ozone-depleting substances and potent greenhouse gases that are controlled under the Montreal Protocol. However, the majority of the 274 HCFCs included in Annex C of the protocol do not have reported global warming potentials (GWPs), which are used to guide the phaseout of HCFCs and the future phase-down of hydrofluorocarbons (HFCs). In this study, GWPs for all C1-C3 HCFCs included in Annex C are reported based on estimated atmospheric lifetimes and theoretical methods used to calculate infrared absorption spectra. Atmospheric lifetimes were estimated from a structure-activity relationship (SAR) for OH radical reactivity and estimated O(1D) reactivity and UV photolysis loss processes. The C1-C3 HCFCs display a wide range of lifetimes (0.3 to 62 years) and GWPs (5 to 5330, 100-year time horizon) depending on the molecular structure and H-atom content of the individual HCFC. The results from this study provide estimated policy-relevant GWP metrics for the HCFCs included in the Montreal Protocol in the absence of experimentally derived metrics.
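
    The lifetime-plus-radiative-efficiency recipe described here can be sketched as back-of-envelope arithmetic. The constants below are approximate IPCC AR5 values, the formula is the standard fixed-horizon AGWP ratio, and HCFC-22 is used as a worked example; none of this reproduces the authors' calculations.

```python
import math

# Sketch: GWP = AGWP_gas(H) / AGWP_CO2(H), with AGWP from an exponential
# decay of a pulse emission. Constants are approximate AR5 values.
M_AIR = 28.97            # g/mol, mean molar mass of air
M_ATM = 5.135e18         # kg, mass of the atmosphere
AGWP_CO2_100 = 9.17e-14  # W m-2 yr per kg CO2, 100-yr horizon (approx.)

def gwp(lifetime_yr, re_per_ppb, molar_mass, horizon_yr=100.0):
    """GWP from lifetime [yr] and radiative efficiency [W m-2 ppb-1]."""
    # convert radiative efficiency from per-ppb to per-kg of gas in the air
    re_per_kg = re_per_ppb * (M_AIR / molar_mass) * 1e9 / M_ATM
    agwp = re_per_kg * lifetime_yr * (1.0 - math.exp(-horizon_yr / lifetime_yr))
    return agwp / AGWP_CO2_100

# HCFC-22: lifetime ~11.9 yr, RE ~0.214 W m-2 ppb-1, M = 86.47 g/mol
print(round(gwp(11.9, 0.214, 86.47)))   # ~1800, vs. ~1760 in AR5 tables
```

    The exponential term is why short-lived HCFCs have small 100-year GWPs even when their radiative efficiencies are large: most of the pulse has decayed long before the horizon.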

  10. A statistical model for estimation of fish density including correlation in size, space, time and between species from research survey data.

    Directory of Open Access Journals (Sweden)

    J Rasmus Nielsen

    Full Text Available Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and that the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural-scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlation was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year.
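
    The separable correlation structure described here can be sketched with assumed exponential terms and made-up decay parameters; the fitted LGCP correlation functions in the paper are more elaborate, so this is only a shape illustration.

```python
import math

# Sketch of a separable space * time * size correlation: the correlation
# between two hauls factorizes over distance in space, time lag, and
# difference in (transformed) fish length. Decay ranges are assumed.

def correlation(d_space_km, d_time_yr, d_logsize, phi=(50.0, 0.25, 0.3)):
    """Separable correlation; each factor decays exponentially."""
    ps, pt, pl = phi
    return (math.exp(-d_space_km / ps)
            * math.exp(-d_time_yr / pt)
            * math.exp(-d_logsize / pl))

# Correlation drops as any one of the three separations grows:
same_haul = correlation(0.0, 0.0, 0.0)   # 1.0 by construction
near = correlation(10.0, 0.0, 0.0)       # close in space, same time and size
far = correlation(200.0, 1.0, 0.5)       # far apart on every axis
print(round(same_haul, 3), round(near, 3), round(far, 5))
```

    Separability is what makes such models tractable: the full covariance factorizes into small per-axis matrices instead of one enormous joint matrix, which is also why the paper flags the between-species structure (which is not separable) as the hard part.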

  11. Analysis of advanced European nuclear fuel cycle scenarios including transmutation and economical estimates

    International Nuclear Information System (INIS)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-01-01

    Four European fuel cycle scenarios involving transmutation options have been addressed from the point of view of resource utilization and economics. Scenarios include the current fleet using Light Water Reactor (LWR) technology and an open fuel cycle (as a reference scenario), a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel, and two fuel cycles with Minor Actinide (MA) transmutation in a fraction of the FR fleet or in dedicated Accelerator Driven Systems (ADS). Results reveal that all scenarios are feasible according to nuclear resources demand. Regarding the economic analysis, the estimations show an increase of LCOE - averaged over the whole period - with respect to the reference scenario of 20% for the Pu management scenario and around 35% for both transmutation scenarios.

  12. Analysis of advanced European nuclear fuel cycle scenarios including transmutation and economical estimates

    Energy Technology Data Exchange (ETDEWEB)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-07-01

    Four European fuel cycle scenarios involving transmutation options have been addressed from the point of view of resource utilization and economics. Scenarios include the current fleet using Light Water Reactor (LWR) technology and an open fuel cycle (as a reference scenario), a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel, and two fuel cycles with Minor Actinide (MA) transmutation in a fraction of the FR fleet or in dedicated Accelerator Driven Systems (ADS). Results reveal that all scenarios are feasible according to nuclear resources demand. Regarding the economic analysis, the estimations show an increase of LCOE - averaged over the whole period - with respect to the reference scenario of 20% for the Pu management scenario and around 35% for both transmutation scenarios.

  13. INCLUDING INTANGIBLE ASSETS IN RATES TO ESTIMATE THE RISK OF BANKRUPTCY

    Directory of Open Access Journals (Sweden)

    Eugenia IANCU

    2017-12-01

    Full Text Available The purpose of this paper is to show that an economic entity's intangible assets play an important role in predicting the company's risk of bankruptcy and, at the same time, in its evolution. Based on benchmarking and on the experience and intuition of an available human expert, a credible model can be shaped, and on the basis of this model the future course of a business organization can be projected. Among other issues, we note that a company's intangible assets can and should enter the equation for estimating the risk of bankruptcy, whether or not artificial intelligence (AI) techniques are used to solve the problem; the values leading to bankruptcy and the graphs of the functions differ markedly when the analysis includes the Rhine rate, which takes the intangibles of firms into account. The structure of the paper shows that, whatever the type of model used to predict the risk of bankruptcy, whether classic or based on artificial intelligence (AI) techniques, intangibles play a leading role in the evolution and the value of the company.

  14. Development of regional stump-to-mill logging cost estimators

    Science.gov (United States)

    Chris B. LeDoux; John E. Baumgras

    1989-01-01

    Planning logging operations requires estimating the logging costs for the sale or tract being harvested. Decisions need to be made on equipment selection and its application to terrain. In this paper a methodology is described that has been developed and implemented to solve the problem of accurately estimating logging costs by region. The methodology blends field time...

  15. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  16. A statistical model for estimation of fish density including correlation in size, space, time and between species from research survey data

    DEFF Research Database (Denmark)

    Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter

    2014-01-01

    Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...

  17. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    Science.gov (United States)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
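
    The singularity point made in this abstract is easy to demonstrate numerically. A minimal sketch with synthetic, nearly collinear data; the ridge constant k below is picked by hand, not by any of the selection rules such comparisons study.

```python
import numpy as np

# As the normal equations approach singularity, ordinary least squares
# becomes unstable while a biased (ridge) estimator stays near the truth.

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)     # nearly collinear regressors
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)  # true coefficients (1, 1)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
k = 0.1                                  # ridge constant (chosen by hand)
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

print("OLS  :", np.round(beta_ols, 1))   # typically far from (1, 1)...
print("sum  :", round(beta_ols.sum(), 2))  # ...yet the sum is well estimated
print("ridge:", np.round(beta_ridge, 2))   # both coefficients near 1
```

    This is exactly the abstract's distinction between the full parameter vector and estimable functions of it: the sum of the two coefficients is well determined even when the individual coefficients are not, and the ridge constraint trades a little bias for a large variance reduction.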

  18. Dose estimate for personal music players including earphone sensitivity and characteristic

    DEFF Research Database (Denmark)

    Hammershøi, Dorte; Ordoñez Pizarro, Rodrigo Eduardo; Christensen, Anders Tornvig

    2016-01-01

    Personal music players can expose their listeners to high sound pressure levels over prolonged periods of time. The risk associated with prolonged listening is not readily available to the listener, and efforts are made to standardize dose estimates that may be displayed for the user... earphone measurements published in the past. The work is on-going.

  19. Including pathogen risk in life cycle assessment of wastewater management. 1. Estimating the burden of disease associated with pathogens.

    Science.gov (United States)

    Harder, Robin; Heimersson, Sara; Svanström, Magdalena; Peters, Gregory M

    2014-08-19

    The environmental performance of wastewater and sewage sludge management is commonly assessed using life cycle assessment (LCA), whereas pathogen risk is evaluated with quantitative microbial risk assessment (QMRA). This study explored the application of QMRA methodology with intent to include pathogen risk in LCA and facilitate a comparison with other potential impacts on human health considered in LCA. Pathogen risk was estimated for a model wastewater treatment system (WWTS) located in an industrialized country and consisting of primary, secondary, and tertiary wastewater treatment, anaerobic sludge digestion, and land application of sewage sludge. The estimation was based on eight previous QMRA studies as well as parameter values taken from the literature. A total pathogen risk (expressed as burden of disease) on the order of 0.2-9 disability-adjusted life years (DALY) per year of operation was estimated for the model WWTS serving 28,600 persons and for the pathogens and exposure pathways included in this study. The comparison of pathogen risk with other potential impacts on human health considered in LCA is detailed in part 2 of this article series.
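
    The burden-of-disease arithmetic behind a DALY figure like the 0.2-9 range above can be illustrated in a few lines. The disability weights, durations, and case counts below are invented for illustration and are not the study's values.

```python
# Toy DALY sketch: DALYs = years lived with disability (YLD) plus years of
# life lost (YLL), summed over cases attributed to each exposure pathway.

def dalys(cases, disability_weight, duration_yr, deaths=0.0, yll_per_death=30.0):
    """Burden of disease for one pathogen/pathway combination."""
    yld = cases * disability_weight * duration_yr
    yll = deaths * yll_per_death          # deaths may be an expected value < 1
    return yld + yll

# e.g. mild gastroenteritis via two hypothetical exposure pathways
burden = (dalys(cases=1200, disability_weight=0.067, duration_yr=0.019)
          + dalys(cases=90, disability_weight=0.067, duration_yr=0.019,
                  deaths=0.001))
print(round(burden, 2), "DALY per year of operation")   # -> 1.67 DALY here
```

    Expressing pathogen risk in DALYs rather than infection probabilities is what lets it sit on the same axis as the LCA human-health impacts compared in part 2 of the series.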

  20. Genetic parameter estimates for carcass traits and visual scores including or not genomic information.

    Science.gov (United States)

    Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G

    2016-05-01

    The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix (A and H) on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using the numerator relationship matrix (A) or combining the genomic and the numerator relationship matrices (H). The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using matrix A and 0.29, 0.16, and 0.23, respectively, using matrix H. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when matrix H was used. Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of

  1. Development and Cross-Validation of Equation for Estimating Percent Body Fat of Korean Adults According to Body Mass Index

    Directory of Open Access Journals (Sweden)

    Hoyong Sung

    2017-06-01

    Full Text Available Background: Using BMI as an independent variable is the easiest way to estimate percent body fat. Thus far, few studies have investigated the development and cross-validation of an equation for estimating the percent body fat of Korean adults according to the BMI. The goals of this study were the development and cross-validation of an equation for estimating the percent fat of representative Korean adults using the BMI. Methods: Samples were obtained from the Korea National Health and Nutrition Examination Survey between 2008 and 2011. The samples from 2008-2009 and 2010-2011 were labeled as the validation group (n=10,624) and the cross-validation group (n=8,291), respectively. The percent fat was measured using dual-energy X-ray absorptiometry, and the body mass index, gender, and age were included as independent variables to estimate the measured percent fat. The coefficient of determination (R²), standard error of estimation (SEE), and total error (TE) were calculated to examine the accuracy of the developed equation. Results: The cross-validated R² was 0.731 for Model 1 and 0.735 for Model 2. The SEE was 3.978 for Model 1 and 3.951 for Model 2. The equations developed in this study are more accurate for estimating percent fat of the cross-validation group than those previously published by other researchers. Conclusion: The newly developed equations are comparatively accurate for the estimation of the percent fat of Korean adults.
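
    The develop-then-cross-validate workflow described here can be sketched on synthetic data; the KNHANES data and the study's fitted coefficients are not reproduced, so the numbers below are purely illustrative.

```python
import numpy as np

# Sketch: fit %fat ~ BMI + sex + age by least squares on one split, then
# score R2 and SEE on a held-out split, mirroring the validation /
# cross-validation design. All data are synthetic.

rng = np.random.default_rng(1)
n = 2000
bmi = rng.normal(24, 3, n)
sex = rng.integers(0, 2, n)            # 0 = male, 1 = female (assumed coding)
age = rng.uniform(20, 70, n)
pfat = 1.2 * bmi + 10.0 * sex + 0.1 * age - 5.0 + rng.normal(0, 3.5, n)

X = np.column_stack([np.ones(n), bmi, sex, age])
train, test = slice(0, 1000), slice(1000, n)
beta, *_ = np.linalg.lstsq(X[train], pfat[train], rcond=None)

pred = X[test] @ beta                   # apply the equation to unseen data
resid = pfat[test] - pred
r2 = 1 - resid.var() / pfat[test].var()
see = np.sqrt((resid ** 2).mean())
print(f"R2={r2:.2f}  SEE={see:.2f}")
```

    Scoring on the held-out split is the point of the exercise: R² and SEE computed on the training data alone would flatter the equation, which is why the study reports cross-validated statistics.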

  2. Age estimation in the living

    DEFF Research Database (Denmark)

    Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels

    2015-01-01

    A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross sectional sample of 854 panoramic radiographs, homogenously distributed by sex and age (15.0-24.0 years), were randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation...

  3. Development and prospective validation of a model estimating risk of readmission in cancer patients.

    Science.gov (United States)

    Schmidt, Carl R; Hefner, Jennifer; McAlearney, Ann S; Graham, Lisa; Johnson, Kristen; Moffatt-Bruce, Susan; Huerta, Timothy; Pawlik, Timothy M; White, Susan

    2018-02-26

    Hospital readmissions among cancer patients are common. While several models estimating readmission risk exist, models specific for cancer patients are lacking. A logistic regression model estimating risk of unplanned 30-day readmission was developed using inpatient admission data from a 2-year period (n = 18 782) at a tertiary cancer hospital. Readmission risk estimates derived from the model were then calculated prospectively over a 10-month period (n = 8616 admissions) and compared with actual incidence of readmission. There were 2478 (13.2%) unplanned readmissions. Model factors associated with readmission included: emergency department visit within 30 days, >1 admission within 60 days, non-surgical admission, solid malignancy, gastrointestinal cancer, emergency admission, length of stay >5 days, abnormal sodium, hemoglobin, or white blood cell count. The c-statistic for the model was 0.70. During the 10-month prospective evaluation, estimates of readmission from the model were associated with higher actual readmission incidence from 20.7% for the highest risk category to 9.6% for the lowest. An unplanned readmission risk model developed specifically for cancer patients performs well when validated prospectively. The specificity of the model for cancer patients, EMR incorporation, and prospective validation justify use of the model in future studies designed to reduce and prevent readmissions. © 2018 Wiley Periodicals, Inc.
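
    The modelling recipe above, a logistic regression scored by its c-statistic, can be sketched on synthetic data. The predictors and effect sizes below are invented stand-ins for the paper's factors, and the fitting loop is plain gradient ascent rather than whatever the authors used.

```python
import numpy as np

# Synthetic cohort with two assumed binary risk factors.
rng = np.random.default_rng(2)
n = 4000
prior_ed_visit = rng.integers(0, 2, n)
long_stay = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), prior_ed_visit, long_stay])
logit = -2.0 + 1.0 * prior_ed_visit + 0.8 * long_stay   # assumed true effects
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit logistic regression by gradient ascent on the log-likelihood.
beta = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n

def c_statistic(score, label):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos, neg = score[label == 1], score[label == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

auc = c_statistic(X @ beta, y)
print(round(auc, 2))   # moderate discrimination, cf. the 0.70 in the abstract
```

    A c-statistic near 0.70 means the model ranks patients usefully but imperfectly, which is consistent with the prospective finding that the highest-risk category readmitted at roughly twice the rate of the lowest.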

  4. Estimation of paddy water temperature during crop development

    International Nuclear Information System (INIS)

    Centeno, H.G.S.; Horie, T.

    1996-01-01

    The crop meristem is in direct contact with paddy water during the crop's vegetative stage. Ambient air temperature becomes an important factor in crop development only when internodes elongate sufficiently for the meristem to rise above the water surface. This does not occur until after panicle initiation. Crop growth at the vegetative stage is therefore affected more by water temperature than by the more commonly measured air temperature. During transplanting in the 1992 dry season, the maximum paddy water temperature was 10 deg C higher than the maximum air temperature. For rice crop models, the development of a submodel to estimate water temperature is important to account for the effect of paddy water temperature on plant growth. Paddy water temperature is estimated from mean air temperature, solar radiation, and crop canopy. The parameters of the model were derived using the simplex method on data from the 1993 wet- and dry-season field experiments at IRRI

  5. Estimated Risk of Developing Selected DSM-IV Disorders among 5-Year-Old Children with Prenatal Cocaine Exposure

    Science.gov (United States)

    Morrow, Connie E.; Accornero, Veronica H.; Xue, Lihua; Manjunath, Sudha; Culbertson, Jan L.; Anthony, James C.; Bandstra, Emmalee S.

    2009-01-01

    We estimated childhood risk of developing selected DSM-IV Disorders, including Attention-Deficit Hyperactivity Disorder (ADHD), Oppositional Defiant Disorder (ODD), and Separation Anxiety Disorder (SAD), in children with prenatal cocaine exposure (PCE). Children were enrolled prospectively at birth (n = 476) with prenatal drug exposures documented…

  6. Estimation of economic parameters of U.S. hydropower resources

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Douglas G. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Hunt, Richard T. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Reeves, Kelly S. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL); Carroll, Greg R. [Idaho National Lab. (INL), Idaho Falls, ID (United States). Idaho National Engineering and Environmental Lab. (INEEL)

    2003-06-01

    Tools for estimating the cost of developing and operating and maintaining hydropower resources in the form of regression curves were developed based on historical plant data. Development costs that were addressed included: licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed. Additional tools were developed to estimate the cost of upgrading a turbine or a generator. The development and operation and maintenance cost estimating tools, and the generation estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, “Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database.”
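
    Cost-versus-capacity regression curves of the kind described here are usually power laws fitted on log-transformed data. A minimal sketch with made-up plant costs (not the INEEL dataset):

```python
import math

# Fit cost = a * capacity^b by ordinary least squares on logs.
capacity_mw = [5, 10, 25, 50, 100, 250]
cost_musd = [12, 20, 41, 70, 118, 240]          # hypothetical plant costs, M$

logx = [math.log(c) for c in capacity_mw]
logy = [math.log(c) for c in cost_musd]
n = len(logx)
mx, my = sum(logx) / n, sum(logy) / n
b = (sum((x - mx) * (y - my) for x, y in zip(logx, logy))
     / sum((x - mx) ** 2 for x in logx))
a = math.exp(my - b * mx)

print(f"cost ~ {a:.1f} * MW^{b:.2f}")   # -> cost ~ 3.5 * MW^0.77
```

    An exponent below one is the economy-of-scale signature one expects from such curves: doubling capacity raises total cost by less than a factor of two, so cost per megawatt falls with plant size.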

  7. Loss of life estimation-Review, developments and challenges

    NARCIS (Netherlands)

    Jonkman, S.N.; Maaskant, B.; Kolen, B.; Needham, J. T Jason

    2016-01-01

    This paper presents an overview and review of methods developed for loss of life estimation in flood risk assessment. These methods range from empirical to simulation based approaches that are used to support flood risk analyses and emergency management. Similarities and differences between the

  8. Development of radioactivity estimation system considering radioactive nuclide movement

    International Nuclear Information System (INIS)

    Fukumura, Nobuo; Miyamoto, Yoshiaki

    2010-01-01

    A radioactivity estimation system considering radioactive nuclide movement is developed to integrate the established codes and the code system for decommissioning of sodium-cooled fast reactors (FBR). The former are codes for estimating radioactivity movement in the sodium coolant of a fast reactor, named SAFFIRE, PSYCHE and TTT. The latter is a code system for estimating neutron irradiation activity (COSMARD-RRADO). Special attention is paid to keeping the input data consistent among these codes and to simplifying their interface. A new function is added to the estimation system to estimate the minor FP inventory caused by the fission of impurities contained in the coolant and of the slight fuel material attached to the fuel cladding. To check the evaluation system, it is applied to radioactivity data from preceding FBRs such as BN-350, JOYO and Monju. Agreement between the analysis results and the measurements is satisfactory. The uncertainty of the code system is within several tens of percent for the activation of the primary coolant (Na-22) and a factor of 2-4 for the estimation of the radioactivity inventory in the sodium coolant. (author)

  9. Cost Estimation of Software Development and the Implications for the Program Manager

    Science.gov (United States)

    1992-06-01

    Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive... function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and... Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome

  10. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

    The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.

  11. Development of paint area estimation software for ship compartments and structures

    Directory of Open Access Journals (Sweden)

    Doo-Yeoun Cho

    2016-03-01

    Full Text Available The painting process for large ships is an intense manual operation that typically comprises 9–12% of the total shipbuilding cost. Accordingly, shipbuilders need to estimate the required amount of anti-corrosive coatings and painting resources for inventory and cost control. This study aims to develop a software system which enables shipbuilders to estimate paint areas using existing 3D CAD ship structural models. The geometric information of the ship's structure is extracted from the existing shipbuilding CAD/CAM system and used to create painting zones. After specifying the painting zones, users can generate the paint faces by clipping structural parts inside each zone. Finally, the paint resources may be obtained as the product of the paint areas and the required paint thickness. Implementing the developed software system in real shipbuilders' operations has contributed to improved productivity, faster resource estimation, better accuracy, and fewer coating defects compared with their conventional manual calculation methods for painting resource estimation.

  12. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    Science.gov (United States)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
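
    The core SMA step described above, solving a mixed pixel for its endmember fractions, can be illustrated with a toy linear unmixing. The six-band endmember spectra below are synthetic (not AVIRIS data), and the sum-to-one constraint is imposed crudely by appending a heavily weighted row, rather than by whatever constrained solver a production MESMA pipeline uses.

```python
import numpy as np

# Toy SMA: pixel ~= E @ fractions, rows = bands, cols = (GV, NPV, substrate).
E = np.array([[0.05, 0.30, 0.25],
              [0.08, 0.32, 0.27],
              [0.45, 0.35, 0.30],
              [0.50, 0.38, 0.33],
              [0.30, 0.42, 0.36],
              [0.20, 0.45, 0.40]])
true_frac = np.array([0.6, 0.1, 0.3])
pixel = E @ true_frac + 0.002 * np.random.default_rng(3).normal(size=6)

w = 100.0                                # weight enforcing sum-to-one
A = np.vstack([E, w * np.ones(3)])
b = np.append(pixel, w * 1.0)
frac, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(frac, 2))                 # close to the true (0.6, 0.1, 0.3)
```

    MESMA repeats this solve over many candidate endmember sets per pixel and keeps the best-fitting one, which is where its accuracy gain over single-set SMA, and its extra computational cost, both come from.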

  13. Analysis of advanced european nuclear fuel cycle scenarios including transmutation and economical estimates

    International Nuclear Information System (INIS)

    Merino Rodriguez, I.; Alvarez-Velarde, F.; Martin-Fuertes, F.

    2013-01-01

    In this work the transition from the existing Light Water Reactors (LWR) to advanced reactors is analyzed, including Generation III+ reactors in a European framework. Four European fuel cycle scenarios involving transmutation options have been addressed. The first scenario (i.e., reference) is the current fleet using LWR technology and an open fuel cycle. The second scenario assumes a full replacement of the initial fleet with Fast Reactors (FR) burning U-Pu MOX fuel. The third scenario is a modification of the second one introducing Minor Actinide (MA) transmutation in a fraction of the FR fleet. Finally, in the fourth scenario, the LWR fleet is replaced using FR with MOX fuel as well as Accelerator Driven Systems (ADS) for MA transmutation. All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over a period of 200 years, looking for equilibrium mass flows. The simulations were made using the TR-EVOL code, a tool for fuel cycle studies developed by CIEMAT. The results reveal that all scenarios are feasible according to nuclear resources demand (U and Pu). Concerning the cases without transmutation, the second scenario considerably reduces the Pu inventory in repositories compared to the reference scenario, although the MA inventory increases. The transmutation scenarios show that eliminating the LWR MA legacy requires, on the one hand, that at most 33% of the FR fleet (a peak value of 26 FR units) be dedicated to transmutation (MA in MOX fuel, homogeneous transmutation). On the other hand, ADS plants accounting for at most 5% of electricity generation (35 ADS units) are predicted in the fourth scenario. Regarding the economic analysis, the estimations show an increase of LCOE (levelized cost of electricity) - averaged over the whole period - with respect to the reference scenario of 21% and 29% for the FR and FR-with-transmutation scenarios respectively, and 34% for the fourth scenario. (authors)

  14. Development of flood regressions and climate change scenarios to explore estimates of future peak flows

    Science.gov (United States)

    Burns, Douglas A.; Smith, Martyn J.; Freehafer, Douglas A.

    2015-12-31

    A new Web-based application, titled “Application of Flood Regressions and Climate Change Scenarios To Explore Estimates of Future Peak Flows”, has been developed by the U.S. Geological Survey, in cooperation with the New York State Department of Transportation. It allows a user to apply a set of regression equations to estimate the magnitude of future floods for any stream or river in New York State (exclusive of Long Island) and the Lake Champlain Basin in Vermont. The regression equations that form the basis of the application were developed in previous U.S. Geological Survey (USGS) investigations and are described at the USGS StreamStats Web sites for New York (http://water.usgs.gov/osw/streamstats/new_york.html) and Vermont (http://water.usgs.gov/osw/streamstats/Vermont.html). These regression equations include several fixed landscape metrics that quantify aspects of watershed geomorphology, basin size, and land cover, as well as a climate variable—either annual precipitation or annual runoff.
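
    Regional flood regressions of this kind are usually fitted in log space, which makes the peak-flow estimate a product of powers of the explanatory variables. The sketch below shows only that generic form; the coefficient values, the chosen variables, and the function name are illustrative assumptions, not the actual StreamStats equations.

```python
def peak_flow_estimate(area_km2, annual_precip_mm, a=0.5, b=0.8, c=0.9):
    """Illustrative power-law regional regression for peak flow.

    Regressions of this family are fitted in log space, giving the
    multiplicative form Q = a * A^b * P^c.  The coefficients here are
    placeholders, NOT the published USGS StreamStats values.
    """
    return a * (area_km2 ** b) * (annual_precip_mm ** c)

# A climate-change scenario is explored by perturbing the climate
# variable while holding the fixed landscape metrics constant:
baseline = peak_flow_estimate(250.0, 1100.0)
wetter = peak_flow_estimate(250.0, 1100.0 * 1.10)  # +10 % annual precipitation
print(f"relative change in peak flow: {wetter / baseline - 1:.3f}")
```

    Because the form is multiplicative, the relative change in the estimate depends only on the precipitation ratio raised to its exponent, not on the particular basin.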

  15. Development of a simple estimation tool for LMFBR construction cost

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Kinoshita, Izumi

    1999-01-01

    A simple tool for estimating the construction costs of liquid-metal-cooled fast breeder reactors (LMFBRs), 'Simple Cost', was developed in this study. Simple Cost is based on a new estimation formula that reduces the amount of design data required to estimate construction costs. Consequently, Simple Cost can be used to estimate the construction costs of innovative LMFBR concepts for which detailed design has not been carried out. The results of test calculations show that Simple Cost provides cost estimates equivalent to those obtained with conventional methods within the range of plant power from 325 to 1500 MWe. Sensitivity analyses for typical design parameters were conducted using Simple Cost. The effects of four major parameters - reactor vessel diameter, core outlet temperature, sodium handling area and number of secondary loops - on the construction costs of LMFBRs were evaluated quantitatively. The results show that reduction of the sodium handling area is particularly effective in reducing construction costs. (author)
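
    The abstract does not reproduce the Simple Cost formula itself, but reduced-data construction-cost tools of this kind are often built around a capacity scaling law. The sketch below shows that generic idea only; the exponent and the reference cost are textbook-style placeholders, not Simple Cost parameters.

```python
def scaled_cost(ref_cost, ref_power_mwe, power_mwe, exponent=0.6):
    """Generic capacity scaling law C = C_ref * (P / P_ref)^n.

    The 0.6 exponent is a classic engineering-economics default,
    NOT a value taken from the Simple Cost tool.
    """
    return ref_cost * (power_mwe / ref_power_mwe) ** exponent

# Hypothetical: scale a 1500 MWe reference plant cost down to 325 MWe,
# the bounds of the validated power range mentioned above.
print(f"{scaled_cost(4000.0, 1500.0, 325.0):.0f} M$")
```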

  16. HIV Model Parameter Estimates from Interruption Trial Data including Drug Efficacy and Reservoir Dynamics

    Science.gov (United States)

    Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan

    2012-01-01

    Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
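
    As a toy illustration of the estimation strategy described above, the sketch below fits a single-phase exponential viral decay to synthetic log10 viral-load data with a random-walk Metropolis sampler. The one-compartment model, the synthetic data, and the step sizes are simplifications for illustration; the study's actual ODE model, priors, and patient data are much richer.

```python
import math
import random

random.seed(1)

# Synthetic decay-phase data: v(t) = v0 * exp(-d * t), observed in log10
# with Gaussian measurement noise (v0 = 1e5 copies/mL, d = 0.5 /day).
times = [0, 2, 4, 6, 8, 10]
data = [math.log10(1e5 * math.exp(-0.5 * t)) + random.gauss(0, 0.1)
        for t in times]

def log_likelihood(log10_v0, d, sigma=0.1):
    """Gaussian log-likelihood of the log10 viral-load measurements."""
    ll = 0.0
    for t, y in zip(times, data):
        pred = log10_v0 - d * t / math.log(10)
        ll += -0.5 * ((y - pred) / sigma) ** 2
    return ll

# Random-walk Metropolis over (log10 v0, d) with a flat prior.
theta = [4.0, 0.3]                       # deliberately poor initial guess
ll = log_likelihood(*theta)
samples = []
for _ in range(20000):
    prop = [theta[0] + random.gauss(0, 0.05),
            theta[1] + random.gauss(0, 0.02)]
    ll_prop = log_likelihood(*prop)
    if math.log(random.random()) < ll_prop - ll:   # accept/reject step
        theta, ll = prop, ll_prop
    samples.append(tuple(theta))

burn = samples[5000:]                    # discard burn-in
d_hat = sum(s[1] for s in burn) / len(burn)
print(f"posterior mean decay rate: {d_hat:.2f} per day")
```

    The posterior mean should recover the decay rate used to generate the data; with two experimental conditions, as in the study, additional parameters such as drug efficacy become identifiable in the same way.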

  17. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  18. Development and application of best-estimate LWR safety analysis codes

    International Nuclear Information System (INIS)

    Reocreux, M.

    1997-01-01

    This paper reviews the status and future orientations of the development and application of best-estimate LWR safety analysis codes. These codes have been highly successful, almost completely fulfilling the objectives assigned to them in the 1970s. Applications of best-estimate codes are numerous and cover a large variety of safety questions. However, these applications have raised a number of problems. The first concerns the need for better control of the quality of the results, which implies requirements on code assessment and on uncertainty evaluation. The second concerns needs for code development, specifically regarding physical models, numerics, coupling with other codes, and programming. The analysis of the orientations for code development and application in the coming years shows that some developments should be made without delay in order to address present questions, whereas others are longer term and should first be tested, for example in pilot programmes, before eventually being incorporated into main code development. Each of these development programmes is analyzed in the paper by detailing its main content and possible interest. (author)

  19. Analysis of advanced European nuclear fuel cycle scenarios including transmutation and economic estimates

    International Nuclear Information System (INIS)

    Rodríguez, Iván Merino; Álvarez-Velarde, Francisco; Martín-Fuertes, Francisco

    2014-01-01

    Highlights: • Four fuel cycle scenarios have been analyzed in resource and economic terms. • Scenarios involve once-through, Pu burning, and MA transmutation strategies. • No restrictions were found in terms of uranium and plutonium availability. • The best-case costs and the impact of their uncertainties on the LCOE were analyzed. - Abstract: Four European fuel cycle scenarios involving transmutation options (in coherence with the PATEROS and CP-ESFR EU projects) have been addressed from the point of view of resource utilization and economic estimates. The scenarios include: (i) the current fleet, using Light Water Reactor (LWR) technology and an open fuel cycle; (ii) full replacement of the initial fleet with Fast Reactors (FR) burning U–Pu MOX fuel; (iii) a closed fuel cycle with Minor Actinide (MA) transmutation in a fraction of the FR fleet; and (iv) a closed fuel cycle with MA transmutation in dedicated Accelerator Driven Systems (ADS). All scenarios consider an intermediate period of GEN-III+ LWR deployment and extend over 200 years, aiming at long-term equilibrium mass flows. The simulations were made using the TR-EVOL code, capable of assessing the management of the nuclear mass streams in each scenario as well as the economics, for estimation of the levelized cost of electricity (LCOE) and other costs. Results reveal that all scenarios are feasible with respect to nuclear resource demand (natural and depleted U, and Pu). Additionally, we found, as expected, that the FR scenario considerably reduces the Pu inventory in repositories compared to the reference scenario. The elimination of the LWR MA legacy requires at most a 55% fraction (i.e., a peak value of 44 FR units) of the FR fleet dedicated to transmutation (MA in MOX fuel, homogeneous transmutation) or an average of 28 ADS plants (i.e., a peak value of 51 ADS units). Regarding the economic analysis, the main usefulness of the provided economic results is for relative comparison of
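
    The levelized cost of electricity used in such comparisons is, in its standard definition, the ratio of discounted lifetime costs to discounted lifetime generation. A minimal sketch of that definition, with purely hypothetical figures rather than any scenario values from the study:

```python
def lcoe(costs, energy, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs divided
    by discounted lifetime electricity generation (same yearly grid)."""
    num = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    den = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy))
    return num / den

# Hypothetical 3-year plant: overnight cost up front, then O&M (M$),
# with generation (GWh) starting in year 1.
costs = [1000.0, 50.0, 50.0]
energy = [0.0, 800.0, 800.0]
print(f"LCOE = {lcoe(costs, energy, 0.05):.3f} M$/GWh")
```

    Relative scenario comparisons are then ratios of such figures computed with the same assumptions for each fuel cycle.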

  20. Development of a biometric method to estimate age on hand radiographs.

    Science.gov (United States)

    Remy, Floriane; Hossu, Gabriela; Cendre, Romain; Micard, Emilien; Mainard-Simard, Laurence; Felblinger, Jacques; Martrille, Laurent; Lalys, Loïc

    2017-02-01

    Age estimation of living individuals younger than 13, 18 or 21 years, some of the relevant legal ages in most European countries, is currently problematic in the forensic context. Numerous methods are available to legal authorities, although their efficiency is debatable. For those reasons, we aimed to propose a new method based on the biometric analysis of hand bones. 451 hand radiographs of French individuals under the age of 21 were retrospectively analyzed. This total sample was divided into three subgroups bounded by the relevant legal ages previously mentioned: 0-13, 13-18 and 18-21 years. On these radiographs, we numerically applied the osteometric-board method used in anthropology by enclosing each metacarpal and proximal phalanx of the five hand rays in the smallest possible rectangle, giving access to their length and width through a measurement protocol developed specifically for this purpose with the ORS Visual® software. A statistical analysis was then performed on these biometric data: a Linear Discriminant Analysis (LDA) evaluated the probability of an individual belonging to one of the age groups (0-13, 13-18 or 18-21), and several multivariate regression models were tested to establish age estimation formulas for each of these age groups. The mean correlation coefficient between chronological age and both lengths and widths of hand bones is 0.90 for the total sample. Repeatability and reproducibility were assessed. The LDA predicted membership of the 0-13 age group most easily. Age can be estimated with a mean standard error that never exceeds 1 year for the 95% confidence interval. Finally, compared to the literature, we conclude that estimating age from the biometric information of metacarpals and proximal phalanges is promising. Copyright © 2016. Published by Elsevier B.V.

  1. WILDFIRE IGNITION RESISTANCE ESTIMATOR WIZARD SOFTWARE DEVELOPMENT REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, M.; Robinson, C.; Gupta, N.; Werth, D.

    2012-10-10

    This report describes the development of a software tool, entitled “WildFire Ignition Resistance Estimator Wizard” (WildFIRE Wizard, Version 2.10). This software was developed within the Wildfire Ignition Resistant Home Design (WIRHD) program, sponsored by the U. S. Department of Homeland Security, Science and Technology Directorate, Infrastructure Protection & Disaster Management Division. WildFIRE Wizard is a tool that enables homeowners to take preventive actions that will reduce their home’s vulnerability to wildfire ignition sources (i.e., embers, radiant heat, and direct flame impingement) well in advance of a wildfire event. This report describes the development of the software, its operation, its technical basis and calculations, and steps taken to verify its performance.

  2. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  3. Oropharyngeal Dysphagia in Dermatomyositis: Associations with Clinical and Laboratory Features Including Autoantibodies

    OpenAIRE

    Mugii, Naoki; Hasegawa, Minoru; Matsushita, Takashi; Hamaguchi, Yasuhito; Oohata, Sacihe; Okita, Hirokazu; Yahata, Tetsutarou; Someya, Fujiko; Inoue, Katsumi; Murono, Shigeyuki; Fujimoto, Manabu; Takehara, Kazuhiko

    2016-01-01

    Objective Dysphagia develops infrequently in patients with dermatomyositis. Our objective was to determine the clinical and laboratory features that can predict the development of dysphagia in dermatomyositis. Methods This study included 92 Japanese patients with adult-onset dermatomyositis. The associations between dysphagia and clinical and laboratory features, including disease-specific autoantibodies determined by immunoprecipitation assays, were analyzed. Results Videofluoroscopy sw...

  4. A Reformed CDM - including new mechanisms for sustainable development

    Energy Technology Data Exchange (ETDEWEB)

    Holm Olsen, K; Fenhann, J

    2009-07-01

    The annual CD4CDM Perspectives Series features a topic of pivotal importance to the global carbon market. The series seeks to communicate the diverse insights and visions of leading actors in the carbon market to better inform the decisions of professionals and policymakers in developing countries. The second theme of the series focuses on how the CDM can be reformed in a post-2012 climate regime, including new mechanism for sustainable development. Seventeen contributors from the private sector, Designated National Authorities, the Executive Board, research, and development agencies present their perspective on meeting challenges such as the unequal regional distribution of CDM projects, concerns about environmental integrity and technology transfer, complex governance procedures, and questions about the CDM's contribution to sustainable development. The new ideas and solutions to these challenges proposed by the authors in this edition of Perspectives have been solicited to help professionals and policy makers make the best decisions in the lead-up to COP 15 in Copenhagen and beyond. (au)

  6. Development of cancer risk estimates from epidemiologic studies

    International Nuclear Information System (INIS)

    Webster, E.W.

    1983-01-01

    Radiation risk estimates may be made for an increase in mortality from, or an increase in incidence of, particular types of disease. For both endpoints, two numerical systems of risk expression are used: the absolute risk system (usually excess deaths or cases per million persons per year per rad) and the relative risk system (usually excess deaths or cases per year per rad expressed as a percentage of those normally expected). Risks may be calculated for specific age groups or for a general population. An alternative in both risk systems is the estimation of cumulative or lifetime risk rather than annual risk (e.g., excess deaths per million per rad over a specified long period including the remainder of the lifespan). The derivation of both absolute and relative risks is illustrated by examples. The effects on risk estimates of latent period, follow-up time, age at exposure and age standardization within dose groups are illustrated. The dependence of the projected cumulative (lifetime) risk on the adoption of a constant absolute risk or a constant relative risk is noted. The use of life-table data in the adjustment of cumulative risk for normal mortality following single or annual doses is briefly discussed
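
    The two systems of risk expression can be contrasted with a small worked example. Every number below is hypothetical, chosen only to show how each system converts a dose into projected excess deaths:

```python
def absolute_excess(deaths_per_1e6_py_rad, population_millions, dose_rad, years):
    """Absolute risk system: a fixed number of excess deaths per
    million persons per year per rad, independent of baseline rates."""
    return deaths_per_1e6_py_rad * population_millions * dose_rad * years

def relative_excess(pct_per_rad, baseline_deaths_per_year, dose_rad, years):
    """Relative risk system: a percentage increase, per rad, of the
    deaths normally expected each year."""
    return pct_per_rad * baseline_deaths_per_year * dose_rad * years / 100.0

# Hypothetical cohort: one million people, 10 rad, 30 years of follow-up,
# absolute risk 2 deaths / 1e6 / yr / rad, relative risk 0.5 % per rad
# on a baseline of 2000 cancer deaths per year in the cohort.
print(absolute_excess(2.0, 1.0, 10.0, 30.0))     # -> 600.0 excess deaths
print(relative_excess(0.5, 2000.0, 10.0, 30.0))  # -> 3000.0 excess deaths
```

    The gap between the two projections is exactly the baseline dependence noted above: the relative system scales with the expected deaths, while the absolute system does not.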

  7. A Development of Domestic Food Chain Model Data for Chronic Effect Estimation of Off-site Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The food chain model (FCM) includes complex transport phenomena of radioactive materials in the biokinetic system of contaminated environments. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment) and depends on the FCM-based estimate of contaminated food ingestion. The dietary habits and agricultural production of a local region differ from the generic features assumed at the worldwide scale, which is the reason to develop domestic FCM data for level 3 PSA. However, generating such specific FCM data is a complex process subject to a large degree of uncertainty due to the inherent biokinetic models. As a preliminary study, the present work focuses on developing an infrastructure for generating specific FCM data. During this process, the features required of the FCM data were investigated, and, based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. From this study, the insight was obtained that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the specific chronic health effects of a domestic plant site should be considered in off-site consequence analysis.

  8. A Development of Domestic Food Chain Model Data for Chronic Effect Estimation of Off-site Consequence Analysis

    International Nuclear Information System (INIS)

    Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol

    2015-01-01

    The food chain model (FCM) includes complex transport phenomena of radioactive materials in the biokinetic system of contaminated environments. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment) and depends on the FCM-based estimate of contaminated food ingestion. The dietary habits and agricultural production of a local region differ from the generic features assumed at the worldwide scale, which is the reason to develop domestic FCM data for level 3 PSA. However, generating such specific FCM data is a complex process subject to a large degree of uncertainty due to the inherent biokinetic models. As a preliminary study, the present work focuses on developing an infrastructure for generating specific FCM data. During this process, the features required of the FCM data were investigated, and, based on the insights obtained, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. From this study, the insight was obtained that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the specific chronic health effects of a domestic plant site should be considered in off-site consequence analysis

  9. Development and Statistical Validation of Spectrophotometric Methods for the Estimation of Nabumetone in Tablet Dosage Form

    Directory of Open Access Journals (Sweden)

    A. R. Rote

    2010-01-01

    Three new simple, economical spectrophotometric methods were developed and validated for the estimation of nabumetone in bulk and tablet dosage form. The first method determines nabumetone at its absorption maximum of 330 nm, the second uses the area under the curve in the wavelength range of 326-334 nm, and the third uses the first-order derivative spectrum with a scaling factor of 4. Beer's law was obeyed in the concentration range of 10-30 μg/mL for all three methods. The correlation coefficients were found to be 0.9997, 0.9998 and 0.9998 for the absorption maximum, area under the curve and first-order derivative methods, respectively. Results of the analysis were validated statistically and by recovery studies. The mean percent recoveries were found satisfactory for all three methods. The developed methods were also compared statistically using one-way ANOVA. The proposed methods have been successfully applied to the estimation of nabumetone in bulk and pharmaceutical tablet dosage form.
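
    The calibration underlying all three methods is an ordinary least-squares fit of absorbance against concentration; the correlation coefficients quoted above are the figure of merit of that fit. A self-contained sketch with hypothetical standards (not the paper's data):

```python
import math

def linfit(x, y):
    """Ordinary least-squares slope, intercept and correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / math.sqrt(sxx * syy)

# Hypothetical calibration standards in the 10-30 ug/mL Beer's-law range.
conc = [10.0, 15.0, 20.0, 25.0, 30.0]          # ug/mL
absb = [0.201, 0.298, 0.405, 0.499, 0.602]     # absorbance at 330 nm
slope, intercept, r = linfit(conc, absb)

# Invert the calibration line to estimate an unknown sample.
unknown_abs = 0.350
est_conc = (unknown_abs - intercept) / slope
print(f"r = {r:.4f}, estimated concentration = {est_conc:.1f} ug/mL")
```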

  10. Development of hybrid lifecycle cost estimating tool (HLCET) for manufacturing influenced design tradeoff

    Science.gov (United States)

    Sirirojvisuth, Apinut

    concept, the additional manufacturing knowledge can be used to identify a more accurate lifecycle cost and facilitate higher-fidelity tradeoffs during conceptual and preliminary design. The Advanced Composite Cost Estimating Model (ACCEM) is employed as a process-based cost component to replace the original TCM result for the composite part production cost. The reason for the replacement is that TCM estimates production costs from part weights, reflecting subtractive manufacturing of metallic origin such as casting, forging, and machining; a complexity factor can sometimes be adjusted to reflect different metals and machine settings. The TCM assumption, however, gives erroneous results when applied to additive processes like those of composite manufacturing. Another innovative aspect of this research is the introduction of a work measurement technique called the Maynard Operation Sequence Technique (MOST), used, similarly to the Activity-Based Costing (ABC) approach, to estimate the manufacturing time of a part by breaking down the operations that occur during its production. ABC allows a realistic determination of the cost incurred in each activity, as opposed to the traditional method of estimating time by analogy or using response surface equations from historical process data. The MOST concept provides a tailored study of an individual process, typically required for a new, innovative design. Nevertheless, the MOST idea has some challenges, one of which is its requirement to build a new process from the ground up. The process development requires Subject Matter Expertise (SME) in the manufacturing method of the particular design. The SME must also have a comprehensive understanding of the MOST system so that the correct parameters are chosen. In practice, these knowledge requirements may demand people from outside the design discipline and a priori training in MOST. To relieve this constraint, this study includes an entirely new sub-system architecture

  11. Areal rainfall estimation using moving cars - computer experiments including hydrological modeling

    OpenAIRE

    Rabiei, Ehsan; Haberlandt, Uwe; Sester, Monika; Fitzner, Daniel; Wallner, Markus

    2016-01-01

    The need for precipitation data with high temporal and spatial resolution in hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rainfall amounts...

  12. The Development of Multi-Agent System of State Estimation of Electric Power Systems Using Event Models

    Directory of Open Access Journals (Sweden)

    L. V. Massel

    2015-01-01

    The objective of this work is to offer a methodological approach to the development of multi-agent systems (MAS) in the energy sector. The agent approach is declared an integral part of the Smart Grid (intelligent energy systems) concept, but so far there has been neither real methodological development nor implementation in this area. The problem of estimating the states of electric power systems (EPS) is one of the most important in the energy sector. Decentralizing the calculations when estimating EPS states reduces the load on the host control center and the amount of data transferred through the network. To achieve this aim, the theory and methods of EPS state estimation, artificial intelligence techniques, object design and programming methods, multi-agent technologies, and algebraic Joiner-networks were used. The work analyzes existing agent-based solutions, reveals their weaknesses, and proposes the authors' approach to MAS development in the energy sector, which includes 5 steps: (1) description of the future system; (2) construction and description of the agent-based scenarios; (3) MAS architecture development; (4) MAS engineering design; (5) MAS implementation. The novelty of the proposed approach lies in the introduction of agent interaction scenarios and the application of Joiner-networks for scripting them. Agent-based scenarios allow non-programmer experts to change the programme algorithm. A Joiner-network of a scenario consists of the functioning processes of agents (nodes) and the events that trigger or end a process; the output event of one process can be the input event of another. The operating algorithm of the EPS estimation system is developed. The first step is to decompose the nodalization diagram into areas corresponding to the levels of node voltages. The diagrams resulting from decomposition are then distributed among the agents of EPS estimation and calculated in parallel. At the next stage, all diagrams are

  13. Sparse Channel Estimation Including the Impact of the Transceiver Filters with Application to OFDM

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Pedersen, Niels Lovmand; Manchón, Carles Navarro

    2014-01-01

    Traditionally, the dictionary matrices used in sparse wireless channel estimation have been based on the discrete Fourier transform, following the assumption that the channel frequency response (CFR) can be approximated as a linear combination of a small number of multipath components, each one......) and receive (demodulation) filters. Hence, the assumption of the CFR being sparse in the canonical Fourier dictionary may no longer hold. In this work, we derive a signal model and subsequently a novel dictionary matrix for sparse estimation that account for the impact of transceiver filters. Numerical...... results obtained in an OFDM transmission scenario demonstrate the superior accuracy of a sparse estimator that uses our proposed dictionary rather than the classical Fourier dictionary, and its robustness against a mismatch in the assumed transmit filter characteristics....

  14. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    Science.gov (United States)

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign in industry, the first task is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign, using quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time and its achievement probability for an efficient industrial environment. In this paper, the MFC (Microsoft Foundation Class) library of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide key information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
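
    Of the listed methods, double exponential smoothing is compact enough to sketch: it tracks a level and a trend, and the estimated zero accident time is the extrapolation horizon at which the trend drives the forecast to zero. The accident-rate series and smoothing constants below are hypothetical:

```python
def holt(series, alpha=0.5, beta=0.3):
    """Double (Holt) exponential smoothing; returns final level and trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level, trend

# Hypothetical yearly industrial accident rates (per 100 workers).
rates = [1.20, 1.05, 0.95, 0.82, 0.71, 0.60]
level, trend = holt(rates)

# Estimated zero accident time: first forecast horizon at or below zero
# (only meaningful while the fitted trend is negative).
h = 1
while trend < 0 and level + h * trend > 0:
    h += 1
print(f"forecast reaches zero about {h} years ahead (trend {trend:.3f}/yr)")
```

    In the program described above, the achievement probability would further qualify such a point estimate.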

  15. High water-stressed population estimated by world water resources assessment including human activities under SRES scenarios

    Science.gov (United States)

    Kiguchi, M.; Shen, Y.; Kanae, S.; Oki, T.

    2009-04-01

    In discussions of climate change mitigation and adaptation, evaluating the impacts of climate change is important, particularly when weighing the scale of damage against the cost of adaptation. Parry et al. (2001) evaluated the risks of water shortage, malaria, food shortage and coastal flooding as functions of temperature and clarified the level of critical climate change. According to their evaluation, the population affected by water shortage increases suddenly as warming rises from 1.5 to 2.0 degrees in the 2080s. They showed how much emissions would need to be reduced in order to significantly draw down the number at risk. This evaluation of critical climate change threats and targets for water shortage did not consider the ratio of water withdrawal to water availability. Shen et al. (2008a) estimated future world water withdrawal according to the socio-economic driving factors predicted for scenarios A1b, A2, B1, and B2 of the Special Report on Emissions Scenarios (SRES). However, those results were a function of time rather than temperature. An assessment of the highly water-stressed population that accounts for socioeconomic development is therefore needed as a function of temperature, because this makes the required emission reductions easy to understand. We present a multi-GCM analysis of the global and regional populations living in highly water-stressed basins as a function of temperature, using socioeconomic data and the outputs of GCMs. In scenario A2, this population increases gradually with warming. In contrast, the projected populations in scenarios A1b and B1 increase gradually until the temperature anomaly exceeds roughly +1 to +1.5 degrees, after which the population is almost constant. Following Shen et al. (2008b), we evaluated the highly water-stressed population (HWSP) and its ratio worldwide as a function of temperature for scenarios A1B, A2, and B1 by the index of W

  16. Global anthropogenic emissions of particulate matter including black carbon

    Science.gov (United States)

    Klimont, Zbigniew; Kupiainen, Kaarle; Heyes, Chris; Purohit, Pallav; Cofala, Janusz; Rafaj, Peter; Borken-Kleefeld, Jens; Schöpp, Wolfgang

    2017-07-01

    This paper presents a comprehensive assessment of historical (1990-2010) global anthropogenic particulate matter (PM) emissions including the consistent and harmonized calculation of mass-based size distribution (PM1, PM2.5, PM10), as well as primary carbonaceous aerosols including black carbon (BC) and organic carbon (OC). The estimates were developed with the integrated assessment model GAINS, where source- and region-specific technology characteristics are explicitly included. This assessment includes a number of previously unaccounted or often misallocated emission sources, i.e. kerosene lamps, gas flaring, diesel generators, refuse burning; some of them were reported in the past for selected regions or in the context of a particular pollutant or sector but not included as part of a total estimate. Spatially, emissions were calculated for 172 source regions (as well as international shipping), presented for 25 global regions, and allocated to 0.5° × 0.5° longitude-latitude grids. No independent estimates of emissions from forest fires and savannah burning are provided and neither windblown dust nor unpaved roads emissions are included. We estimate that global emissions of PM have not changed significantly between 1990 and 2010, showing a strong decoupling from the global increase in energy consumption and, consequently, CO2 emissions, but there are significantly different regional trends, with a particularly strong increase in East Asia and Africa and a strong decline in Europe, North America, and the Pacific region. This in turn resulted in important changes in the spatial pattern of PM burden, e.g. European, North American, and Pacific contributions to global emissions dropped from nearly 30 % in 1990 to well below 15 % in 2010, while Asia's contribution grew from just over 50 % to nearly two-thirds of the global total in 2010. For all PM species considered, Asian sources represented over 60 % of the global anthropogenic total, and residential combustion

  17. Global anthropogenic emissions of particulate matter including black carbon

    Directory of Open Access Journals (Sweden)

    Z. Klimont

    2017-07-01

    Full Text Available This paper presents a comprehensive assessment of historical (1990–2010) global anthropogenic particulate matter (PM) emissions including the consistent and harmonized calculation of mass-based size distribution (PM1, PM2.5, PM10), as well as primary carbonaceous aerosols including black carbon (BC) and organic carbon (OC). The estimates were developed with the integrated assessment model GAINS, where source- and region-specific technology characteristics are explicitly included. This assessment includes a number of previously unaccounted or often misallocated emission sources, i.e. kerosene lamps, gas flaring, diesel generators, refuse burning; some of them were reported in the past for selected regions or in the context of a particular pollutant or sector but not included as part of a total estimate. Spatially, emissions were calculated for 172 source regions (as well as international shipping), presented for 25 global regions, and allocated to 0.5° × 0.5° longitude–latitude grids. No independent estimates of emissions from forest fires and savannah burning are provided and neither windblown dust nor unpaved roads emissions are included. We estimate that global emissions of PM have not changed significantly between 1990 and 2010, showing a strong decoupling from the global increase in energy consumption and, consequently, CO2 emissions, but there are significantly different regional trends, with a particularly strong increase in East Asia and Africa and a strong decline in Europe, North America, and the Pacific region. This in turn resulted in important changes in the spatial pattern of PM burden, e.g. European, North American, and Pacific contributions to global emissions dropped from nearly 30 % in 1990 to well below 15 % in 2010, while Asia's contribution grew from just over 50 % to nearly two-thirds of the global total in 2010. For all PM species considered, Asian sources represented over 60 % of the global

  18. The Estimation Of The Regions’ Efficiency Of The Russian Federation Including The Intellectual Capital, The Characteristics Of Readiness For Innovation, Level Of Well-Being, And Quality Of Life

    Directory of Open Access Journals (Sweden)

    Valeriy Leonidovich Makarov

    2014-12-01

    Full Text Available On the basis of the authors’ methodology, models of the productive potential of the Russian Federation regions, including estimates of intellectual capital, were constructed. It is shown that characteristics of the level of well-being and quality of life have a significant impact on regional production efficiency. The characteristics of regions’ readiness to innovate are identified and can be regarded as a factor of production efficiency. It is shown that including different efficiency factors in the production potential model can significantly increase the differentiation of technical efficiency estimates, and that these estimates and their rankings depend on the chosen set of efficiency factors. By comparing actual GRP with frontier GRP estimates, locally effective regions (with relatively high efficiency estimates among regions with similar GRP) and locally ineffective regions are identified. The marginal effects of the efficiency factors on regional industrial output are calculated. These estimates can be used constructively when analyzing the prospects for regional development, based on the possibility of targeted influence on controllable efficiency factors. The article also offers a methodology for estimating the efficiency of public policy on knowledge economy formation: an agent-based model for Russia that studies the “knowledge economy” sector and its relationship with the rest of the macroeconomic system.

  19. Development of an integrated system for estimating human error probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The main objective of this project was the development of a knowledge-based Human Reliability Analysis (HRA) expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase in which the expertise, i.e., the procedural rules and data, was extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.
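    The rule-based estimation idea described above can be sketched as a toy lookup of base human error probabilities (HEPs) modified by performance-shaping factors. All values and factor names below are invented placeholders, not those of THERP or the LANL system:

```python
# Toy sketch of a rule-based HEP module: a base human error probability per
# task type, multiplied by performance-shaping factors. Values are invented.
BASE_HEP = {"routine": 1e-3, "procedure_step": 3e-3, "diagnosis": 1e-2}
PSF = {"high_stress": 5.0, "poor_interface": 3.0, "experienced_crew": 0.5}

def estimate_hep(task_type, factors):
    """Return an HEP, capped at 1.0, for a task under the given factors."""
    hep = BASE_HEP[task_type]
    for f in factors:
        hep *= PSF[f]
    return min(hep, 1.0)

print(estimate_hep("diagnosis", ["high_stress", "experienced_crew"]))  # 0.025
```

A real HRA expert system would draw these numbers and rules from the compiled technique databases rather than hard-coded tables.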

  20. Health effects estimation code development for accident consequence analysis

    International Nuclear Information System (INIS)

    Togawa, O.; Homma, T.

    1992-01-01

    As part of a computer code system for nuclear reactor accident consequence analysis, two computer codes have been developed for estimating health effects expected to occur following an accident. Health effects models used in the codes are based on the models of NUREG/CR-4214 and are revised for the Japanese population on the basis of the data from the reassessment of the radiation dosimetry and information derived from epidemiological studies on atomic bomb survivors of Hiroshima and Nagasaki. The health effects models include early and continuing effects, late somatic effects and genetic effects. The values of some model parameters are revised for early mortality. The models are modified for predicting late somatic effects such as leukemia and various kinds of cancers. The models for genetic effects are the same as those of NUREG. In order to test the performance of one of these codes, it is applied to the U.S. and Japanese populations. This paper provides descriptions of health effects models used in the two codes and gives comparisons of the mortality risks from each type of cancer for the two populations. (author)

  1. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target, which aims to increase estimation accuracy and decrease calculation costs, has been developed. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation; DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.
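    For illustration, a minimal standard (single-resolution) ESPRIT estimator for a uniform linear array can be sketched as follows; the TS-ESPRIT variant described above adds time/space multiresolution, which is not reproduced here, and the array geometry, source angles, and noise level are invented for the demo:

```python
import numpy as np

def esprit_doa(X, n_sources, d_over_lambda=0.5):
    """Estimate DOAs (degrees) from a snapshot matrix X (sensors x snapshots)."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    Es = eigvecs[:, -n_sources:]             # signal subspace
    # Rotational invariance between the two overlapping subarrays
    phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    omegas = np.angle(np.linalg.eigvals(phi))
    return np.degrees(np.arcsin(omegas / (2 * np.pi * d_over_lambda)))

# Simulate a 10-element half-wavelength ULA observing two narrowband sources
rng = np.random.default_rng(0)
m, n_snap = 10, 500
true_doas = np.array([-20.0, 35.0])
k = np.arange(m)[:, None]
A = np.exp(2j * np.pi * 0.5 * k * np.sin(np.radians(true_doas)))  # steering matrix
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
est = np.sort(esprit_doa(A @ S + N, 2))
print(est)  # close to [-20, 35]
```

At this signal-to-noise ratio the recovered angles land within a fraction of a degree of the true DOAs.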

  2. Procedure for estimating permanent total enclosure costs

    Energy Technology Data Exchange (ETDEWEB)

    Lukey, M.E.; Prasad, C.; Toothman, D.A.; Kaplan, N.

    1999-07-01

    Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA, by Pacific Environmental Services, Inc. (PES) and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.
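    The five-point verification step might be sketched as a simple checklist function. The thresholds below follow the commonly cited EPA Method 204 criteria (natural draft openings at least four equivalent diameters from emission points, total NDO area under 5 % of the enclosure surface area, average face velocity of at least 3600 m/h into the enclosure), but they are quoted from memory here; the Manual and Method 204 remain the authoritative source:

```python
# Hypothetical sketch of a PTE verification check against the five-point
# criteria. Threshold values are assumptions taken from the commonly cited
# EPA Method 204 figures; verify against the method before any real use.
def verify_pte(ndo_distance_m, equiv_diameter_m, ndo_area_m2,
               enclosure_area_m2, face_velocity_m_per_h, flow_inward,
               emissions_routed_to_control):
    checks = {
        "NDO >= 4 equivalent diameters from emission points":
            ndo_distance_m >= 4 * equiv_diameter_m,
        "Total NDO area < 5% of enclosure surface area":
            ndo_area_m2 < 0.05 * enclosure_area_m2,
        "Average face velocity >= 3600 m/h (200 fpm)":
            face_velocity_m_per_h >= 3600,
        "Air flow through all NDOs is into the enclosure": flow_inward,
        "All exhaust routed to the control device": emissions_routed_to_control,
    }
    return all(checks.values()), checks

ok, detail = verify_pte(ndo_distance_m=2.5, equiv_diameter_m=0.5,
                        ndo_area_m2=1.2, enclosure_area_m2=400.0,
                        face_velocity_m_per_h=4100.0, flow_inward=True,
                        emissions_routed_to_control=True)
print(ok)  # True for this example enclosure
```

The spreadsheet methodology described in the paper additionally sizes fans and ductwork and derives capital and annual costs from these same design inputs.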

  4. Innovative Methods for Estimating Densities and Detection Probabilities of Secretive Reptiles Including Invasive Constrictors and Rare Upland Snakes

    Science.gov (United States)

    2018-01-30

    home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example...radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern...secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial

  5. EVAPORATION: a new vapour pressure estimation method for organic molecules including non-additivity and intramolecular interactions

    Directory of Open Access Journals (Sweden)

    S. Compernolle

    2011-09-01

    Full Text Available We present EVAPORATION (Estimation of VApour Pressure of ORganics, Accounting for Temperature, Intramolecular, and Non-additivity effects), a method to predict (subcooled) liquid pure compound vapour pressure p0 of organic molecules that requires only molecular structure as input. The method is applicable to zero-, mono- and polyfunctional molecules. A simple formula to describe log10 p0(T) is employed, that takes into account both a wide temperature dependence and the non-additivity of functional groups. In order to match the recent data on functionalised diacids an empirical modification to the method was introduced. Contributions due to carbon skeleton, functional groups, and intramolecular interaction between groups are included. Molecules typically originating from oxidation of biogenic molecules are within the scope of this method: aldehydes, ketones, alcohols, ethers, esters, nitrates, acids, peroxides, hydroperoxides, peroxy acyl nitrates and peracids. Therefore the method is especially suited to describe compounds forming secondary organic aerosol (SOA).
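    As a sketch of the group-contribution idea, the additive scheme below assumes a temperature function of the form log10 p0(T) = A + B/T^1.5, which we believe matches the paper's simple formula; treat the exact form, and especially the group values, as assumptions invented for demonstration rather than the published EVAPORATION parameters:

```python
# Illustrative group-contribution vapour pressure sketch. Assumed functional
# form: log10 p0(T) = A + B / T**1.5, with A and B assembled additively from
# group counts. All parameter values below are invented placeholders.
def log10_p0(groups, T, A_contrib, B_contrib):
    """Estimate log10 of the vapour pressure from a dict of group counts."""
    A = sum(n * A_contrib[g] for g, n in groups.items())
    B = sum(n * B_contrib[g] for g, n in groups.items())
    return A + B / T ** 1.5

# Invented demonstration parameters (NOT the published EVAPORATION values)
A_demo = {"CH3": 0.3, "CH2": 0.06, "OH": -1.0}
B_demo = {"CH3": -300.0, "CH2": -250.0, "OH": -900.0}

# 1-butanol sketched as CH3 + 3 x CH2 + OH at 298.15 K
val = log10_p0({"CH3": 1, "CH2": 3, "OH": 1}, 298.15, A_demo, B_demo)
print(round(val, 2))
```

The real method adds non-additivity corrections and intramolecular interaction terms on top of this purely additive baseline.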

  6. Development of a Disaggregation Framework toward the Estimation of Subdaily Reference Evapotranspiration: 2- Estimation of Subdaily Reference Evapotranspiration Using Disaggregated Weather Data

    Directory of Open Access Journals (Sweden)

    F. Parchami Araghi

    2016-09-01

    Full Text Available Introduction: Subdaily estimates of reference evapotranspiration (ETo) are needed in many applications such as dynamic agro-hydrological modeling. However, in many regions, the lack of subdaily weather data availability has hampered the efforts to quantify the subdaily ETo. In the first presented paper, a physically based framework was developed to disaggregate daily weather data needed for estimation of subdaily ETo, including air temperature, wind speed, dew point, actual vapour pressure, relative humidity, and solar radiation. The main purpose of this study was to estimate the subdaily ETo using disaggregated daily data derived from the disaggregation framework developed in the first presented paper. Materials and Methods: Subdaily ETo estimates were made using the ASCE and FAO-56 Penman–Monteith models (ASCE-PM and FAO56-PM, respectively) and subdaily weather data derived from the developed daily-to-subdaily weather data disaggregation framework. To this end, long-term daily weather data obtained from the Abadan (59 years) and Ahvaz (50 years) synoptic weather stations were collected. Sensitivity analysis of the Penman–Monteith model to the different meteorological variables (including daily air temperature, wind speed at 2 m height, actual vapor pressure, and solar radiation) was carried out using partial derivatives of the Penman–Monteith equation. The capability of the two models for retrieving the daily ETo was evaluated using the root mean square error RMSE (mm), the mean error ME (mm), the mean absolute error MAE (mm), the Pearson correlation coefficient r (-), and the Nash–Sutcliffe model efficiency coefficient EF (-). Different contributions to the overall error were decomposed using a regression-based method. Results and Discussion: The results of the sensitivity analysis showed that the daily air temperature and the actual vapor pressure are the most significant meteorological variables affecting the ETo estimates. In contrast, low sensitivity
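    The daily FAO-56 Penman–Monteith equation at the core of such ETo estimates can be sketched directly (the study applies the hourly form to disaggregated data; the weather values below describe an illustrative temperate day, not the Abadan or Ahvaz records):

```python
import math

# Daily FAO-56 Penman-Monteith reference ET (mm/day). For hourly data the
# 900 coefficient becomes 37 and Rn, G are per-hour values.
def eto_fao56(T, Rn, G, u2, es, ea, P=101.3):
    """T in deg C, Rn and G in MJ m-2 day-1, u2 in m/s, es/ea/P in kPa."""
    delta = 4098 * 0.6108 * math.exp(17.27 * T / (T + 237.3)) / (T + 237.3) ** 2
    gamma = 0.000665 * P                      # psychrometric constant, kPa/C
    num = 0.408 * delta * (Rn - G) + gamma * 900 / (T + 273) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

# Illustrative mid-latitude day (invented inputs, not station data)
eto = eto_fao56(T=16.9, Rn=13.28, G=0.14, u2=2.078, es=1.997, ea=1.409)
print(round(eto, 2))  # about 3.8 mm/day
```

The sensitivity analysis in the paper amounts to differentiating this expression with respect to T, u2, ea, and Rn.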

  7. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    Science.gov (United States)

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC50, Daphnia magna 48-h LC50, oral rat LD50, aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic), have been modeled and analyzed. The application
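    The parameter-estimation-plus-uncertainty step can be sketched generically with linear least squares: fit group contributions, then derive the parameter covariance, standard errors, and approximate confidence intervals. The data below are synthetic, not the EPA or USEtox datasets, and the normal-approximation interval stands in for the t-based intervals a careful analysis would use:

```python
import numpy as np

# Synthetic demonstration: y = X @ theta + noise, where each row of X holds
# the group counts of one "chemical" and theta holds group contributions.
rng = np.random.default_rng(1)
n_obs, n_groups = 200, 3
X = rng.integers(0, 5, size=(n_obs, n_groups)).astype(float)  # group counts
true_theta = np.array([1.5, -0.7, 0.3])
y = X @ true_theta + rng.normal(0, 0.1, n_obs)                # measured property

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n_obs - n_groups
sigma2 = np.sum((y - X @ theta) ** 2) / dof                   # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)                         # parameter covariance
se = np.sqrt(np.diag(cov))                                    # standard errors
ci_low, ci_high = theta - 1.96 * se, theta + 1.96 * se        # ~95% CI (normal approx.)
print(theta.round(2), se.round(3))
```

The same covariance matrix also propagates into standard errors of predicted property values for new molecules.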

  8. Development of software and modification of Q-FISH protocol for estimation of individual telomere length in immunopathology.

    Science.gov (United States)

    Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A

    2017-04-01

    Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of an immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells followed by digital image microscopy allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and correction of nonuniform light effects. The application of the developed software for analysis of telomere length in patients with rheumatoid arthritis was demonstrated.

  9. MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS,

    Science.gov (United States)

    developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to...determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on

  10. Development of a practical modeling framework for estimating the impact of wind technology on bird populations

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, M.L. [California State Univ., Sacramento, CA (United States); Pollock, K.H. [North Carolina State Univ., Raleigh, NC (United States)

    1997-11-01

    One of the most pressing environmental concerns related to wind project development is the potential for avian fatalities caused by the turbines. The goal of this project is to develop a useful, practical modeling framework for evaluating potential wind power plant impacts that can be generalized to most bird species. This modeling framework could be used to get a preliminary understanding of the likelihood of significant impacts to birds, in a cost-effective way. The authors accomplish this by (1) reviewing the major factors that can influence the persistence of a wild population; (2) briefly reviewing various models that can aid in estimating population status and trend, including methods of evaluating model structure and performance; (3) reviewing survivorship and population projections; and (4) developing a framework for using models to evaluate the potential impacts of wind development on birds.

  12. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
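    A stripped-down illustration of conveying estimate uncertainty through simulation follows; the paper uses a detailed software-process model, whereas this toy version merely samples per-phase effort from triangular distributions and reports a distribution rather than a point estimate (all effort figures are invented):

```python
import numpy as np

# Monte Carlo sketch: sample per-phase effort from (optimistic, most likely,
# pessimistic) triangular distributions and report percentiles of the total,
# so the estimate carries its uncertainty instead of a single number.
rng = np.random.default_rng(42)
phases = {  # person-months, invented for illustration
    "requirements": (4, 6, 12),
    "design": (6, 9, 18),
    "code_and_test": (15, 24, 48),
    "integration": (5, 8, 20),
}
n = 100_000
total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in phases.values())
p10, p50, p90 = np.percentile(total, [10, 50, 90])
print(f"P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f} person-months")
```

Reporting the P10/P50/P90 spread makes visible how wide an "initial estimate" really is early in a project.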

  13. Estimating oxygen needs for childhood pneumonia in developing country health systems: a new model for expecting the unexpected.

    Directory of Open Access Journals (Sweden)

    Beverly D Bradley

    Full Text Available Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or 'demand' for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of
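    The simulation idea can be sketched in a simplified, day-stepped form using the same five factors: seasonally varying admissions, a hypoxaemic fraction, a treatment duration, and a fixed flow rate. All parameter values are illustrative, not those of the study:

```python
import numpy as np

# Simplified daily simulation of oxygen demand for childhood pneumonia.
# Parameters are invented placeholders for the paper's five demand factors.
rng = np.random.default_rng(7)
days = 365
admissions_per_day = 4.0        # annual mean admission rate
seasonality = 0.5               # relative amplitude of the seasonal cycle
p_hypox = 0.3                   # hypoxaemia prevalence
stay_days = 3                   # treatment duration
flow_lpm = 1.0                  # oxygen flow per patient, litres/minute

t = np.arange(days)
daily_rate = admissions_per_day * (1 + seasonality * np.sin(2 * np.pi * t / 365))
new_o2 = rng.binomial(rng.poisson(daily_rate), p_hypox)     # new O2 patients/day
load = np.convolve(new_o2, np.ones(stay_days))[:days]       # concurrent patients
demand_l_per_day = load * flow_lpm * 60 * 24
avg = demand_l_per_day.mean()
print(f"peak/average demand ratio: {demand_l_per_day.max() / avg:.2f}")
print(f"fraction of days above average: {(demand_l_per_day > avg).mean():.2f}")
```

Even this toy version shows the paper's central point: peak demand sits well above the annual average, so average-based sizing under-provisions part of the year.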

  14. Estimation of baseline lifetime risk of developed cancer related to radiation exposure in China

    International Nuclear Information System (INIS)

    Li Xiaoliang; Niu Haowei; Sun Quanfu; Ma Weidong

    2011-01-01

    Objective: To introduce the general international method for estimating the lifetime risk of developing cancer, and to estimate baseline lifetime risk values for several kinds of cancer related to radiation exposure in China. Methods: The risk estimation was based on the data from the Chinese Cancer Registry Annual Report (2010) and the China Population and Employment Statistics Yearbook (2009), and was made according to the method previously published by the National Cancer Institute (NCI) in the USA. Results: The lifetime risk of all cancer in China in 2007 was estimated to be 27.77%, that of lung cancer 5.96%, that of breast cancer for females 3.34%, that of all leukemia 0.14%, and that of thyroid cancer 0.37%. The lifetime risks of all cancer were estimated to be 32.74% for males and 24.73% for females, and 36.47% for urban residents and 26.79% for rural people. Conclusions: The lifetime risk of all cancer for males in 2007 was about 1.25 times that for females. The value for urban residents was about 1.35 times that for rural residents. The lifetime risk of developing cancer in 2007 in China is lower than that in developed countries, such as Japan. (authors)
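    A minimal cumulative-risk calculation from age-specific incidence rates can be sketched as below; note that the NCI/DevCan-style method used in the study additionally adjusts for competing causes of death, which this approximation omits, and the rates are invented:

```python
import math

# Cumulative lifetime risk from age-specific incidence rates
# (per 100 000 person-years; invented values, no competing-risk adjustment).
age_bands = [  # (age_from, age_to, incidence rate per 100 000)
    (0, 20, 10.0),
    (20, 40, 40.0),
    (40, 60, 250.0),
    (60, 80, 900.0),
]
cum = sum((hi - lo) * rate / 100_000 for lo, hi, rate in age_bands)
lifetime_risk = 1 - math.exp(-cum)   # probability of developing cancer by age 80
print(f"{lifetime_risk:.1%}")
```

The competing-mortality adjustment matters at older ages, where death from other causes removes people from the at-risk population and lowers the true lifetime risk below this simple cumulative figure.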

  15. Third molar development: evaluation of nine tooth development registration techniques for age estimations.

    Science.gov (United States)

    Thevissen, Patrick W; Fieuws, Steffen; Willems, Guy

    2013-03-01

    Multiple third molar development registration techniques exist. Therefore the aim of this study was to detect which third molar development registration technique was most promising to use as a tool for subadult age estimation. On a collection of 1199 panoramic radiographs the development of all present third molars was registered following nine different registration techniques [Gleiser, Hunt (GH); Haavikko (HV); Demirjian (DM); Raungpaka (RA); Gustafson, Koch (GK); Harris, Nortje (HN); Kullman (KU); Moorrees (MO); Cameriere (CA)]. Regression models with age as response and the third molar registration as predictor were developed for each registration technique separately. The MO technique showed the highest R² (F 51%, M 45%) and lowest root mean squared error (F 3.42 years; M 3.67 years) values, but differences with the other techniques were small in magnitude. The number of stages utilized in the explored staging techniques slightly influenced the age predictions. © 2013 American Academy of Forensic Sciences.
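    The per-technique comparison step (regress age on a staging score, then report R² and RMSE) can be sketched with synthetic data; the staging scheme and the age-stage relation below are invented, not the study's data:

```python
import numpy as np

# Synthetic version of the evaluation: fit age ~ stage, report R^2 and RMSE.
rng = np.random.default_rng(3)
stage = rng.integers(1, 9, 300).astype(float)       # an invented 8-stage scheme
age = 14 + 1.1 * stage + rng.normal(0, 3.4, 300)    # invented age-stage relation

coef = np.polyfit(stage, age, 1)                    # linear regression
pred = np.polyval(coef, stage)
rmse = np.sqrt(np.mean((age - pred) ** 2))
r2 = 1 - np.sum((age - pred) ** 2) / np.sum((age - age.mean()) ** 2)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} years")
```

Running the same fit once per staging technique and comparing R² and RMSE is exactly the ranking exercise the abstract describes.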

  16. Development of a Job-Exposure Matrix (AsbJEM) to Estimate Occupational Exposure to Asbestos in Australia.

    Science.gov (United States)

    van Oyen, Svein C; Peters, Susan; Alfonso, Helman; Fritschi, Lin; de Klerk, Nicholas H; Reid, Alison; Franklin, Peter; Gordon, Len; Benke, Geza; Musk, Arthur W

    2015-07-01

    Occupational exposure data on asbestos are limited and poorly integrated in Australia, so that estimates of disease risk and attribution of disease causation are usually calculated from data that are not specific for local conditions. To develop a job-exposure matrix (AsbJEM) to estimate occupational asbestos exposure levels in Australia, making optimal use of the available exposure data. A dossier of all available exposure data in Australia and information on industry practices and controls was provided to an expert panel consisting of three local industrial hygienists with thorough knowledge of local and international work practices. The expert panel estimated asbestos exposures for combinations of occupation, industry, and time period. Intensity and frequency grades were estimated to enable the calculation of annual exposure levels for each occupation-industry combination for each time period. Two indicators of asbestos exposure intensity (mode and peak) were used to account for different patterns of exposure between occupations. Additionally, the probable type of asbestos fibre was determined for each situation. Asbestos exposures were estimated for 537 combinations of 224 occupations and 60 industries for four time periods (1943-1966; 1967-1986; 1987-2003; ≥2004). Workers in the asbestos manufacturing, shipyard, and insulation industries were estimated to have had the highest average exposures. Up until 1986, 46 occupation-industry combinations were estimated to have had exposures exceeding the current Australian exposure standard of 0.1 f ml⁻¹. Over 90% of exposed occupations were considered to have had exposure to a mixture of asbestos varieties including crocidolite. The AsbJEM provides empirically based quantified estimates of asbestos exposure levels for Australian jobs since 1943. This exposure assessment application will contribute to improved understanding and prediction of asbestos-related diseases and attribution of disease causation.

  17. Development of software for estimating clear sky solar radiation in Indonesia

    Science.gov (United States)

    Ambarita, H.

    2017-01-01

    Research on solar energy applications in Indonesia has attracted increasing attention in recent years. Solar radiation is harvested by a solar collector or a solar cell, which converts the energy into useful heat or electricity. In order to provide a better configuration of a solar collector or a solar cell, clear sky radiation should be estimated properly. In this study, in-house software for estimating clear sky radiation is developed. The governing equations are solved simultaneously. The software is tested in Medan city by performing solar radiation measurements. For clear sky conditions, the software results and the measurements show good agreement. However, for cloudy sky conditions the software cannot predict the solar radiation. This software can be used to estimate the clear sky radiation in Indonesia.
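    A common simplified clear-sky model (FAO-56 style: extraterrestrial radiation Ra from date and latitude, then Rso = (0.75 + 2×10⁻⁵ z)·Ra) can be sketched as follows; the in-house software in the paper solves its own set of equations, and the Medan coordinates used here are approximate:

```python
import math

# Standard simplified clear-sky estimate (FAO-56 style), not the paper's model:
# Ra from day of year and latitude, then Rso = (0.75 + 2e-5 * elevation) * Ra.
def clear_sky_radiation(doy, lat_deg, elev_m):
    """Daily clear-sky solar radiation, MJ m-2 day-1."""
    phi = math.radians(lat_deg)
    dr = 1 + 0.033 * math.cos(2 * math.pi * doy / 365)        # earth-sun distance
    delta = 0.409 * math.sin(2 * math.pi * doy / 365 - 1.39)  # solar declination
    ws = math.acos(-math.tan(phi) * math.tan(delta))          # sunset hour angle
    Ra = (24 * 60 / math.pi) * 0.0820 * dr * (
        ws * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(ws))     # extraterrestrial
    return (0.75 + 2e-5 * elev_m) * Ra

# Medan, Indonesia lies near 3.6 N at roughly 25 m elevation (approximate)
rso = clear_sky_radiation(doy=80, lat_deg=3.6, elev_m=25)
print(round(rso, 1))
```

Near the equator around the equinox this yields on the order of 28 MJ m⁻² day⁻¹, the ceiling against which measured cloudy-day radiation falls short.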

  18. Human age estimation combining third molar and skeletal development.

    Science.gov (United States)

    Thevissen, P W; Kaur, J; Willems, G

    2012-03-01

    The wide prediction intervals obtained with age estimation methods based on third molar development could be reduced by combining these dental observations with age-related skeletal information. Therefore, on cephalometric radiographs, the most accurate age-estimating skeletal variable and related registration method were sought and added to a regression model, with age as response and third molar stages as explanatory variable. In a pilot setup on a dataset of 496 (283 M; 213 F) cephalometric radiographs, the techniques of Baccetti et al. (2005) (BA), Seedat et al. (2005) (SE), Caldas et al. (2007) and Rai et al. (2008) (RA) were verified. In the main study, data from 460 (208 F, 224 M) individuals in an age range between 3 and 26 years, for whom an orthopantomogram and a cephalogram were taken on the same day, were collected. On the orthopantomograms, the left third molar development was registered using the scoring system described by Gleiser and Hunt (1955) and modified by Köhler (1994) (GH). On the cephalograms, cervical vertebrae development was registered according to the BA and SE techniques. A regression model, with age as response and the GH scores as explanatory variable, was fitted to the data. Next, information from BA, SE and BA + SE was, respectively, added to this model. For all obtained models, the determination coefficients and the root mean squared errors were calculated. Inclusion of information from cephalograms based on the BA, as well as the SE, technique improved the amount of explained variance in age obtained from panoramic radiographs using the GH technique by 48%. Inclusion of cephalometric BA + SE information marginally improved the previous result (+1%). The RMSE decreased by 1.93, 1.85 and 2.03 years on adding, respectively, BA, SE and BA + SE information to the GH model. The SE technique clinically allows the fastest and easiest registration of the degree of development of the cervical vertebrae. Therefore, the choice of

  19. Analysis of safety information for nuclear power plants and development of source term estimation program

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Choi, Seong Soo; Park, Jin Hee

    1999-12-01

    The current CARE (Computerized Advisory System for Radiological Emergency) at KINS (Korea Institute of Nuclear Safety) has no STES (Source Term Estimation System) linking SIDS (Safety Information Display System) with FADAS (Following Accident Dose Assessment System), so an STES is being developed in this study. The STES estimates the source term based on the safety information provided by SIDS; the estimated source term is then passed to FADAS as input for assessing the environmental effect of radiation. Through this first-year project, an STES for Kori 3,4 and Younggwang 1,2 has been developed. Since there is no CARE for the Wolsong (PHWR) plants yet, a CARE for Wolsong is under construction: the safety parameters have been selected, and the safety information display screens and the alarm logic for plant status changes have been developed for Wolsong Unit 2 based on the design documents for CANDU plants

  20. Areal rainfall estimation using moving cars – computer experiments including hydrological modeling

    OpenAIRE

    E. Rabiei; U. Haberlandt; M. Sester; D. Fitzner; M. Wallner

    2016-01-01

    The need for high temporal and spatial resolution precipitation data for hydrological analyses has been discussed in several studies. Although rain gauges provide valuable information, a very dense rain gauge network is costly. As a result, several new ideas have emerged to help estimate areal rainfall with higher temporal and spatial resolution. Rabiei et al. (2013) observed that moving cars, called RainCars (RCs), can potentially be a new source of data for measuring rai...

  1. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced when pieces become detached from structures of the reactor coolant system (RCS) or enter the RCS from outside during test operation, refueling, and overhaul. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of a loose part is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated with the proposed algorithm showed a 5 percent error relative to the real test data, and the estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data; the comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. This comparison also allows the integrity of the sensors and the monitoring system to be checked. (author)
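The report does not spell out the algorithm, so the following is a generic sketch of one standard approach to the impact-point half of the problem: locating an impact on a plate from time differences of arrival (TDOA) at several accelerometers. The sensor layout, wave speed, and grid search below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sensor layout on a flat plate (metres) and wave speed (m/s);
# both are placeholders, not values from the report.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
speed = 3000.0

def arrival_times(point):
    """Travel time from an impact point to each sensor."""
    return np.linalg.norm(sensors - point, axis=1) / speed

def locate(t_obs, grid_step=0.01):
    """Grid-search the impact point minimising arrival-time residuals.
    Differences are taken relative to the first sensor so the unknown
    absolute trigger time cancels out."""
    best, best_cost = None, np.inf
    for x in np.arange(0, 1 + grid_step, grid_step):
        for y in np.arange(0, 1 + grid_step, grid_step):
            t = arrival_times(np.array([x, y]))
            cost = np.sum(((t - t[0]) - (t_obs - t_obs[0])) ** 2)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

true_point = np.array([0.3, 0.7])
est = locate(arrival_times(true_point))
print(est)
```

Mass estimation would need an additional amplitude/frequency model (e.g. Hertzian impact theory) and is omitted here.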

  2. Intrapuparial Development of Sarconesia Chlorogaster (Diptera: Calliphoridae) for Postmortem Interval Estimation (PMI).

    Science.gov (United States)

    Flissak, J C; Moura, M O

    2018-02-28

    Sarconesia chlorogaster (Wiedemann) (Diptera: Calliphoridae) is an endemic blow fly species of forensic importance in South America, and whose duration of pupal development is about 70% of the total immature development time. Therefore, morphological changes during this stage, if refined, may provide greater accuracy and reliability in the calculation of minimum postmortem interval. Considering the importance of this species, the main objective of this work was to identify and describe temporal intrapuparial morphological changes of S. chlorogaster. The development of S. chlorogaster reared on an artificial diet and at two constant temperatures (20 and 25°C) was monitored. Every 8 h until the end of the pupal stage, 10 pupae were killed, fixed, and had their external morphology described and photographed. Of the 29 morphological characteristics described, 13 are potentially informative for estimating the age of S. chlorogaster. In general, body shape (presence or absence of tagmatization), general coloration, visible presence of the mouth hook (portion of the mandible), thoracic appendages, change in eye color, and bristle formation are the most useful characteristics for determining specific age. The results presented here make it possible to estimate the postmortem interval of a corpse using intrapuparial morphological characters, expanding one's ability to estimate postmortem interval.

  3. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and SPSS.

    Science.gov (United States)

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
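For readers without Stata or SPSS, the core of robust variance estimation can be sketched directly: fit a weighted meta-regression, then build a cluster-robust (sandwich) covariance matrix that sums score contributions per study, so within-study dependence is absorbed empirically. This simplified sketch uses approximately inverse-variance weights and omits the small-sample corrections the macros apply; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# 10 studies contributing 3 dependent effect sizes each (simulated).
studies = np.repeat(np.arange(10), 3)
x = rng.normal(size=30)                        # one moderator
effect = (0.2 + 0.5 * x
          + rng.normal(0, 0.2, 10)[studies]    # shared study effect
          + rng.normal(0, 0.1, 30))            # effect-level noise
v = np.full(30, 0.04)                          # known sampling variances
w = 1 / v

X = np.column_stack([np.ones(30), x])
W = np.diag(w)
bread = np.linalg.inv(X.T @ W @ X)
beta = bread @ X.T @ W @ effect                # WLS meta-regression fit
resid = effect - X @ beta

# "Meat": outer products of per-study score vectors, so correlated
# residuals within a study are handled without modelling them.
meat = np.zeros((2, 2))
for j in np.unique(studies):
    m = studies == j
    g = X[m].T @ (w[m] * resid[m])
    meat += np.outer(g, g)

vcov_robust = bread @ meat @ bread
se = np.sqrt(np.diag(vcov_robust))
print(beta, se)
```

The robust standard errors are typically larger than naive model-based ones when effects within a study are positively correlated.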

  4. Estimation development cost, study case: Quality Management System Reactor TRIGA Mark III

    International Nuclear Information System (INIS)

    Antúnez Barbosa, Tereso Antonio; Valdovinos Rosas, Rosa María; Marcial Romero, José Raymundo; Ramos Corchado, Marco Antonio; Edgar Herrera Arriaga

    2016-01-01

    The process of estimating costs in software engineering is not a simple task; it must be addressed carefully to obtain an efficient strategy for solving problems associated with the effort, cost and time of the activities performed in developing an information system. In this context the main concern for both developers and customers is cost: developers worry about what the effort will cost, while customers worry about what the product will cost. In other fields the cost of goods depends on the activity or process performed, from which one can deduce that the main cost driver of the final product of a software development project is undoubtedly its size. In this paper a comparative study of common cost estimation models is developed. These models are used today to create a structured analysis that provides the necessary information about cost, time and effort for decision making in a software development project. Finally the models are applied to a case study, a system called Monitorizacion Automatica del Sistema de Gestion de Calidad del Reactor TRIGA Mark III. (author)

  5. Estimation of total Effort and Effort Elapsed in Each Step of Software Development Using Optimal Bayesian Belief Network

    Directory of Open Access Journals (Sweden)

    Fatemeh Zare Baghiabad

    2017-09-01

    The difficulty of accurately estimating the effort needed for software development makes software effort estimation a challenging issue. Besides estimating total effort, determining the effort elapsed in each software development step is very important, because mistakes in enterprise resource planning can lead to project failure. In this paper, a Bayesian belief network is proposed based on effective components and the software development process. In this model, feedback loops are considered between development steps, with return rates that differ for each project. The different return rates help determine the percentage of effort elapsed in each software development step distinctly. Moreover, the error of the optimized effort estimation is measured and the optimal coefficients to modify the model are sought. Comparison between the proposed model and other models showed that it estimates total effort with high accuracy (a marginal error of about 0.114) and also estimates the effort elapsed in each software development step.
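The feedback-loop idea can be made concrete with a small absorbing-Markov-chain calculation (a simplification of the paper's Bayesian belief network): if each step can send work back to the previous step with some return rate, the expected number of passes through each step, and hence the effort split, follows from the fundamental matrix. The return rates below are hypothetical.

```python
import numpy as np

# Steps: requirements -> design -> coding -> testing -> done.
# r[i] = probability that step i sends work back to the previous step
# (hypothetical return rates; the paper estimates these per project).
r = np.array([0.0, 0.2, 0.3, 0.4])

# Transient transition matrix Q among the four steps.
Q = np.zeros((4, 4))
for i, ri in enumerate(r):
    if i > 0:
        Q[i, i - 1] = ri          # rework: back one step
    if i < 3:
        Q[i, i + 1] = 1 - ri      # hand-off forward
# Step 3 exits to "done" with probability 1 - r[3] (absorbing state,
# not represented in Q).

# Fundamental matrix N = (I - Q)^-1; row 0 gives expected visits to
# each step for work entering at requirements.
N = np.linalg.inv(np.eye(4) - Q)
visits = N[0]
share = visits / visits.sum()
print(np.round(share, 3))         # effort share per step
```

Higher return rates inflate the expected passes through the looping steps, shifting the predicted effort share toward them.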

  6. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
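A simplified version of the idea can be sketched by clipping the eigenvalues of the sample covariance into an interval whose endpoints differ by a factor κ, which caps the condition number by construction. Won et al. select the lower endpoint by maximum likelihood; the simple rule below (lower endpoint = λ_max/κ) is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

p, n = 50, 25                          # "large p, small n"
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)            # rank-deficient, ill-conditioned

def cond_regularize(S, kappa=100.0):
    """Clip eigenvalues into [tau, kappa * tau] so cond <= kappa.
    tau is set by a simple rule here, not by maximum likelihood."""
    lam, U = np.linalg.eigh(S)
    tau = max(lam.max() / kappa, 1e-12)
    lam_clipped = np.clip(lam, tau, kappa * tau)
    return U @ np.diag(lam_clipped) @ U.T

S_reg = cond_regularize(S)
print(np.linalg.cond(S), np.linalg.cond(S_reg))
```

The regularized matrix is invertible and numerically safe for downstream uses such as portfolio optimization, at the price of shrinkage in the extreme eigenvalues.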

  7. Estimating carbon stock in secondary forests

    DEFF Research Database (Denmark)

    Breugel, Michiel van; Ransijn, Johannes; Craven, Dylan

    2011-01-01

    ... is the use of allometric regression models to convert forest inventory data to estimates of aboveground biomass (AGB). The use of allometric models implies decisions on the selection of extant models or the development of a local model, the predictor variables included in the selected model, and the number of trees and species for destructive biomass measurements. We assess uncertainties associated with these decisions using data from 94 secondary forest plots in central Panama and 244 harvested trees belonging to 26 locally abundant species. AGB estimates from species-specific models were used to assess relative errors of estimates from multispecies models. To reduce uncertainty in the estimation of plot AGB, including wood specific gravity (WSG) in the model was more important than the number of trees used for model fitting. However, decreasing the number of trees increased uncertainty of landscape...

  8. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
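An ideal twin experiment of the kind described can be reproduced in miniature: generate "pseudo" observations from a known parameter value, then recover it by minimising the model-observation misfit. A zero-dimensional deposition-erosion balance stands in for the full 3-D model, and a finite-difference gradient stands in for the adjoint; both substitutions are assumptions made for brevity.

```python
import numpy as np

def run_model(w_s, c0=0.2, erosion=5e-4, depth=5.0, dt=600.0, nsteps=144):
    """0-D suspended-sediment balance: constant erosion source minus
    deposition flux (w_s * C / depth). Returns the concentration series."""
    c, out = c0, []
    for _ in range(nsteps):
        c += dt * (erosion / depth - w_s * c / depth)
        out.append(c)
    return np.array(out)

w_true = 5e-4                       # settling velocity "truth" (m/s)
obs = run_model(w_true)             # pseudo-observations

def cost(w):
    """Model-observation misfit (the assimilation cost function)."""
    return np.sum((run_model(w) - obs) ** 2)

w = 2e-4                            # first guess
for _ in range(300):
    grad = (cost(w + 1e-8) - cost(w - 1e-8)) / 2e-8   # FD gradient
    w -= 1e-11 * grad                                  # fixed-step descent
print(w)                            # approaches w_true
```

In the real system the adjoint model supplies this gradient for all parameters at the cost of roughly one extra model run, which is what makes estimating many parameters feasible.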

  9. Project schedule and cost estimate report

    International Nuclear Information System (INIS)

    1988-03-01

    All cost tables represent obligation dollars, at both a constant FY 1987 level and an estimated escalation level, and are based on the FY 1989 DOE Congressional Budget submittal of December 1987. The cost tables display the total UMTRA Project estimated costs, which include both Federal and state funding. The Total Estimated Cost (TEC) for the UMTRA Project is approximately $992.5 million (in 1987 escalated dollars). Project schedules have been developed that provide for Project completion by September 1994, subject to Congressional approval extending DOE's authorization under Public Law 95-604. The report contains site-specific demographic data, conceptual design assumptions, preliminary cost estimates, and site schedules. A general project overview is also presented, which includes a discussion of the basis for the schedule and cost estimates, contingency assumptions, work breakdown structure, and potential project risks. The schedules and cost estimates will be revised as necessary to reflect appropriate decisions relating to relocation of certain tailings piles, or other special design considerations or circumstances (such as revised EPA groundwater standards), and changes in the Project mission. 27 figs, 97 tabs

  10. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    Science.gov (United States)

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to
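The Bayesian step can be illustrated with a toy grid posterior over age given a single observed stage, yielding a point estimate, a 95% prediction interval, and the probability of having attained age 18. The stage-given-age likelihood below is hypothetical and much cruder than the Fieuws et al. (2016) model, which also handles multiple correlated indicators and missing ones.

```python
import numpy as np

ages = np.linspace(14, 26, 1201)
prior = np.ones_like(ages) / ages.size          # uniform reference prior

def stage_likelihood(stage, age, n_stages=9):
    """P(stage | age): expected stage drifts upward with age, with
    Gaussian scatter. Entirely hypothetical parameter values."""
    expected = np.clip((age - 13) * n_stages / 11, 0, n_stages)
    return np.exp(-0.5 * ((stage - expected) / 1.0) ** 2)

observed_stage = 7
post = prior * stage_likelihood(observed_stage, ages)
post /= post.sum()

cdf = np.cumsum(post)
point = ages[np.argmax(post)]                   # posterior mode
lo = ages[np.searchsorted(cdf, 0.025)]
hi = ages[np.searchsorted(cdf, 0.975)]
p_adult = post[ages >= 18].sum()                # P(age >= 18 | stage)
print(f"age ~ {point:.1f} y, 95% PI [{lo:.1f}, {hi:.1f}], "
      f"P(adult) = {p_adult:.2f}")
```

Reporting P(age ≥ 18) directly, rather than only a point estimate, is what supports the minor/adult decision the abstract discusses.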

  11. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    Science.gov (United States)

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. In addition, to assess the accuracy of each method in estimating

  12. Strichartz estimates on $\alpha$-modulation spaces

    Directory of Open Access Journals (Sweden)

    Weichao Guo

    2013-05-01

    In this article, we consider some dispersive equations, including Schrödinger equations, nonelliptic Schrödinger equations, and wave equations, and develop some Strichartz estimates in the framework of α-modulation spaces.

  13. State energy data report 1994: Consumption estimates

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.

  14. State energy data report 1994: Consumption estimates

    International Nuclear Information System (INIS)

    1996-10-01

    This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included

  15. New developments in state estimation for Nonlinear Systems

    DEFF Research Database (Denmark)

    Nørgård, Peter Magnus; Poulsen, Niels Kjølstad; Ravn, Ole

    2000-01-01

    Based on an interpolation formula, accurate state estimators for nonlinear systems can be derived. The state estimators are based on polynomial approximations of the nonlinear mappings, obtained with a multidimensional interpolation formula, and do not require derivative information, which makes them simple to implement. They perform better than well-known estimators, such as the extended Kalman filter (EKF) and its higher-order relatives, in most practical applications.
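The flavour of such derivative-free estimators can be shown with a scalar first-order divided-difference filter: the familiar EKF recursion with the Jacobians replaced by central divided differences, so no analytic derivatives are needed. This is a simplified sketch for a scalar system, not the estimators of the paper.

```python
import numpy as np

def dd_filter(f, h, ys, x0, p0, q, r, step=np.sqrt(3.0)):
    """Scalar first-order divided-difference filter: an EKF-like
    recursion whose Jacobians are central divided differences spread
    over the current state uncertainty."""
    x, p = x0, p0
    estimates = []
    for y in ys:
        s = step * np.sqrt(p)
        F = (f(x + s) - f(x - s)) / (2 * s)   # slope of f around x
        x, p = f(x), F * F * p + q            # time update
        s = step * np.sqrt(p)
        H = (h(x + s) - h(x - s)) / (2 * s)   # slope of h around x
        k = p * H / (H * H * p + r)           # Kalman gain
        x, p = x + k * (y - h(x)), (1 - k * H) * p
        estimates.append(x)
    return np.array(estimates)

# Twin test on a mildly nonlinear system (illustrative dynamics).
rng = np.random.default_rng(3)
f = lambda x: 0.9 * x + 0.2 * np.sin(x)
h = lambda x: x + x ** 2 / 20
xs, x_true = [], 1.0
for _ in range(100):
    x_true = f(x_true) + rng.normal(0, 0.05)
    xs.append(x_true)
xs = np.array(xs)
ys = h(xs) + rng.normal(0, 0.1, size=100)

est = dd_filter(f, h, ys, x0=0.0, p0=1.0, q=0.05 ** 2, r=0.1 ** 2)
rmse = np.sqrt(np.mean((est - xs) ** 2))
print(rmse)
```

Because only function evaluations of f and h are needed, the same filter code works for models whose derivatives are awkward or unavailable.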

  16. Accuracy of dental development for estimating the pubertal growth spurt in comparison to skeletal development: a systematic review and meta-analysis.

    Science.gov (United States)

    Bittencourt, Marcos Alan Vieira; Cericato, Graziela Oro; Franco, Ademir; Girão, Rafaela Silva; Lima, Anderson Paulo Barbosa; Paranhos, Luiz Renato

    2018-05-01

    This study aimed to search for scientific evidence concerning the accuracy of dental development for estimating the pubertal growth spurt. It was conducted according to the statements of PRISMA. An electronic search was performed in six databases, including the grey literature. The PICOS strategy was used to define the eligibility criteria and only observational studies were selected. Out of 1,416 identified citations, 10 articles fulfilled the criteria and were included in this systematic review. The association between dental development and skeletal maturity was considered strong in seven studies, and moderate in two, although the association with the pubertal growth spurt had been verified in only four articles. According to half of the studies, the tooth that provided the greatest association with the ossification centres was the lower canine. The meta-analysis also indicated a positive association, which was stronger in females [0.725 (0.649-0.808)]. However, when the method used for dental evaluation was considered, greater correlation coefficients were found for Nolla [0.736 (0.666-0.814)] than for Demirjian [0.631 (0.450-0.884)] in the boys' sample. The heterogeneity test reached high values (Q = 51.00), suggesting a potential bias within the studies. Most individual studies suggested a strong correlation between dental development and skeletal maturation, although the association with the peak of the pubertal growth spurt was clearly cited in only some of them. However, due to the high heterogeneity found among the studies included in this meta-analysis, a pragmatic recommendation about the use of dental stages is not possible.
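Pooled correlations with confidence intervals, like the 0.725 (0.649-0.808) reported here, are conventionally obtained by Fisher-z transforming the per-study correlations, inverse-variance weighting, and back-transforming. The sketch below uses made-up study correlations and fixed-effect weights; with heterogeneity as high as Q = 51, a random-effects model adding a between-study variance component would be the appropriate refinement.

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes (illustration only).
r = np.array([0.78, 0.64, 0.71, 0.80, 0.69])
n = np.array([45, 60, 38, 52, 70])

z = np.arctanh(r)                    # Fisher transform
w = n - 3                            # inverse variance: var(z) = 1/(n - 3)
z_pool = np.sum(w * z) / np.sum(w)   # fixed-effect pooled z
se = 1 / np.sqrt(np.sum(w))

pooled = np.tanh(z_pool)             # back-transform to r scale
lo, hi = np.tanh(z_pool - 1.96 * se), np.tanh(z_pool + 1.96 * se)
print(f"pooled r = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The back-transform keeps the interval inside (-1, 1) and asymmetric around the point estimate, as in the intervals quoted in the abstract.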

  17. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM

  18. Development of a System Analysis Toolkit for Sensitivity Analysis, Uncertainty Propagation, and Estimation of Parameter Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok; Kim, Kyung Doo [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    Statistical approaches to uncertainty quantification and sensitivity analysis are very important in estimating the safety margins for an engineering design application. This paper presents a system analysis and optimization toolkit developed by Korea Atomic Energy Research Institute (KAERI), which includes multiple packages of the sensitivity analysis and uncertainty quantification algorithms. In order to reduce the computing demand, multiple compute resources including multiprocessor computers and a network of workstations are simultaneously used. A Graphical User Interface (GUI) was also developed within the parallel computing framework for users to readily employ the toolkit for an engineering design and optimization problem. The goal of this work is to develop a GUI framework for engineering design and scientific analysis problems by implementing multiple packages of system analysis methods in the parallel computing toolkit. This was done by building an interface between an engineering simulation code and the system analysis software packages. The methods and strategies in the framework were designed to exploit parallel computing resources such as those found in a desktop multiprocessor workstation or a network of workstations. Available approaches in the framework include statistical and mathematical algorithms for use in science and engineering design problems. Currently the toolkit has 6 modules of the system analysis methodologies: deterministic and probabilistic approaches of data assimilation, uncertainty propagation, Chi-square linearity test, sensitivity analysis, and FFTBM.

  19. Developing standard transmission system for radiology reporting including key images

    International Nuclear Information System (INIS)

    Kim, Seon Chil

    2007-01-01

    The development of hospital information systems and Picture Archiving and Communication Systems (PACS) is not new in the medical field, and the development of the internet and information technology is likewise universal. In the course of such development, however, it is hard to share medical information without a refined standard format. Especially in the department of radiology, the role of PACS has become very important in exchanging information with other, disparate hospital information systems. A specific system needs to be developed in which radiological reports are archived efficiently into a database; this includes the sharing of medical images. A model is suggested in this study in which an internal system is developed where radiologists store necessary images and transmit them in the standard international clinical format, Clinical Document Architecture (CDA), and share the information with hospitals. A CDA document generator was built to generate the new file format and separate the existing storage system from the new system, ensuring access to the required data in XML documents. The model presented in this study adds a process whereby images crucial for reading are inserted into the CDA radiological report generator. Therefore, this study suggests a storage and transmission model for CDA documents that differs from the existing DICOM SR. Radiological reports could be better shared once the application function for inserting images and the analysis of standard clinical terms are completed

  20. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications including so-called the “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse are are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  1. Development of mathematical model for estimation of entrance surface dose in mammography

    International Nuclear Information System (INIS)

    Abdelgani, Yassir Mohammed Tahir

    2013-05-01

    Computer simulation is a convenient and frequently used tool in the study of x-ray mammography, for the design of novel detector systems, the evaluation of dose deposition, x-ray technique optimization, and other applications. An important component in the simulation process is the accurate computer generation of x-ray spectra. A computer model for the generation of x-ray spectra in the mammographic energy range from 18 keV to 40 keV has been developed by Boone et al. Due to the lack of QC and dose measurement tools, in addition to the unavailability of medical physicists, a mathematical tool was developed for estimating patient exposure and entrance dose. The proposed model requires no assumptions concerning the physics of x-ray production in an x-ray tube, but rather makes use of x-ray spectra measured experimentally by John M. Boone (Department of Radiology, University of California). Using experimental dose measurements for specific tube voltage and tube current, the generated x-ray spectra were calibrated; the spectrum calibration factors show a tube voltage dependency. From the calibrated x-ray spectrum, the exposure and entrance dose were estimated for different kVp and mA settings. Results show good agreement between the measured and estimated values for tube voltages between 18 and 45 kVp, with a correlation of nearly 1 and equal slope. The maximum difference between the measured and the simulated dose is approximately 0.07%. (Author)
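The calibration step can be sketched generically: compute an uncalibrated dose metric from a model spectrum, take its ratio to a measured dose at the same settings, and fit the resulting calibration factor against tube voltage. The spectrum model and all numbers below are made-up placeholders, not the Boone et al. model or the study's measurements.

```python
import numpy as np

# Placeholder for a generated spectrum's uncalibrated dose output at a
# given tube voltage (kVp) and tube current-time product (mAs).
def model_dose_uncal(kvp, mas):
    return mas * (kvp ** 2.5) * 1e-5   # hypothetical physics stand-in

# Measured entrance doses (mGy) at known settings (made-up numbers).
kvp = np.array([24.0, 26.0, 28.0, 30.0, 32.0])
mas = np.array([80.0, 70.0, 63.0, 56.0, 50.0])
meas = np.array([6.1, 6.4, 6.8, 7.1, 7.3])

# Per-setting calibration factor, modelled as linear in kVp.
factor = meas / model_dose_uncal(kvp, mas)
slope, intercept = np.polyfit(kvp, factor, 1)

def estimate_dose(kvp_new, mas_new):
    """Calibrated dose estimate at settings not directly measured."""
    return model_dose_uncal(kvp_new, mas_new) * (slope * kvp_new + intercept)

print(estimate_dose(29.0, 60.0))
```

Once the kVp-dependent calibration curve is fixed, entrance dose can be estimated for any technique setting without further measurements, which is the practical point of the tool described.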

  2. When is enough, enough? Identifying predictors of capacity estimates for onshore wind-power development in a region of the UK

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Christopher R., E-mail: c.r.jones@shef.ac.uk [Department of Psychology, University of Sheffield, Western Bank, Sheffield, S10 2TP (United Kingdom); Orr, Barry J.; Eiser, J. Richard [Department of Psychology, University of Sheffield, Western Bank, Sheffield, S10 2TP (United Kingdom)

    2011-08-15

    The level of 'wind-prospecting' presently occurring in the UK is increasing the likelihood that new wind-power developments will conflict with other existing and/or proposed schemes. This study reports multiple-regression analyses performed on survey data obtained in a region of the UK (i.e. Humberhead Levels, near Doncaster) simultaneously subject to nine wind-farm proposals (September 2008). The aim of the analysis was to identify which survey-items were predictors of respondents' estimates of the number of wind turbines they believed the region could reasonably support (i.e. capacity estimates). The results revealed that the majority of respondents would endorse some local development; however, there was substantial variability in the upper level that was considered acceptable. Prominent predictors included general attitude, perceived knowledge of wind power, community attachment, environmental values, visual attractiveness of wind turbines, and issues relating to perceived fairness and equity. The results have implications for Cumulative Effects Assessment (CEA) - and in particular the assessment of Cumulative Landscape and Visual Impacts (CLVI) - and support calls for greater community involvement in decisions regarding proposed schemes. - Highlights: > Research seeks to identify predictors of the scale of local wind development people will tolerate. > Research conducted in region of the UK subject to nine wind-farm applications (2008). > Predictors found to include issues of perceived fairness and equity. > Results hold implications for cumulative effects assessment and development practices.
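    The multiple-regression step can be sketched with ordinary least squares; the predictor names follow the abstract, but the data and coefficients here are entirely synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    # Synthetic standardized survey scores; names follow the abstract.
    attitude   = rng.standard_normal(n)   # general attitude to wind power
    knowledge  = rng.standard_normal(n)   # perceived knowledge of wind power
    attachment = rng.standard_normal(n)   # community attachment
    # Synthetic outcome: capacity estimate (turbines the region could support).
    capacity = 30 + 8*attitude + 3*knowledge - 4*attachment + rng.normal(0, 2, n)

    # OLS fit: intercept plus three predictors
    X = np.column_stack([np.ones(n), attitude, knowledge, attachment])
    beta, *_ = np.linalg.lstsq(X, capacity, rcond=None)
    ```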

  3. When is enough, enough? Identifying predictors of capacity estimates for onshore wind-power development in a region of the UK

    International Nuclear Information System (INIS)

    Jones, Christopher R.; Orr, Barry J.; Eiser, J. Richard

    2011-01-01

    The level of 'wind-prospecting' presently occurring in the UK is increasing the likelihood that new wind-power developments will conflict with other existing and/or proposed schemes. This study reports multiple-regression analyses performed on survey data obtained in a region of the UK (i.e. Humberhead Levels, near Doncaster) simultaneously subject to nine wind-farm proposals (September 2008). The aim of the analysis was to identify which survey-items were predictors of respondents' estimates of the number of wind turbines they believed the region could reasonably support (i.e. capacity estimates). The results revealed that the majority of respondents would endorse some local development; however, there was substantial variability in the upper level that was considered acceptable. Prominent predictors included general attitude, perceived knowledge of wind power, community attachment, environmental values, visual attractiveness of wind turbines, and issues relating to perceived fairness and equity. The results have implications for Cumulative Effects Assessment (CEA) - and in particular the assessment of Cumulative Landscape and Visual Impacts (CLVI) - and support calls for greater community involvement in decisions regarding proposed schemes. - Highlights: → Research seeks to identify predictors of the scale of local wind development people will tolerate. → Research conducted in region of the UK subject to nine wind-farm applications (2008). → Predictors found to include issues of perceived fairness and equity. → Results hold implications for cumulative effects assessment and development practices.

  4. An overview of J estimation schemes developed for the RSE-M code

    International Nuclear Information System (INIS)

    Delliou, Patrick Le; Sermage, Jean-Philippe; Barthelet, Bruno; Michel, Bruno; Gilles, Philippe

    2003-01-01

    The RSE-M Code provides rules and requirements for in-service inspection of French Pressurized Water Reactor power plant components. The RSE-M Code gives non-mandatory guidance for analytical evaluation of flaws. To calculate the stress intensity factors in pipes and shells containing semi-elliptical surface defects, influence coefficients are given for a wide range of geometrical parameters. To calculate the J integral for surface cracks in pipes and elbows, simplified methods have been developed for mechanical loads (in-plane bending and torsion moments, pressure) and thermal loads, as well as for the combination of these loads. This paper presents an overview of the J-estimation schemes presently available: a circumferential surface crack in a straight pipe (already included in the 2000 Addenda of the Code), a circumferential surface crack in a tapered transition, a longitudinal surface crack in a straight pipe, and a longitudinal surface crack in the mid-section of an elbow. (author)

  5. Contributions of national and global health estimates to monitoring health-related sustainable development goals.

    Science.gov (United States)

    Bundhamcharoen, Kanitta; Limwattananon, Supon; Kusreesakul, Khanitta; Tangcharoensathien, Viroj

    2016-01-01

    The millennium development goals triggered an increased demand for data on child and maternal mortalities for monitoring progress. With the advent of the sustainable development goals and growing evidence of an epidemiological transition toward non-communicable diseases, policymakers need data on mortality and disease trends and distribution to inform effective policies and support monitoring progress. Where there are limited capacities to produce national health estimates (NHEs), global health estimates (GHEs) can fill gaps for global monitoring and comparisons. This paper discusses lessons learned from Thailand's burden of disease (BOD) study on capacity development on NHEs and discusses the contributions and limitations of GHEs in informing policies at the country level. Through training and technical support by external partners, capacities are gradually strengthened and institutionalized to enable regular updates of BOD at national and subnational levels. Initially, the quality of cause-of-death reporting in death certificates was inadequate, especially for deaths occurring in the community. Verbal autopsies were conducted, using domestic resources, to determine probable causes of deaths occurring in the community. This method helped to improve the estimation of years of life lost. Since the achievement of universal health coverage in 2002, the quality of clinical data on morbidities has also considerably improved. There are significant discrepancies between the Global Burden of Disease 2010 study estimates for Thailand and the 1999 nationally generated BOD, especially for years of life lost due to HIV/AIDS, and the ranking of priority diseases. National ownership of NHEs and an effective interface between researchers and decision-makers contribute to enhanced country policy responses, whereas subnational data are intended to be used by various subnational partners. Although GHEs contribute to benchmarking country achievement compared with global health

  6. Estimation of physical development of young sportsmen from traditional and modern positions

    Directory of Open Access Journals (Sweden)

    Khor'yakov V.A.

    2012-12-01

    Full Text Available The problem of evaluating the anthropometric status of young sportsmen is examined using the method of indexes and modern concepts of human somatic health. Young boxers aged 10-11 (n=41), 12-13 (n=48) and 14-16 years (n=39) took part in the research. The contradictions and ambiguity of estimates of the physical development of children and teenagers are shown by means of the traditional Erisman, Quetelet and Pignet indexes, as well as indexes of sthenic build and thorax development. It is noted that estimating the physical development of children and teenagers using the standard deviation of a sample is not productive, because in most cases the distribution of the studied traits does not follow a normal law. It is recommended that the concept of «norm» be understood as an obligatory requirement of the state regarding the level of somatic health of children and teenagers in different regions of the country. It is noted that it is expedient to treat the physical development of individuals as a structural element of bodily condition, the major components of which are indicators of the power and capacity of the mechanisms of energy supply.
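    For reference, the traditional indexes named above have simple closed forms; the formulas below are the commonly cited versions, given here for illustration only, since the article may use regional variants:

    ```python
    def quetelet_index(weight_kg, height_m):
        """Quetelet II index (body mass index): weight / height^2, in kg/m^2."""
        return weight_kg / height_m**2

    def erisman_index(chest_cm, height_cm):
        """Erisman index: chest circumference minus half of standing height (cm);
        positive values indicate a well-developed thorax."""
        return chest_cm - height_cm / 2

    def pignet_index(height_cm, weight_kg, chest_cm):
        """Pignet index: height - (weight + chest circumference); lower is sturdier."""
        return height_cm - (weight_kg + chest_cm)

    bmi = quetelet_index(45.0, 1.50)   # an illustrative 10-11-year-old boxer
    ```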

  7. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    Science.gov (United States)

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
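    One of the standard linear approaches the abstract refers to is the Lassen (occupancy) plot: regressing the baseline-minus-post-drug binding difference on the baseline binding across regions yields the occupancy as the slope and the nondisplaceable distribution volume from the intercept. A synthetic sketch using ordinary least squares, i.e. without the covariance weighting the paper proposes:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_occ, V_ND = 0.60, 4.0
    V_base = np.array([25.0, 18.0, 12.0, 9.0, 6.5, 5.0])  # regional volumes of distribution
    # Occupancy model: V_post = V_ND + (1 - occ) * (V_base - V_ND), plus noise
    V_post = V_ND + (1 - true_occ) * (V_base - V_ND) + rng.normal(0, 0.1, V_base.size)

    # Lassen plot: regress (V_base - V_post) on V_base;
    # slope = occupancy, intercept = -occupancy * V_ND
    y = V_base - V_post
    X = np.column_stack([np.ones_like(V_base), V_base])
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    occupancy = slope
    V_ND_est = -intercept / slope
    ```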

  8. Estimation of environment-related properties of chemicals for design of sustainable processes: Development of group-contribution+ (GC+) models and uncertainty analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Kalakul, Sawitree; Sarup, Bent

    2012-01-01

    The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI)) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of the estimated...... property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality......, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox is used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22...
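    The parameter estimation and uncertainty analysis steps can be sketched for a generic linear group-contribution model, in which a property is a sum of group contributions and the covariance of the fitted contributions yields confidence intervals on predictions. The data and group set below are synthetic, not the Marrero-Gani groups:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_chem, n_groups = 60, 4
    N = rng.integers(0, 5, size=(n_chem, n_groups)).astype(float)  # group counts
    true_c = np.array([1.2, -0.4, 0.8, 2.0])                       # true contributions
    y = N @ true_c + rng.normal(0, 0.05, n_chem)                   # "measured" property

    # Parameter estimation: least-squares fit of the group contributions
    c_hat, *_ = np.linalg.lstsq(N, y, rcond=None)

    # Uncertainty analysis: parameter covariance -> ~95% prediction interval
    dof = n_chem - n_groups
    s2 = np.sum((y - N @ c_hat) ** 2) / dof        # residual variance
    cov_c = s2 * np.linalg.inv(N.T @ N)            # covariance of c_hat
    n_new = np.array([2.0, 1.0, 0.0, 3.0])         # group counts of a new molecule
    pred = n_new @ c_hat
    half_width = 1.96 * np.sqrt(n_new @ cov_c @ n_new + s2)
    ```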

  9. Calliphora vicina (Diptera: Calliphoridae) pupae: a timeline of external morphological development and a new age and PMI estimation tool.

    Science.gov (United States)

    Brown, Katherine; Thorne, Alan; Harvey, Michelle

    2015-07-01

    The minimum postmortem interval (PMI(min)) is commonly estimated using calliphorid larvae, for which there are established age estimation methods based on morphological and developmental data. Despite the increased duration and sedentary nature of the pupal stage of the blowfly, morphological age estimation methods are poorly documented and infrequently used for PMI determination. The aim of this study was to develop a timeline of metamorphosis, focusing on the development of external morphology (within the puparium), to provide a means of age and PMI estimation for Calliphora vicina (Robineau-Desvoidy) pupae. Under controlled conditions, 1,494 pupae were reared and sampled at regular time intervals. After puparium removal, observations of 23 external metamorphic characteristics were correlated with age in accumulated degree hours (ADH). Two age estimation methods were developed, based on (1) the combination of possible age ranges observed for each characteristic and (2) regression analyses generating age estimation equations that employ all 23 observed characteristics as well as a subset of the ten characteristics most significantly correlated with age. Blind sample analysis indicated that, using the combination of both methods, pupal age could be estimated to within ±500 ADH with 95% reliability.
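    Accumulated degree hours, the age scale used above, is simply the temperature excess above a developmental threshold summed over hourly records. A minimal sketch, where the 6 °C base temperature is a commonly used choice for C. vicina taken here purely as an illustrative assumption:

    ```python
    def accumulated_degree_hours(temps_c, base_c=6.0):
        """Sum the hourly temperature excess above the developmental threshold.
        temps_c: one reading per hour; hours at or below base contribute zero."""
        return sum(max(t - base_c, 0.0) for t in temps_c)

    # Two days at a constant 21 C: 48 h * (21 - 6) degrees = 720 ADH
    adh = accumulated_degree_hours([21.0] * 48)
    ```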

  10. SU-E-T-95: Delivery Time Estimator

    International Nuclear Information System (INIS)

    Kantor, M; Balter, P; Ohrt, J

    2014-01-01

    Purpose: The development and testing of a tool for the inclusion of delivery time as a parameter in plan optimization. Methods: We developed an algorithm that estimates the time required for the machine and personnel movements required to deliver a treatment plan on a linear accelerator. We included dose rate, leaf motion, collimator motion, gantry motion, and couch motions (including the time to enter the room to rotate the couch safely). Vault-specific parameters to account for the time to enter and perform couch angle adjustments were also included. This algorithm works for static, step-and-shoot IMRT, and VMAT photon beams and for fixed electron beams. This was implemented as a script in our treatment planning system. We validated the estimator against actual recorded delivery times from our record-and-verify (R&V) system as well as recorded times from our IMRT QA delivery. Results: Data were collected (Figure 1) for 12 treatment plans by examining the R&V beam start times, and by manually timing the QA treatment for a reference, although the QA measurements were only significant to the nearest minute. The average difference between the estimated and R&V times was 15%, and 11% when excluding the major outliers. Outliers arose due to respiratory aids and gating techniques which could not be accounted for in the estimator. Conclusion: Non-mechanical factors, such as the time a therapist needs to walk in and out of the room to adjust the couch, needed to be fine-tuned and cycled back into the algorithm to improve the estimate. The algorithm has been demonstrated to provide reasonable and useful estimates for delivery time. This estimate has provided a useful additional input for clinical decision-making when comparing several potential radiation treatment options.
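    The core of such an estimator is that between segments the machine axes move simultaneously, so the elapsed time is the maximum of the individual axis travel times plus the beam-on time; a couch rotation adds a fixed room-entry penalty. The axis speeds below are illustrative placeholders, not measured accelerator or vendor data:

    ```python
    # Illustrative axis speeds (placeholders, not vendor specifications)
    GANTRY_DEG_PER_S = 6.0       # gantry rotation speed
    COLL_DEG_PER_S = 15.0        # collimator rotation speed
    LEAF_CM_PER_S = 2.5          # MLC leaf speed
    COUCH_ENTRY_S = 45.0         # walk in, rotate couch, walk out

    def segment_time(mu, dose_rate_mu_per_min, gantry_deg, coll_deg, max_leaf_cm,
                     couch_rotated=False):
        """Time for one beam/segment: axes move in parallel, then beam-on."""
        travel = max(gantry_deg / GANTRY_DEG_PER_S,
                     coll_deg / COLL_DEG_PER_S,
                     max_leaf_cm / LEAF_CM_PER_S)
        beam_on = 60.0 * mu / dose_rate_mu_per_min
        return travel + beam_on + (COUCH_ENTRY_S if couch_rotated else 0.0)

    def plan_time(segments):
        """Total estimated delivery time for a list of segment descriptions."""
        return sum(segment_time(**s) for s in segments)

    t = plan_time([
        dict(mu=120, dose_rate_mu_per_min=600, gantry_deg=180, coll_deg=0,
             max_leaf_cm=5),
        dict(mu=100, dose_rate_mu_per_min=600, gantry_deg=60, coll_deg=45,
             max_leaf_cm=8, couch_rotated=True),
    ])
    ```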

  11. Cost-estimate guidelines for advanced nuclear power technologies

    International Nuclear Information System (INIS)

    Delene, J.G.; Hudson, C.R.

    1993-01-01

    Various advanced power plant concepts are currently under development. These include several advanced light water reactors as well as the modular high-temperature gas-cooled reactor and the advanced liquid-metal reactor. One measure of the attractiveness of a new concept is cost. Invariably, the cost of a new type of power plant will be compared with other alternative forms of electric generation. In order to make reasonable comparative assessments of competing technologies, consistent ground rules and assumptions must be applied when developing cost estimates. This paper describes the cost-estimate guidelines developed by Oak Ridge National Laboratory for the U.S. Department of Energy (DOE) to be used in developing cost estimates for the advanced nuclear reactors, and how these guidelines relate to the DOE cost verification process

  12. Development of transmission dose estimation algorithm for in vivo dosimetry in high energy radiation treatment

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2004-01-01

    In vivo dosimetry is very important for quality assurance purposes in high energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy to perform daily. This study develops a tumor dose estimation algorithm using measured transmission dose for open radiation fields. For basic beam data, transmission dose was measured with various field sizes (FS) of square radiation fields, phantom thicknesses (Tp), and phantom chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. Source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. By regression analysis of the measured basic beam data, a transmission dose estimation algorithm was developed. Accuracy of the algorithm was tested with a flat solid phantom of various thicknesses in various settings of rectangular fields and various PCDs. In the developed algorithm, transmission dose is expressed as a quadratic function of log(A/P) (where A/P is the area-perimeter ratio), and the coefficients of the quadratic function are in turn expressed as third-order (cubic) functions of PCD. The developed algorithm could estimate the radiation dose with errors within ±0.5% for open square fields, and within ±1.0% for open elongated radiation fields. The developed algorithm could thus accurately estimate the transmission dose in open radiation fields for various treatment settings of high energy radiation treatment. (author)
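    The algorithm's functional form is stated explicitly, so it can be sketched directly: a quadratic in x = log(A/P) whose coefficients are cubic polynomials in PCD. The polynomial coefficients below are synthetic placeholders standing in for the published regression fit:

    ```python
    import numpy as np

    # Placeholder coefficients: each quadratic coefficient a_i(PCD) is itself
    # a cubic polynomial in PCD (highest power first, as np.polyval expects).
    A0 = [0.0, 1e-4, -2e-3, 0.95]   # a0(PCD)
    A1 = [0.0, -5e-5, 1e-3, 0.02]   # a1(PCD)
    A2 = [0.0, 1e-5, -5e-4, 0.01]   # a2(PCD)

    def transmission_dose_factor(area_cm2, perimeter_cm, pcd_cm):
        """Quadratic in x = log(A/P), with PCD-dependent coefficients."""
        x = np.log(area_cm2 / perimeter_cm)
        a0 = np.polyval(A0, pcd_cm)
        a1 = np.polyval(A1, pcd_cm)
        a2 = np.polyval(A2, pcd_cm)
        return a0 + a1 * x + a2 * x**2

    f = transmission_dose_factor(100.0, 40.0, pcd_cm=20.0)  # 10x10 field, PCD 20 cm
    ```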

  13. Estimation and Control in Agile Methods for Software Development: a Case Study

    Directory of Open Access Journals (Sweden)

    Mitre-Hernández Hugo A.

    2014-07-01

    Full Text Available The development of software (SW) using agile methods is growing due to the productivity associated with these methodologies and the flexibility they show in small teams. However, these methods have clear weaknesses in cost estimation and management: project managers do not have enough evidence to verify budget spending on a project, due to the poor documentation generated and the lack of monitoring of resource spending. A proposal for estimation and cost control in agile methods is presented to address these shortcomings. To this end, a case study was conducted in an agile software development company, applying the proposal to Software as a Service (SaaS) and Web application projects. The results show that the proposal generates a high degree of evidence for project managers, but has shortcomings in the administration of that evidence for control and decision making; this led to the definition of a decision-making process to be coupled with the measurement proposal.

  14. Development of Methodologies for the Estimation of Thermal Properties Associated with Aerospace Vehicles

    Science.gov (United States)

    Scott, Elaine P.

    1996-01-01

    A thermal stress analysis is an important aspect in the design of aerospace structures and vehicles such as the High Speed Civil Transport (HSCT) at the National Aeronautics and Space Administration Langley Research Center (NASA-LaRC). These structures are complex and are often composed of numerous components fabricated from a variety of different materials. The thermal loads on these structures induce temperature variations within the structure, which in turn result in the development of thermal stresses. Therefore, a thermal stress analysis requires knowledge of the temperature distributions within the structures which consequently necessitates the need for accurate knowledge of the thermal properties, boundary conditions and thermal interface conditions associated with the structural materials. The goal of this proposed multi-year research effort was to develop estimation methodologies for the determination of the thermal properties and interface conditions associated with aerospace vehicles. Specific objectives focused on the development and implementation of optimal experimental design strategies and methodologies for the estimation of thermal properties associated with simple composite and honeycomb structures. The strategy used in this multi-year research effort was to first develop methodologies for relatively simple systems and then systematically modify these methodologies to analyze complex structures. This can be thought of as a building block approach. This strategy was intended to promote maximum usability of the resulting estimation procedure by NASA-LARC researchers through the design of in-house experimentation procedures and through the use of an existing general purpose finite element software.

  15. Development of an Advanced Grid-Connected PV-ECS System Considering Solar Energy Estimation

    Science.gov (United States)

    Rahman, Md. Habibur; Yamashiro, Susumu; Nakamura, Koichi

    In this paper, the development and the performance of a viable distributed grid-connected power generation system of Photovoltaic-Energy Capacitor System (PV-ECS) considering solar energy estimation have been described. Instead of a conventional battery, Electric Double Layer Capacitors (EDLCs) are used as the storage device, with a photovoltaic (PV) panel generating power from solar energy. The system can generate power by PV, store energy when the load demand is low, and finally supply the stored energy to the load during the period of peak demand. To realize the load leveling function properly, the system will also buy power from the grid line when load demand is high. Since the power taken from the grid line depends on the PV output power, a procedure has been suggested to estimate the PV output power by calculating solar radiation. In order to set the optimum value of the bought power, a simulation program has also been developed. Performance of the system has been studied for different load patterns in different weather conditions by using the estimated PV output power with the help of the simulation program.

  16. Schistosomiasis and water resources development: systematic review, meta-analysis, and estimates of people at risk.

    Science.gov (United States)

    Steinmann, Peter; Keiser, Jennifer; Bos, Robert; Tanner, Marcel; Utzinger, Jürg

    2006-07-01

    An estimated 779 million people are at risk of schistosomiasis, of whom 106 million (13.6%) live in irrigation schemes or in close proximity to large dam reservoirs. We identified 58 studies that examined the relation between water resources development projects and schistosomiasis, primarily in African settings. We present a systematic literature review and meta-analysis with the following objectives: (1) to update the at-risk populations of schistosomiasis and the number of people infected in endemic countries, and (2) to quantify the effect of water resources development and management on schistosomiasis risk. Using 35 datasets from 24 African studies, our meta-analysis showed pooled random risk ratios of 2.4 and 2.6 for urinary and intestinal schistosomiasis, respectively, among people living adjacent to dam reservoirs. The risk ratio estimate for studies evaluating the effect of irrigation on urinary schistosomiasis was in the range 0.02-7.3 (summary estimate 1.1) and that on intestinal schistosomiasis in the range 0.49-23.0 (summary estimate 4.7). Geographic stratification showed important spatial differences, idiosyncratic to the type of water resources development. We conclude that the development and management of water resources is an important risk factor for schistosomiasis, and hence strategies to mitigate negative effects should become integral parts of the planning, implementation, and operation of future water projects.
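    Pooled random-effects risk ratios of the kind reported above are typically computed with the DerSimonian-Laird estimator; an outline with synthetic study data (not the 35 datasets analyzed in the paper):

    ```python
    import math

    # Synthetic (log risk ratio, within-study variance) pairs for k studies
    studies = [(math.log(2.1), 0.04), (math.log(3.0), 0.09),
               (math.log(1.8), 0.02), (math.log(2.9), 0.06)]

    def dersimonian_laird(studies):
        """Random-effects pooled estimate: fixed-effect Q statistic first,
        then between-study variance tau^2, then inverse-variance weights."""
        y = [yi for yi, _ in studies]
        w = [1.0 / vi for _, vi in studies]
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
        k = len(studies)
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (Q - (k - 1)) / c)
        w_re = [1.0 / (vi + tau2) for _, vi in studies]
        pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        return math.exp(pooled)  # back-transform to a risk ratio

    rr = dersimonian_laird(studies)
    ```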

  17. Retrofitting the Low Impact Development Practices into Developed Urban areas Including Barriers and Potential Solution

    Science.gov (United States)

    Shafique, Muhammad; Kim, Reeho

    2017-06-01

    Low impact development (LID)/green infrastructure (GI) practices have been identified as sustainable practices for managing stormwater in urban areas. Due to increasing population, most cities are developing further, converting natural areas into impervious surfaces (roads, buildings, etc.). Moreover, urbanization and climate change are causing many water-related problems and making our cities unsafe and insecure. Under these circumstances, there is a need to introduce new stormwater management practices into developed cities to reduce the adverse impacts of urbanization. For this purpose, retrofitting low impact development practices demands more attention as a way to reduce these water-related problems and make our cities sustainable. In developed areas, little space is available for retrofitting LID practices for stormwater management; therefore, the selection of an appropriate place to retrofit LID practices needs particular care. This paper describes successfully applied retrofitted LID practices around the globe. It also covers the process of applying retrofitted LID practices at suitable places in suitable combinations. Optimal places for retrofitting different LID practices are also identified. The paper also highlights the barriers to, and potential solutions for, retrofitting LID practices in urban areas.

  18. Retrofitting the Low Impact Development Practices into Developed Urban areas Including Barriers and Potential Solution

    Directory of Open Access Journals (Sweden)

    Shafique Muhammad

    2017-06-01

    Full Text Available Low impact development (LID)/green infrastructure (GI) practices have been identified as sustainable practices for managing stormwater in urban areas. Due to increasing population, most cities are developing further, converting natural areas into impervious surfaces (roads, buildings, etc.). Moreover, urbanization and climate change are causing many water-related problems and making our cities unsafe and insecure. Under these circumstances, there is a need to introduce new stormwater management practices into developed cities to reduce the adverse impacts of urbanization. For this purpose, retrofitting low impact development practices demands more attention as a way to reduce these water-related problems and make our cities sustainable. In developed areas, little space is available for retrofitting LID practices for stormwater management; therefore, the selection of an appropriate place to retrofit LID practices needs particular care. This paper describes successfully applied retrofitted LID practices around the globe. It also covers the process of applying retrofitted LID practices at suitable places in suitable combinations. Optimal places for retrofitting different LID practices are also identified. The paper also highlights the barriers to, and potential solutions for, retrofitting LID practices in urban areas.

  19. A Posteriori Error Estimates Including Algebraic Error and Stopping Criteria for Iterative Solvers

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Strakoš, Zdeněk; Vohralík, M.

    2010-01-01

    Roč. 32, č. 3 (2010), s. 1567-1590 ISSN 1064-8275 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GP201/09/P464 Institutional research plan: CEZ:AV0Z10300504 Keywords : second-order elliptic partial differential equation * finite volume method * a posteriori error estimates * iterative methods for linear algebraic systems * conjugate gradient method * stopping criteria Subject RIV: BA - General Mathematics Impact factor: 3.016, year: 2010

  20. Developing a module for estimating climate warming effects on hydropower pricing in California

    International Nuclear Information System (INIS)

    Guégan, Marion; Uvo, Cintia B.; Madani, Kaveh

    2012-01-01

    Climate warming is expected to alter hydropower generation in California through affecting the annual stream-flow regimes and reducing snowpack. On the other hand, increased temperatures are expected to increase hydropower demand for cooling in warm periods while decreasing demand for heating in winter, subsequently altering the annual hydropower pricing patterns. The resulting variations in hydropower supply and pricing regimes necessitate changes in reservoir operations to minimize the revenue losses from climate warming. Previous studies in California have only explored the effects of hydrological changes on hydropower generation and revenues. This study builds a long-term hydropower pricing estimation tool, based on artificial neural network (ANN), to develop pricing scenarios under different climate warming scenarios. Results suggest higher average hydropower prices under climate warming scenarios than under historical climate. The developed tool is integrated with California's Energy-Based Hydropower Optimization Model (EBHOM) to facilitate simultaneous consideration of climate warming on hydropower supply, demand and pricing. EBHOM estimates an additional 5% drop in annual revenues under a dry warming scenario when climate change impacts on pricing are considered, with respect to when such effects are ignored, underlining the importance of considering changes in hydropower demand and pricing in future studies and policy making. - Highlights: ► Addressing the major gap in previous climate change and hydropower studies in California. ► Developing an ANN-based long-term hydropower price estimation tool. ► Estimating climate change effects on hydropower demand and pricing in California. ► Investigating the sensitivity of hydropower operations to future price changes. ► Underlining the importance of consideration of climate change impacts on electricity pricing.

  1. Development and evaluation of neural network models to estimate daily solar radiation at Córdoba, Argentina

    International Nuclear Information System (INIS)

    Bocco, M.

    2006-01-01

    The objective of this work was to develop neural network models of the backpropagation type to estimate solar radiation based on extraterrestrial radiation data, daily temperature range, precipitation, cloudiness and relative sunshine duration. Data from Córdoba, Argentina, were used for development and validation. The behaviour of, and agreement between, observed values and estimates obtained by neural networks for different combinations of input were assessed. These estimations showed root mean square errors between 3.15 and 3.88 MJ m⁻² d⁻¹; the latter corresponds to the model that calculates radiation using only precipitation and daily temperature range. In all models, results show good adjustment to seasonal solar radiation. These results allow inferring the adequate performance and pertinence of this methodology to estimate complex phenomena, such as solar radiation [pt
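    A backpropagation network of the kind described is small enough to sketch from scratch; here a one-hidden-layer regressor is trained by plain gradient descent on synthetic inputs standing in for the abstract's predictors (extraterrestrial radiation, temperature range, sunshine fraction):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(300, 3))   # stand-ins for Ra, dT, sunshine fraction
    y = (0.2 + 0.5 * X[:, 0] * X[:, 2] + 0.1 * X[:, 1]).reshape(-1, 1)  # synthetic target

    # One hidden layer, tanh activation, linear output
    W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.1

    def forward(X):
        H = np.tanh(X @ W1 + b1)
        return H, H @ W2 + b2

    _, pred0 = forward(X)
    mse0 = float(np.mean((pred0 - y) ** 2))   # loss before training
    for _ in range(3000):
        H, pred = forward(X)
        err = pred - y                         # gradient of 0.5*MSE w.r.t. pred
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)       # backpropagate through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

    _, pred = forward(X)
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    ```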

  2. Prenatal parental separation and body weight, including development of overweight and obesity later in childhood.

    Directory of Open Access Journals (Sweden)

    Lena Hohwü

    Full Text Available Early parental separation may be a stress factor causing a long-term alteration in hypothalamic-pituitary-adrenal axis activity, possibly affecting the susceptibility to develop overweight and obesity in offspring. We aimed to examine the body mass index (BMI) and the risk of overweight and obesity in children whose parents lived separately before the child was born. A follow-up study was conducted using data from the Aarhus Birth Cohort in Denmark and included 2876 children with measurements of height and weight at 9-11 years of age, and self-reported information on parental cohabitation status at child birth and at 9-11 years of age. Quantile regression was used to estimate the difference in median BMI between children whose parents lived separately (n = 124) or together (n = 2752) before the birth. We used multiple logistic regression to calculate odds ratios (OR) for overweight and obesity, adjusted for gender, parity, breastfeeding status, and maternal pre-pregnancy BMI, weight gain during pregnancy, age and educational level at child birth, with and without the possible intermediate factors birth weight and maternal smoking during pregnancy. Due to a limited number of obese children, the OR for obesity was adjusted for the a priori confounder maternal pre-pregnancy BMI only. The difference in median BMI was 0.54 kg/m² (95% confidence interval (CI): 0.10; 0.98) between children whose parents lived separately before birth and children whose parents lived together. The risk of overweight and obesity was statistically significantly increased in children whose parents lived separately before the birth of the child: OR 2.29 (95% CI: 1.18; 4.45) and OR 2.81 (95% CI: 1.05; 7.51), respectively. Additional adjustment for the possible intermediate factors did not substantially change the estimates. Parental separation before child birth was associated with higher BMI and increased risk of overweight and obesity in 9-11-year-old children; this may suggest a fetal
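Odds ratios of this form can be reproduced (in form only; the cell counts below are illustrative, not the cohort's actual cells) with the standard Wald interval on the log odds ratio:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = outcome yes/no in exposed, c/d = outcome yes/no in unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts (assumed, not the study's actual cell counts):
# 20 of 124 separated-parent children overweight vs 180 of 2752.
or_, lo, hi = odds_ratio_ci(20, 104, 180, 2572)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}; {hi:.2f})")
```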

  3. Development of radioactive 166Ho-coated balloon and its dose estimation

    International Nuclear Information System (INIS)

    Park, K. B.; Kim, K. H.; Hong, Y. D.; Park, E. W.

    2000-01-01

    The use of a balloon with a radioisotope is a promising method to prevent restenosis after transluminal coronary angioplasty or stent implantation. In this study, we developed a new radioactive coated balloon, prepared by coating the surface of an existing balloon with 166Ho, which emits high-energy beta particles, instead of filling the balloon with a beta-emitting source, for the purpose of delivering sufficient radiation to the vessel wall. To assess the safety of the 166Ho-coated balloon, a leaching test and a radiation resistance test of the balloon were performed. The absorbed dose distributions around the 166Ho-coated balloon were estimated by Monte Carlo simulation, and the initial activities for an optimal therapeutic regimen were determined on the basis of these results

  4. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

    Science.gov (United States)

    Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

    2018-03-01

    A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion coefficient (D), the partition coefficient (K_p,f) and the convective mass transfer coefficient (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were estimated simultaneously, providing in-depth insight into the physics of the migration process. The first step finds the combination of D, K_p,f and h that minimizes the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation is performed using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap estimation, were also performed to acquire better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues in a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
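The two-step idea (a coarse scan for starting values, then local refinement) can be sketched on a simplified first-order release surrogate. This is not the paper's Crank-series model in D, K_p,f and h; the model, data and grids below are invented, and the true parameters are deliberately placed on the coarse grid so the run is deterministic.

```python
import math

# Simplified surrogate for a migration curve: m(t) = M_inf * (1 - exp(-k t)).
def model(t, M_inf, k):
    return M_inf * (1.0 - math.exp(-k * t))

# Synthetic noise-free "migration" data (assumed values, illustration only).
times = [0, 1, 2, 4, 8, 16, 32]
data = [model(t, 5.0, 0.2) for t in times]

def sse(M_inf, k):
    return sum((model(t, M_inf, k) - y) ** 2 for t, y in zip(times, data))

# Step 1: coarse grid scan to locate good starting values.
best = min(((sse(M, k), M, k)
            for M in [3.0, 5.0, 7.0]
            for k in [0.1, 0.2, 0.4]), key=lambda r: r[0])

# Step 2: refine with a finer local grid around the step-1 optimum.
_, M0, k0 = best
best = min(((sse(M, k), M, k)
            for M in [M0 * f for f in (0.8, 0.9, 1.0, 1.1, 1.2)]
            for k in [k0 * f for f in (0.8, 0.9, 1.0, 1.1, 1.2)]),
           key=lambda r: r[0])
print(f"SSE={best[0]:.3g}, M_inf={best[1]:.2f}, k={best[2]:.2f}")
```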

  5. AX Tank Farm waste retrieval alternatives cost estimates

    International Nuclear Information System (INIS)

    Krieg, S.A.

    1998-01-01

    This report presents the estimated costs associated with retrieval of the wastes from the four tanks in AX Tank Farm. The engineering cost estimates developed for this report are based on previous cost data prepared for Project W-320 and the HTI 241-C-106 Heel Retrieval System. The costs presented in this report address only the retrieval of the wastes from the four AX Farm tanks. This includes costs for equipment procurement, fabrication, installation, and operation to retrieve the wastes. The costs to modify the existing plant equipment and systems to support the retrieval equipment are also included. The estimates do not include operational costs associated with pumping the waste out of the waste receiver tank (241-AY-102) between AX Farm retrieval campaigns or transportation, processing, and disposal of the retrieved waste

  6. CHANNEL ESTIMATION TECHNIQUE

    DEFF Research Database (Denmark)

    2015-01-01

    A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
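One common way to recover the sparse second-stage coefficients from such a dictionary decomposition is orthogonal matching pursuit. The sketch below uses a random Gaussian dictionary as a stand-in for one built from transceiver filter characteristics; sizes and tap values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dictionary matrix: random columns standing in for shifted filter
# responses; columns normalised to unit norm.
n, m, sparsity = 64, 128, 3
D = rng.normal(size=(n, m))
D /= np.linalg.norm(D, axis=0)

# True sparse vector: a few dominant channel taps (made-up values).
x_true = np.zeros(m)
support = rng.choice(m, sparsity, replace=False)
x_true[support] = np.array([2.0, -1.5, 3.0])

y = D @ x_true                      # plays the role of the first estimates

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k dictionary atoms,
    re-solving a least-squares fit on the selected atoms each step."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

x_hat = omp(D, y, sparsity)
err = np.linalg.norm(x_hat - x_true)
print(f"recovered support {sorted(np.flatnonzero(x_hat))}, error {err:.2e}")
```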

  7. Estimation of Global 1km-grid Terrestrial Carbon Exchange Part I: Developing Inputs and Modelling

    Science.gov (United States)

    Sasai, T.; Murakami, K.; Kato, S.; Matsunaga, T.; Saigusa, N.; Hiraki, K.

    2015-12-01

    The global terrestrial carbon cycle depends strongly on the spatial pattern of land cover, which is heterogeneously distributed at regional and global scales. However, most studies estimating carbon exchanges between the ecosystem and the atmosphere have been limited to grid resolutions of several tens of kilometers, which is insufficient to resolve the detailed pattern of carbon exchanges at the level of ecological communities. Finer spatial resolution is clearly necessary to improve the accuracy of carbon exchange estimates, and may also contribute to global warming awareness, policy making and other social activities. In this study, we present global terrestrial carbon exchanges (net ecosystem production, net primary production, and gross primary production) at 1-km grid resolution. To compute the exchanges, we 1) developed a global 1-km grid climate and satellite dataset based on the approach of Setoyama and Sasai (2013); 2) used the satellite-driven biosphere model BEAMS (Biosphere model integrating Eco-physiological And Mechanistic approaches using Satellite data) (Sasai et al., 2005, 2007, 2011); and 3) simulated the carbon exchanges with the new dataset and BEAMS on a supercomputer comprising 1280 CPU and 320 GPGPU cores (the GOSAT RCF of NIES). As a result, we developed a globally uniform system for realistically estimating terrestrial carbon exchange and evaluated net ecosystem production at the community level, leading to a highly detailed understanding of terrestrial carbon exchanges.

  8. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

    Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.
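A minimal example of one classical approach in this literature, entry-wise soft-thresholding of the sample covariance, on a synthetic banded truth (dimensions, band strength and threshold are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# True sparse covariance: banded, most off-diagonals exactly zero.
p, n = 30, 200
Sigma = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
L = np.linalg.cholesky(Sigma)
X = rng.normal(size=(n, p)) @ L.T    # n samples from N(0, Sigma)

S = np.cov(X, rowvar=False)          # sample covariance

def soft_threshold(S, lam):
    """Entry-wise soft-thresholding; the diagonal is left untouched."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

S_hat = soft_threshold(S, lam=0.15)
err_raw = np.linalg.norm(S - Sigma)       # Frobenius errors
err_thr = np.linalg.norm(S_hat - Sigma)
print(f"Frobenius error: sample {err_raw:.3f} vs thresholded {err_thr:.3f}")
```

With many truly-zero entries, zeroing the small noisy off-diagonals typically reduces the overall estimation error despite the shrinkage bias on the true nonzeros.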

  9. Estimation of Genetic Variance Components Including Mutation and Epistasis using Bayesian Approach in a Selection Experiment on Body Weight in Mice

    DEFF Research Database (Denmark)

    Widyas, Nuzul; Jensen, Just; Nielsen, Vivi Hunnicke

    Selection experiment was performed for weight gain in 13 generations of outbred mice. A total of 18 lines were included in the experiment. Nine lines were allotted to each of the two treatment diets (19.3 and 5.1% protein). Within each diet, three lines were selected upwards, three lines were selected downwards and three lines were kept as controls. Bayesian statistical methods were used to estimate the genetic variance components. The mixed model analysis was modified to include a mutation effect following the method of Wray (1990). DIC was used to compare the models. Models including a mutation effect had a better fit than the model with only an additive effect. Mutation as a direct effect contributes 3.18% of the total phenotypic variance, while in the model with interactions between additive and mutation effects, it contributes 1.43% as a direct effect and 1.36% as an interaction effect of the total variance...

  10. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    Science.gov (United States)

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
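The "hundreds of teachers" claim can be checked with a standard normal-approximation power calculation for a two-group comparison at the reported effect sizes (0.16 and 0.26 SD); the formula below is the textbook approximation, not the authors' exact procedure.

```python
import math

def n_per_group(delta, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison of means with effect
    size delta in SD units, via n = 2 (z_{1-a/2} + z_{power})^2 / delta^2."""
    z_a = 1.959964    # standard normal quantile for two-sided alpha = 0.05
    z_b = 0.841621    # standard normal quantile for power = 0.80
    return math.ceil(2 * (z_a + z_b) ** 2 / delta ** 2)

for d in (0.16, 0.26):
    print(f"effect {d:.2f} SD -> {n_per_group(d)} teachers per group")
```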

  11. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

    The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DoD and NASA, especially as software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown in terms of software size. Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model's performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
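Analogy-based estimation in its simplest form is k-nearest-neighbor prediction over past projects. The sketch below uses invented feature vectors and efforts (not NASA data) and a plain Euclidean distance:

```python
import math

# Historical projects (assumed values): (KLOC, team size) -> effort
# in person-months.
projects = [
    ((10, 4), 24), ((50, 12), 160), ((20, 6), 55),
    ((80, 20), 300), ((15, 5), 40), ((60, 15), 200),
]

def estimate(features, k=2):
    """Estimate effort as the mean effort of the k nearest analogues."""
    dists = sorted(
        (math.dist(features, f), effort) for f, effort in projects
    )
    return sum(e for _, e in dists[:k]) / k

print(f"estimated effort: {estimate((25, 7)):.1f} person-months")
```

In practice features would be normalised before computing distances, so that large-range attributes such as KLOC do not dominate.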

  12. Brand market positions estimation and defining the strategic targets of its development

    OpenAIRE

    S.M. Makhnusha

    2010-01-01

    In this article the author generalizes the concept of brand characteristics which influence its profitability and market positions. An approach to brand market positions estimation and defining the strategic targets of its development is proposed. Keywords: brand, brand expansion, brand extension, brand value, brand power, brand relevance, brand awareness.

  13. Unsaturated Seepage Analysis of Cracked Soil including Development Process of Cracks

    Directory of Open Access Journals (Sweden)

    Ling Cao

    2016-01-01

    Full Text Available Cracks in soil provide preferential pathways for water flow, and their morphological parameters significantly affect the hydraulic conductivity of the soil. To study the hydraulic properties of cracks, the dynamic development of cracks in expansive soil during drying and wetting was measured in the laboratory. The test results enable the development of relationships between the crack morphological parameters and the water content. In this study, a fractal model has been used to predict the soil-water characteristic curve (SWCC) of the cracked soil, including the developmental process of the cracks. The cracked expansive soil has been treated as a crack-pore medium, and a dual-media flow model has been developed to simulate its seepage characteristics. The variations in pore water pressure at different parts of the model differ considerably due to the impact of the cracks. This study shows that seepage characteristics can be better predicted if the impact of cracks is taken into account.

  14. Nuclear data evaluation methodology including estimates of covariances

    Directory of Open Access Journals (Sweden)

    Smith D.L.

    2010-10-01

    Full Text Available Evaluated nuclear data rather than raw experimental and theoretical information are employed in nuclear applications such as the design of nuclear energy systems. Therefore, the process by which such information is produced and ultimately used is of critical interest to the nuclear science community. This paper provides an overview of various contemporary methods employed to generate evaluated cross sections and related physical quantities such as particle emission angular distributions and energy spectra. The emphasis here is on data associated with neutron induced reaction processes, with consideration of the uncertainties in these data, and on the more recent evaluation methods, e.g., those that are based on stochastic (Monte Carlo) techniques. There is no unique way to perform such evaluations, nor are nuclear data evaluators united in their opinions as to which methods are superior to the others in various circumstances. In some cases it is not critical which approaches are used as long as there is consistency and proper use is made of the available physical information. However, in other instances there are definite advantages to using particular methods as opposed to other options. Some of these distinctions are discussed in this paper and suggestions are offered regarding fruitful areas for future research in the development of evaluation methodology.

  15. Development process of muzzle flows including a gun-launched missile

    Directory of Open Access Journals (Sweden)

    Zhuo Changfei

    2015-04-01

    Full Text Available Numerical investigations on the launch process of a gun-launched missile, from the muzzle of a cannon to the free-flight stage, have been performed in this paper. The dynamic overlapped grids approach is applied to deal with the problem of a moving gun-launched missile. The high-resolution upwind scheme (AUSMPW+) and a detailed reaction kinetics model are adopted to solve the chemical non-equilibrium Euler equations on dynamic grids. The development process and flow field structure of muzzle flows including a gun-launched missile are discussed in detail. The present numerical study confirms that complicated transient phenomena exist in the brief launching stage as the gun-launched missile moves from the muzzle of the cannon to free flight. The propellant gas flow, the initial ambient air flow and the moving missile mutually couple and interact. A complete flow field structure forms during the launching stage, including the blast wave, base shock, reflected shock, incident shock, shear layer, primary vortex ring and triple point.

  16. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    Science.gov (United States)

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  17. Accuracy of an equation for estimating age from mandibular third molar development in a Thai population

    Energy Technology Data Exchange (ETDEWEB)

    Verochana, Karune; Prapayasatok, Sangsom; Janhom, Apirum; Mahasantipiya, Phattaranant May; Korwanich, Narumanas [Faculty of Dentistry, Chiang Mai University, Chiang Mai (Thailand)

    2016-03-15

    This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age.
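The published equation can be applied directly. Note that the coding of S is assumed here (1 = male, 0 = female), since the abstract does not state it:

```python
def estimate_age(mG, sex):
    """Age estimate from the study's regression equation
    Y = 9.309 + 1.673*mG + 0.303*S, with mG the modified Gat score
    and S the sex code (assumed: 1 = male, 0 = female)."""
    return 9.309 + 1.673 * mG + 0.303 * sex

# Example: modified Gat score 5 for a male subject.
print(f"estimated age: {estimate_age(5, 1):.2f} years")
```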

  18. Accuracy of an equation for estimating age from mandibular third molar development in a Thai population

    International Nuclear Information System (INIS)

    Verochana, Karune; Prapayasatok, Sangsom; Janhom, Apirum; Mahasantipiya, Phattaranant May; Korwanich, Narumanas

    2016-01-01

    This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age

  19. Accuracy of an equation for estimating age from mandibular third molar development in a Thai population.

    Science.gov (United States)

    Verochana, Karune; Prapayasatok, Sangsom; Janhom, Apirum; Mahasantipiya, Phattaranant May; Korwanich, Narumanas

    2016-03-01

    This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age.

  20. Cost Estimating in DoD: Current Status, Trends, and What the Future Holds

    National Research Council Canada - National Science Library

    Nussbaum, Daniel A

    2005-01-01

    (1) Current Status: Baseline analysis of current cost estimating community, including which organizations are responsible for developing and reviewing cost estimates, how many personnel there are, what...

  1. Estimating national water use associated with unconventional oil and gas development

    Science.gov (United States)

    Carter, Janet M.; Macek-Rowland, Kathleen M.; Thamke, Joanna N.; Delzer, Gregory C.

    2016-05-18

    The U.S. Geological Survey’s (USGS) Water Availability and Use Science Program (WAUSP) goals are to provide a more accurate assessment of the status of the water resources of the United States and assist in the determination of the quantity and quality of water that is available for beneficial uses. These assessments would identify long-term trends or changes in water availability since the 1950s in the United States and help to develop the basis for an improved ability to forecast water availability for future economic, energy-production, and environmental uses. The National Water Census (http://water.usgs.gov/watercensus/), a research program of the WAUSP, supports studies to develop new water accounting tools and assess water availability at the regional and national scales. Studies supported by this program target focus areas with identified water availability concerns and topical science themes related to the use of water within a specific type of environmental setting. The topical study described in this fact sheet will focus on understanding the relation between production of unconventional oil and gas (UOG) for energy and the water needed to produce and sustain this type of energy development. This relation applies to the life-cycle of renewable and nonrenewable forms of UOG energy and includes extraction, production, refinement, delivery, and disposal of waste byproducts. Water-use data and models derived from this topical study will be applied to other similar oil and gas plays within the United States to help resource managers assess and account for water used or needed in these areas. Additionally, the results from this topical study will be used to further refine the methods used in compiling water-use data for selected categories (for example, mining, domestic self-supplied, public supply, and wastewater) in the USGS’s 5-year national water-use estimates reports (http://water.usgs.gov/watuse/).

  2. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...

  3. Multiple imputation for estimating the risk of developing dementia and its impact on survival.

    Science.gov (United States)

    Yu, Binbing; Saczynski, Jane S; Launer, Lenore

    2010-10-01

    Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.

  4. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    International Nuclear Information System (INIS)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K.

    2016-01-01

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  5. Infant bone age estimation based on fibular shaft length: model development and clinical validation

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Andy; Stamoulis, Catherine; Bixby, Sarah D.; Breen, Micheal A.; Connolly, Susan A.; Kleinman, Paul K. [Boston Children' s Hospital, Harvard Medical School, Department of Radiology, Boston, MA (United States)

    2016-03-15

    Bone age in infants (<1 year old) is generally estimated using hand/wrist or knee radiographs, or by counting ossification centers. The accuracy and reproducibility of these techniques are largely unknown. To develop and validate an infant bone age estimation technique using fibular shaft length and compare it to conventional methods. We retrospectively reviewed negative skeletal surveys of 247 term-born low-risk-of-abuse infants (no persistent child protection team concerns) from July 2005 to February 2013, and randomized them into two datasets: (1) model development (n = 123) and (2) model testing (n = 124). Three pediatric radiologists measured all fibular shaft lengths. An ordinary linear regression model was fitted to dataset 1, and the model was evaluated using dataset 2. Readers also estimated infant bone ages in dataset 2 using (1) the hemiskeleton method of Sontag, (2) the hemiskeleton method of Elgenmark, (3) the hand/wrist atlas of Greulich and Pyle, and (4) the knee atlas of Pyle and Hoerr. For validation, we selected lower-extremity radiographs of 114 normal infants with no suspicion of abuse. Readers measured the fibulas and also estimated bone ages using the knee atlas. Bone age estimates from the proposed method were compared to the other methods. The proposed method outperformed all other methods in accuracy and reproducibility. Its accuracy was similar for the testing and validating datasets, with root-mean-square error of 36 days and 37 days; mean absolute error of 28 days and 31 days; and error variability of 22 days and 20 days, respectively. This study provides strong support for an infant bone age estimation technique based on fibular shaft length as a more accurate alternative to conventional methods. (orig.)

  6. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, consideration was given to the fact that the input data for the required regression model should be easily extractable from sketches or the scope definition of the project. 11 regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r² for the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the models fit the real-life data well. The mean absolute percentage error (MAPE) of the developed regression models ranges from 13% to 31%; these results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
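The multiple-regression approach can be sketched on synthetic length/width data (invented coefficients and noise, not the West Bank dataset), fitting by ordinary least squares via the normal equations and reporting r² and MAPE as in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic projects (assumed data): cost driven by road length (km)
# and width (m), mimicking the paper's length/width input models.
n = 131
length = rng.uniform(1, 20, n)
width = rng.uniform(6, 16, n)
cost = (200_000 + 120_000 * length + 8_000 * length * width
        + rng.normal(0, 50_000, n))

# Ordinary least squares fit with an interaction term.
X = np.column_stack([np.ones(n), length, width, length * width])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ beta

r2 = 1 - np.sum((cost - pred) ** 2) / np.sum((cost - cost.mean()) ** 2)
mape = float(np.mean(np.abs((cost - pred) / cost)) * 100)
print(f"r^2 = {r2:.3f}, MAPE = {mape:.1f}%")
```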

  7. Reassessing Wind Potential Estimates for India: Economic and Policy Implications

    Energy Technology Data Exchange (ETDEWEB)

    Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet

    2011-09-15

    We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (no farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.

  8. Integral Criticality Estimators in MCATK

    Energy Technology Data Exchange (ETDEWEB)

    Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory

    2016-06-14

The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK's usefulness in estimating these important quantities.
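The cycle-wise k_eff estimator idea can be illustrated with a deliberately tiny toy: an infinite homogeneous medium in which every neutron is absorbed and, with some probability, the absorption is a fission emitting nu neutrons, so that analytically k_eff = nu × p_fission. This is only a sketch of the generation-based estimation concept, not MCATK's implementation.

```python
import random

# Toy generation-based Monte Carlo k_eff estimate in an infinite
# homogeneous medium. Analytically k_eff = nu * p_fission.
def run_cycles(p_fission, nu, n_start=2000, n_cycles=60, n_inactive=10, seed=1):
    random.seed(seed)
    estimates = []
    n = n_start
    for _ in range(n_cycles):
        daughters = 0
        for _ in range(n):
            if random.random() < p_fission:   # absorption causes fission
                daughters += nu               # emit nu neutrons
        estimates.append(daughters / n)       # cycle-wise k estimate
        n = min(max(daughters, 100), 5000)    # keep the population bounded
    active = estimates[n_inactive:]           # discard inactive cycles
    return sum(active) / len(active)

# analytically k_eff = 2 * 0.5 = 1.0 for this toy medium
k = run_cycles(p_fission=0.5, nu=2)
```

Real eigenvalue iterations also track source convergence and report statistical uncertainties per cycle; the inactive-cycle discard above mirrors that practice in miniature.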

  9. Magnetic resonance imaging of third molars: developing a protocol suitable for forensic age estimation.

    Science.gov (United States)

    De Tobel, Jannick; Hillewig, Elke; Bogaert, Stephanie; Deblaere, Karel; Verstraete, Koenraad

    2017-03-01

Established dental age estimation methods in sub-adults study the development of third molar root apices on radiographs. In living individuals, however, avoiding ionising radiation is expedient. Studying dental development with magnetic resonance imaging complies with this requirement, adding the advantage of imaging in three dimensions. The aim was to develop an MRI protocol that visualises all third molars for forensic age estimation, with particular attention to the development of the root apex. Ex vivo scans of porcine jaws and in vivo scans of 10 volunteers aged 17-25 years were performed to select adequate sequences. Studied parameters were T1 vs T2 weighting, ultrashort echo time (UTE), fat suppression, in-plane resolution, slice thickness, 3D imaging, signal-to-noise ratio, and acquisition time. A bilateral four-channel flexible surface coil was used. Two observers evaluated the suitability of the images. T2-weighted images were preferred to T1-weighted images. To clearly distinguish root apices in (almost) fully developed third molars, an in-plane resolution of 0.33 × 0.33 mm² was deemed necessary. Taking acquisition time limits into account, only a T2 FSE sequence with a slice thickness of 2 mm generated images with sufficient resolution and contrast. UTE, thinner-slice T2 FSE and T2 3D FSE sequences could not generate the desired resolution within 6.5 minutes. Three Tesla MRI of the third molars is a feasible technique for forensic age estimation, in which a T2 FSE sequence can provide the desired in-plane resolution within a clinically acceptable acquisition time.

  10. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

Variogram estimation plays an important role in spatial modeling. Methods for variogram estimation can be broadly classified into least squares methods and likelihood-based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative to likelihood-based methods and includes commonly used least squares approaches as special cases. The proposed method is highly efficient because a low-dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
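One of the classical least-squares special cases covered by such a framework is the weighted least-squares fit of a parametric model to empirical semivariances. The sketch below fits an exponential variogram by a coarse grid search; the lags, semivariances, pair counts and grid are illustrative, and this is not the paper's estimating-equation method.

```python
import math

# Fit an exponential variogram model
#   gamma(h) = sill * (1 - exp(-h / rng))
# to empirical semivariances by weighted least squares, weighting each
# lag by its number of point pairs. All values are illustrative.
lags = [1, 2, 4, 8, 16]
gamma_hat = [0.35, 0.61, 0.86, 0.98, 1.00]   # empirical semivariance
npairs = [500, 480, 450, 400, 300]           # pairs per lag (weights)

def wls_loss(sill, rng):
    loss = 0.0
    for h, g, n in zip(lags, gamma_hat, npairs):
        model = sill * (1.0 - math.exp(-h / rng))
        loss += n * (g - model) ** 2
    return loss

# coarse grid search over (sill, range) in lieu of a numerical optimiser
best = min(((s / 50.0, r / 10.0)
            for s in range(25, 101)          # sill in [0.5, 2.0]
            for r in range(5, 101)),         # range in [0.5, 10.0]
           key=lambda p: wls_loss(*p))
sill, rng = best
```

A likelihood-based alternative would instead maximise the Gaussian log-likelihood of the data under the implied covariance; the estimating-equation framework of the paper interpolates between these extremes.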


  12. Cost estimating for CERCLA remedial alternatives a unit cost methodology

    International Nuclear Information System (INIS)

    Brettin, R.W.; Carr, D.J.; Janke, R.J.

    1995-06-01

The United States Environmental Protection Agency (EPA) Guidance for Conducting Remedial Investigations and Feasibility Studies Under CERCLA, Interim Final, dated October 1988 (EPA 1988) requires that a detailed analysis be conducted of the most promising remedial alternatives against several evaluation criteria, including cost. To complete the detailed analysis, order-of-magnitude cost estimates (having an accuracy of +50 percent to -30 percent) must be developed for each remedial alternative. This paper presents a methodology for developing cost estimates of remedial alternatives comprising various technology and process options with a wide range of estimated contaminated media quantities. In addition, the cost estimating methodology provides flexibility for incorporating revisions to remedial alternatives and achieves the desired range of accuracy. It is important to note that the cost estimating methodology presented here was developed as a concurrent path to the development of contaminated media quantity estimates, and it can be initiated before contaminated media quantities are estimated. As a result, this methodology is useful in developing cost estimates for use in screening and evaluating remedial technologies and process options; however, remedial alternative cost estimates cannot be prepared without the contaminated media quantity estimates. In the conduct of the feasibility study for Operable Unit 5 at the Fernald Environmental Management Project (FEMP), fourteen remedial alternatives were retained for detailed analysis. Each remedial alternative was composed of combinations of remedial technologies and processes earlier determined to be best suited to addressing the media-specific contaminants found at the FEMP site and achieving the desired remedial action objectives.

  13. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    International Nuclear Information System (INIS)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.; Salas, E.; Martin, N.

    2008-01-01

The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that combines information on the physical inventory of the whole plant with radiological survey data. Estimates of the radiological inventory for all components and civil structures of the plant can be obtained with mathematical models based on a statistical approach. A computer application has been developed to obtain the radiological inventory in an automatic way. Results: A computer application has been developed that is able to estimate the radiological inventory from the radiological measurements or the characterization program. This application includes the statistical functions needed to estimate central tendency and variability, e.g. mean, median, variance, confidence intervals, variation coefficients, etc. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
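The statistical functions listed above amount to standard summary statistics per component. A minimal sketch, with illustrative activity measurements (Bq/g) and a normal-approximation confidence interval:

```python
import math
import statistics

# Summary statistics an inventory application needs for each component's
# activity measurements: central tendency, variability, and a
# normal-approximation 95% confidence interval for the mean.
# The sample values (Bq/g) are illustrative.
activity = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 12.7, 9.9, 11.8, 12.4]

mean = statistics.mean(activity)
median = statistics.median(activity)
s = statistics.stdev(activity)                 # sample standard deviation
cv = s / mean                                  # coefficient of variation
half = 1.96 * s / math.sqrt(len(activity))     # z-based half-width
ci95 = (mean - half, mean + half)
```

With small samples a t-based interval would be more appropriate; the z-factor 1.96 is used here only to keep the sketch short.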

  14. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on pipeline length, diameter, location, volume and year of completion. In pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.

  15. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

    located in Region 1, 120 were located in Region 2, and 10 were located in Region 3. Streamgages located outside of Missouri were selected to extend the range of data used for the independent variables in the regression analyses. Streamgages included in the regression analyses had 10 or more years of record and were considered to be affected minimally by anthropogenic activities or trends. Regional regression analyses identified three characteristics as statistically significant for the development of regional equations. For Region 1, drainage area, longest flow path, and streamflow-variability index were statistically significant. The range in the standard error of estimate for Region 1 is 79.6 to 94.2 percent. For Region 2, drainage area and streamflow variability index were statistically significant, and the range in the standard error of estimate is 48.2 to 72.1 percent. For Region 3, drainage area and streamflow-variability index also were statistically significant with a range in the standard error of estimate of 48.1 to 96.2 percent. Limitations on the use of estimating low-flow frequency statistics at ungaged locations are dependent on the method used. The first method outlined for use in Missouri, power curve equations, were developed to estimate the selected statistics for ungaged locations on 28 selected streams with multiple streamgages located on the same stream. A second method uses a drainage-area ratio to compute statistics at an ungaged location using data from a single streamgage on the same stream with 10 or more years of record. Ungaged locations on these streams may use the ratio of the drainage area at an ungaged location to the drainage area at a streamgage location to scale the selected statistic value from the streamgage location to the ungaged location. This method can be used if the drainage area of the ungaged location is within 40 to 150 percent of the streamgage drainage area. 
The third method is the use of the regional regression equations
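The second method above, the drainage-area ratio, is simple enough to write out directly, including its 40-150 percent applicability bound. The statistic and areas below are illustrative values, not figures from the report.

```python
# Drainage-area-ratio method: scale a low-flow statistic from a gaged
# site to an ungaged site on the same stream, valid only when the
# ungaged drainage area is within 40-150 percent of the gaged one.
def scale_low_flow(q_gage, area_gage, area_ungaged):
    ratio = area_ungaged / area_gage
    if not 0.40 <= ratio <= 1.50:
        raise ValueError("drainage-area ratio outside the 40-150% range")
    return q_gage * ratio

# e.g. a 7Q10 of 3.2 cfs at a 250 mi^2 streamgage, ungaged site at 180 mi^2
q_ungaged = scale_low_flow(3.2, 250.0, 180.0)   # 3.2 * 0.72 = 2.304
```

Outside the ratio bound, the report's regional regression equations (method three) would be the appropriate fallback.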

  16. Theories of estimation of differentiation for regulation of social-economic development of the city agglomeration

    OpenAIRE

    Anikina, Yu; Litovchenko, V.

    2009-01-01

Methods for estimating the differentiation of socio-economic development among territorial units in a city agglomeration are discussed in the article. Applying these methods made it possible to assess how successfully the municipal development of administrative-territorial units in the Krasnoyarsk agglomeration is regulated, and to set goals for regional policy based on the specific features of this differentiation.

  17. Aircraft bi-level life cycle cost estimation

    NARCIS (Netherlands)

    Zhao, X.; Verhagen, W.J.C.; Curan, R.

    2015-01-01

In an integrated aircraft design and analysis practice, Life Cycle Cost (LCC) is essential for decision making. The LCC of an aircraft is ordinarily estimated only in part, by emphasizing a specific cost type. However, an overview of the LCC including design and development cost, production cost,

  18. Contributions of national and global health estimates to monitoring health-related Sustainable Development Goals in Thailand.

    Science.gov (United States)

    Bundhamcharoen, Kanitta; Limwattananon, Supon; Kusreesakul, Khanitta; Tangcharoensathien, Viroj

    2017-01-01

The Millennium Development Goals (MDGs) triggered increased demand for data on child and maternal mortality for monitoring progress. With the advent of the Sustainable Development Goals (SDGs) and growing evidence of an epidemiological transition towards non-communicable diseases, policy makers need data on mortality and disease trends and distribution to inform effective policies and support monitoring progress. Where there are limited capacities to produce national health estimates (NHEs), global health estimates (GHEs) can fill gaps for global monitoring and comparisons. This paper draws lessons learned from Thailand's burden of disease study (BOD) on capacity development for NHEs, and discusses the contributions and limitations of GHEs in informing policies at country level. Through training and technical support by external partners, capacities were gradually strengthened and institutionalized to enable regular updates of the BOD at national and sub-national levels. Initially, the quality of cause-of-death reporting on death certificates was inadequate, especially for deaths occurring in the community. Verbal autopsies were conducted, using domestic resources, to determine probable causes of deaths occurring in the community. This helped improve the estimation of years of life lost. Since the achievement of universal health coverage in 2002, the quality of clinical data on morbidities has also considerably improved. There are significant discrepancies between the 2010 Global Burden of Diseases (GBD) estimates for Thailand and the 1999 nationally generated BOD, especially for years of life lost due to HIV/AIDS, and the ranking of priority diseases. National ownership of NHEs and effective interfaces between researchers and decision makers contribute to enhanced country policy responses, while sub-national data are intended to be used by various sub-national-level partners. Though GHEs contribute to benchmarking country achievement compared with global health

  19. Pose estimation for mobile robots working on turbine blade

    Energy Technology Data Exchange (ETDEWEB)

    Ma, X.D.; Chen, Q.; Liu, J.J.; Sun, Z.G.; Zhang, W.Z. [Tsinghua Univ., Beijing (China). Key Laboratory for Advanced Materials Processing Technology, Ministry of Education, Dept. of Mechanical Engineering

    2009-03-11

This paper discussed a feature-point detection and matching technique for mobile robots used in wind turbine blade applications. The vision-based scheme used visual information from the robot's surrounding environment to match successive image frames. An improved pose estimation algorithm based on the scale invariant feature transform (SIFT) was developed to account for the characteristics of local images of turbine blades and the conditions of the pose estimation problem. The method included a pre-subsampling technique for reducing computation and bidirectional matching for improving precision. A random sample consensus (RANSAC) method was used to estimate the robot's pose. Pose estimation conditions included a wide pose range; the distance between neighbouring blades; and mechanical, electromagnetic, and optical disturbances. An experimental platform was used to demonstrate the validity of the proposed algorithm. 20 refs., 6 figs.
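The RANSAC step above follows the usual pattern: fit a model from a minimal random sample, count inliers within a tolerance, and keep the consensus best. The toy below applies that pattern to 2D line fitting with gross outliers standing in for mismatched features; it illustrates the principle only, not the paper's 6-DOF pose estimator.

```python
import random

# Toy RANSAC: fit a line through noisy inliers contaminated by outliers.
def ransac_line(points, iters=200, tol=0.5, seed=7):
    random.seed(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best_inliers):             # consensus test
            best_inliers, best_model = inliers, (m, b)
    return best_model, best_inliers

# inliers near y = 2x + 1, plus gross outliers (mismatched features)
pts = [(x, 2 * x + 1 + 0.1 * ((-1) ** x)) for x in range(10)]
pts += [(2, 30.0), (5, -20.0), (8, 40.0)]
model, inliers = ransac_line(pts)
```

In the pose-estimation setting the minimal sample is a small set of SIFT correspondences and the model is the robot pose, but the sample/score/keep loop is the same.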

  20. Using the Optical Fractionator to Estimate Total Cell Numbers in the Normal and Abnormal Developing Human Forebrain

    DEFF Research Database (Denmark)

    Larsen, Karen B

    2017-01-01

    abnormal development. Furthermore, many studies of brain cell numbers have employed biased counting methods, whereas innovations in stereology during the past 20-30 years enable reliable and efficient estimates of cell numbers. However, estimates of cell volumes and densities in fetal brain samples...

  1. An epidemiological modelling study to estimate the composition of HIV-positive populations including migrants from endemic settings

    DEFF Research Database (Denmark)

    Nakagawa, F; Delpech, V; Albert, J

    2017-01-01

OBJECTIVE: Migrants account for a significant number of people living with HIV in Europe, and it is important to fully consider this population in national estimates. Using a novel approach with the UK as an example, we present key public health measures of the HIV epidemic, taking into account ... of these people, 24 600 (15 000-36 200) were estimated to be undiagnosed; this number has remained stable over the last decade. An estimated 32% of the total undiagnosed population had CD4 cell count less than 350 cells/μl in 2013. Twenty-five and 23% of black African men and women heterosexuals living with HIV ... were undiagnosed respectively. CONCLUSION: We have shown a working example to characterize the HIV population in a European context which incorporates migrants from countries with generalized epidemics. Despite all aspects of HIV care being free and widely available to anyone in need in the UK ...

  2. An automated background estimation procedure for gamma ray spectra

    International Nuclear Information System (INIS)

    Tervo, R.J.; Kennett, T.J.; Prestwich, W.V.

    1983-01-01

An objective and simple method has been developed to estimate the background continuum in Ge gamma ray spectra. Requiring no special procedures, the method is readily automated. Based upon the inherent statistical properties of the experimental data itself, nodes which reflect background samples are located and used to produce an estimate of the continuum. A simple procedure to interpolate between nodes is reported and a range of rather typical experimental data is presented. All information necessary to implement this technique is given, including the relevant properties of various factors involved in its development. (orig.)
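The node-and-interpolation idea can be sketched directly: treat channels that look statistically like background (local minima whose neighbours agree within Poisson counting noise, sigma ≈ sqrt(N)) as nodes, then linearly interpolate the continuum between them. The selection criterion and spectrum below are illustrative, not the paper's exact rules.

```python
import math

def find_nodes(counts, k=2.0):
    """Channels that look like background: local minima whose neighbours
    lie within k*sqrt(N) (Poisson noise) of the channel's counts."""
    nodes = [0]
    for i in range(1, len(counts) - 1):
        n = counts[i]
        noise = k * math.sqrt(max(n, 1))
        if counts[i - 1] >= n and counts[i + 1] >= n \
                and counts[i - 1] - n <= noise and counts[i + 1] - n <= noise:
            nodes.append(i)
    nodes.append(len(counts) - 1)
    return nodes

def interpolate_background(counts, nodes):
    """Linear interpolation of the continuum between successive nodes."""
    bg = [0.0] * len(counts)
    for a, b in zip(nodes, nodes[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            bg[i] = counts[a] + t * (counts[b] - counts[a])
    return bg

# toy spectrum: flat noisy background with one peak around channel 4
spectrum = [100, 97, 99, 180, 300, 170, 98, 96, 99, 97]
nodes = find_nodes(spectrum)
bg = interpolate_background(spectrum, nodes)
```

Subtracting `bg` from `spectrum` leaves the net peak counts; the peak channels themselves are excluded as nodes because their neighbours differ by far more than counting noise.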

  3. An automated technique to stage lower third molar development on panoramic radiographs for age estimation: a pilot study.

    Science.gov (United States)

    De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W

    2017-12-01

    Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). To develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer Learning as a type of Deep Learning Convolutional Neural Network approach outperformed all other tested approaches. 
Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was
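One of the validation metrics named above, the linearly weighted kappa, rewards near-miss stage predictions over distant ones. A self-contained sketch follows; the stage labels and toy predictions are illustrative, not the study's classifier output.

```python
from collections import Counter

def linear_weighted_kappa(ref, pred, n_stages):
    """Linearly weighted kappa for ordinal stage labels 0..n_stages-1:
    1 minus the ratio of observed to chance-expected disagreement,
    with disagreement weighted by |i - j| / (n_stages - 1)."""
    n = len(ref)
    w = lambda i, j: abs(i - j) / (n_stages - 1)
    observed = sum(w(r, p) for r, p in zip(ref, pred)) / n
    cr, cp = Counter(ref), Counter(pred)   # marginal label counts
    expected = sum(w(i, j) * cr[i] * cp[j]
                   for i in range(n_stages) for j in range(n_stages)) / n ** 2
    return 1.0 - observed / expected

# reference stages vs. hypothetical automated predictions (toy data)
ref = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
pred = [0, 1, 3, 3, 4, 4, 6, 8, 8, 9]
kappa = linear_weighted_kappa(ref, pred, 10)
```

Because all three errors in the toy data are off by a single stage, the weighted kappa stays high even though plain accuracy would be only 0.7.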

  4. Developing a new solar radiation estimation model based on Buckingham theorem

    Science.gov (United States)

    Ekici, Can; Teke, Ismail

    2018-06-01

While solar radiation can be expressed physically on cloudless days, this becomes difficult in cloudy and complicated weather conditions. In addition, solar radiation measurements are often not taken in developing countries. In such cases, solar radiation estimation models are used; they estimate solar radiation from other meteorological parameters that are measured at the stations. In this study, a solar radiation estimation model was obtained using the Buckingham π theorem, which is used to express solar radiation through the derivation of dimensionless pi parameters. The derived model is compared with temperature-based models in the literature: the Allen, Hargreaves, Chen and Bristow-Campbell models. MPE, RMSE, MBE and NSE error analysis methods are used in this comparison, with meteorological data obtained from North Dakota's agricultural climate network. In these applications, the model obtained within the scope of the study gives better results; in terms of short-term performance in particular, it has been found to give satisfactory results, as is evident in the RMSE analysis. The Buckingham π theorem was thus found useful in estimating solar radiation, and the model also gives good results in terms of long-term performance and percentage errors.
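The four comparison statistics used above can be written out explicitly. The observed/estimated daily radiation values below are illustrative placeholders, not the North Dakota data.

```python
import math

# Error statistics used to compare solar radiation models.
def mpe(obs, est):   # mean percentage error
    return 100.0 / len(obs) * sum((e - o) / o for o, e in zip(obs, est))

def mbe(obs, est):   # mean bias error
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

def rmse(obs, est):  # root-mean-square error
    return math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / len(obs))

def nse(obs, est):   # Nash-Sutcliffe efficiency (1.0 is a perfect model)
    mean_o = sum(obs) / len(obs)
    num = sum((e - o) ** 2 for o, e in zip(obs, est))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

# illustrative daily solar radiation values (MJ/m^2/day)
obs = [18.2, 20.5, 22.1, 19.4, 16.8, 21.0]
est = [17.9, 21.2, 21.5, 19.8, 17.3, 20.4]
```

RMSE penalises large misses quadratically while MBE and MPE expose systematic over- or under-prediction, which is why the study reports all of them side by side.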

  5. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  6. Orion Exploration Flight Test 1 (EFT-1) Best Estimated Trajectory Development

    Science.gov (United States)

    Holt, Greg N.; Brown, Aaron

    2016-01-01

The Orion Exploration Flight Test 1 (EFT-1) mission successfully flew on Dec 5, 2014 atop a Delta IV Heavy launch vehicle. The goal of Orion's maiden flight was to stress the system by placing an uncrewed vehicle on a high-energy trajectory replicating conditions similar to those that would be experienced when returning from an asteroid or a lunar mission. The Orion navigation team combined all trajectory data from the mission into a Best Estimated Trajectory (BET) product. There were significant challenges in data reconstruction and many lessons were learned for future missions. The team used an estimation filter incorporating radar tracking, onboard sensors (Global Positioning System and Inertial Measurement Unit), and day-of-flight weather balloons to evaluate the true trajectory flown by Orion. Data was published for the entire Orion EFT-1 flight, plus objects jettisoned during entry such as the Forward Bay Cover. The BET customers include approximately 20 disciplines within Orion who will use the information for evaluating vehicle performance and influencing future design decisions.

  7. Estimating WACC for Regulated Industries on Developing Financial Markets and in Times of Market Uncertainty

    Directory of Open Access Journals (Sweden)

    Igor Stubelj

    2014-03-01

Full Text Available The paper deals with the estimation of the weighted average cost of capital (WACC) for regulated industries in developing financial markets from the perspective of the current financial-economic crisis. In the current financial market situation some evident changes have occurred: risk-free rates in solid and developed financial markets (e.g. USA, Germany) have fallen, but due to increased market volatility, risk premiums have increased. The latter is especially evident in transition economies, where the amplitude of market volatility is extremely high. In such circumstances, the question is how to calculate WACC properly. WACC is an important measure in financial management decisions and, in our case, business regulation. We argue in the paper that the most accurate method for calculating WACC is to estimate a long-term WACC, which takes into consideration a long-term stable yield of capital rather than current market conditions. Following this, we propose some solutions that could be used for calculating WACC for regulated industries in developing financial markets in times of market uncertainty. As an example, we present an estimation of the cost of capital for a selected Slovenian company, which operates in the regulated industry of electric distribution.
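The after-tax WACC identity the paper builds on can be stated in a few lines. The capital-structure weights, component costs and tax rate below are illustrative, not the Slovenian company's actual figures.

```python
# Standard after-tax weighted average cost of capital:
#   WACC = E/V * r_e + D/V * r_d * (1 - t)
# where E is equity, D is debt, V = E + D, r_e the cost of equity
# (e.g. from CAPM with a long-term risk-free rate, as the paper argues),
# r_d the cost of debt, and t the corporate tax rate.
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

# illustrative inputs: 60% equity at 9.5%, 40% debt at 5.0%, 17% tax
w = wacc(equity=60.0, debt=40.0, cost_equity=0.095,
         cost_debt=0.050, tax_rate=0.17)
```

The paper's point is about the inputs, not the identity: in volatile developing markets, r_e should be derived from long-term stable yields rather than crisis-period market data.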

  8. Gender, renal function, and outcomes on the liver transplant waiting list: assessment of revised MELD including estimated glomerular filtration rate.

    Science.gov (United States)

    Myers, Robert P; Shaheen, Abdel Aziz M; Aspinall, Alexander I; Quinn, Robert R; Burak, Kelly W

    2011-03-01

The Model for End-Stage Liver Disease (MELD) allocation system for liver transplantation (LT) may present a disadvantage for women by including serum creatinine, which is typically lower in females. Our objectives were to investigate gender disparities in outcomes among LT candidates and to assess a revised MELD, including estimated glomerular filtration rate (eGFR), for predicting waiting list mortality. Adults registered for LT between 2002 and 2007 were identified using the UNOS database. We compared components of MELD, MDRD-derived eGFR, and the 3-month probability of LT and death between genders. Discrimination of MELD, MELDNa, and revised models including eGFR for mortality were compared using c-statistics. A total of 40,393 patients (36% female) met the inclusion criteria; 9% died and 24% underwent LT within 3 months of listing. Compared with men, women had lower median serum creatinine (0.9 vs. 1.0 mg/dl), eGFR (72 vs. 83 ml/min/1.73 m²), and mean MELD (16.5 vs. 17.2; all p ...) ... discrimination for 3-month mortality (c-statistics: MELD 0.896, MELD-eGFR 0.894, MELDNa 0.911, MELDNa-eGFR 0.905). Women are disadvantaged under MELD potentially due to its inclusion of creatinine. However, since including eGFR in MELD does not improve mortality prediction, alternative refinements are necessary. Copyright © 2010 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
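The two scores under discussion can be written out as commonly specified (the UNOS MELD with lab values floored at 1.0 and creatinine capped at 4.0 mg/dl, and the 4-variable MDRD eGFR). The sketch is for illustration only; consult the primary definitions before any clinical or research use, and note that the patient values are hypothetical.

```python
import math

def meld(creatinine, bilirubin, inr):
    """MELD as commonly specified (UNOS): labs floored at 1.0,
    creatinine capped at 4.0 mg/dl."""
    cr = min(max(creatinine, 1.0), 4.0)
    bi = max(bilirubin, 1.0)
    rr = max(inr, 1.0)
    return 10.0 * (0.957 * math.log(cr) + 0.378 * math.log(bi)
                   + 1.120 * math.log(rr) + 0.643)

def mdrd_egfr(creatinine, age, female, black=False):
    """4-variable MDRD estimated GFR (ml/min/1.73 m^2)."""
    egfr = 175.0 * creatinine ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical pair with roughly the same estimated GFR: the woman's
# lower creatinine yields a lower MELD for equally severe liver disease.
meld_m = meld(creatinine=1.6, bilirubin=3.0, inr=1.5)
meld_f = meld(creatinine=1.24, bilirubin=3.0, inr=1.5)
```

The pair makes the paper's disparity argument concrete: identical liver labs and similar renal function, yet the female candidate scores lower, and hence lower allocation priority, purely through the creatinine term.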

  9. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
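A minimal instance of the chapter's theme: estimating a rate constant k for first-order decay C(t) = C0·exp(−k·t) by linear least squares on ln C. The concentration data below are synthetic, generated with k = 0.30 min⁻¹.

```python
import math

# Estimate a first-order rate constant from (time, concentration) data
# by linear regression on the log-transformed model ln C = ln C0 - k*t.
times = [0, 1, 2, 4, 6, 8]                      # minutes
conc = [1.00, 0.74, 0.55, 0.30, 0.165, 0.091]   # synthetic, k = 0.30 1/min

ys = [math.log(c) for c in conc]
n = len(times)
mt = sum(times) / n
my = sum(ys) / n
slope = sum((t - mt) * (y - my) for t, y in zip(times, ys)) \
        / sum((t - mt) ** 2 for t in times)
k_hat = -slope   # estimated rate constant, 1/min
```

For nonlinear or multi-reaction models the same idea generalises to the optimisation-plus-dynamic-solution approach the chapter describes, where the model ODEs are solved (or collocated) inside the objective function.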

  10. Development of internal dose calculation model and the data base updated IDES (Internal Dose Estimation System)

    International Nuclear Information System (INIS)

    Hongo, Shozo; Yamaguchi, Hiroshi; Takeshita, Hiroshi; Iwai, Satoshi.

    1994-01-01

A computer program named IDES was developed in BASIC for a personal computer and translated to C on an engineering workstation. IDES carries out the internal dose calculations described in ICRP Publication 30 and implements the transformation method, an empirical method for estimating absorbed fractions for physiques that differ from the ICRP Reference Man. The program consists of three tasks: production of SAFs for Japanese, including children; production of SEE (Specific Effective Energy) values; and calculation of effective dose equivalents. Each task and its corresponding data file appear as a module, so as to meet future requirements for revisions of the related data. The usefulness of IDES is discussed by exemplifying the case in which five age groups of Japanese orally ingest Co-60 or Mn-54. (author)

  11. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    Science.gov (United States)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    Evaporative loss is a dominant component of the Lake Tahoe hydrologic budget because the watershed area (813 km2) is small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km3. The lake serves as flood protection for downstream areas and as a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms that correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the US Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the correlation coefficient (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature were in agreement with measured records, with R2 equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.
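Calibrating a bulk transfer coefficient against an R2 objective can be illustrated with a toy example. The bulk-aerodynamic form, the data values, and the candidate coefficients below are hypothetical stand-ins, not values from the DLM-WQ study.

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between observations and simulations."""
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy bulk-aerodynamic evaporation model: E = c * wind speed * vapour pressure deficit
wind = np.array([2.0, 3.5, 5.0, 4.2])
vpd = np.array([1.1, 0.9, 1.4, 1.2])
observed = 1.3e-3 * wind * vpd                 # pretend measurements (true c = 1.3e-3)

# Pick the candidate coefficient that maximises R2 against the observations
candidates = [1.0e-3, 1.3e-3, 1.6e-3]
best_c = max(candidates, key=lambda c: r_squared(observed, c * wind * vpd))
```

A real calibration would search the coefficient continuously and against multi-year records, but the objective-function structure is the same.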

  12. Condition monitoring of a motor-operated valve using estimated motor torque

    International Nuclear Information System (INIS)

    Chai, Jangbom; Kang, Shinchul; Park, Sungkeun; Hong, Sungyull; Lim, Chanwoo

    2004-01-01

    This paper is concerned with the development of data analysis methods for effective and accurate on-line monitoring and diagnosis of Motor-Operated Valves (MOVs). The technique utilizes electrical measurements and signal processing to estimate the electric torque of the induction motors that drive most MOV systems. To validate the estimation scheme, the estimated torque of an induction motor is compared with torque measured directly with a torque cell under various loading conditions, including degraded-voltage conditions, and the accuracy of the scheme is presented. The advantages of the estimated torque signature over currently used signatures, such as the current signature and the power signature, are reviewed in several respects: accuracy, sensitivity, resolution, and so on. The estimated torque method is therefore suggested as a good way to monitor the condition of MOVs with higher accuracy. (author)
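A common way to estimate induction motor torque from electrical measurements alone is the air-gap torque expression in the stationary alpha-beta frame, T = (3/2) p (lambda_a i_b - lambda_b i_a), with the flux linkages obtained from stator voltages and currents. The sketch below uses synthetic balanced sinusoids and an assumed pole-pair count; the paper does not give its exact formulation, so this is only a plausible illustration.

```python
import math

p = 2                                    # pole pairs (assumed)
flux, current, delta = 0.9, 10.0, 0.3    # flux linkage (Wb), current (A), phase shift (rad)

torques = []
for n in range(100):                     # sample one electrical cycle
    wt = 2 * math.pi * n / 100
    lam_a, lam_b = flux * math.cos(wt), flux * math.sin(wt)
    i_a, i_b = current * math.cos(wt + delta), current * math.sin(wt + delta)
    torques.append(1.5 * p * (lam_a * i_b - lam_b * i_a))

# For balanced sinusoids the estimate is constant: 1.5 * p * flux * current * sin(delta)
```

In practice the flux linkages are integrated from measured v - R*i, so sensor offsets and the stator resistance estimate dominate the error budget.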

  13. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    Science.gov (United States)

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation.
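A factorial equation of the kind the mechanistic component uses splits the requirement into a maintenance term scaled on metabolic body weight and a growth term proportional to daily gain. The coefficients below are illustrative assumptions, not those of the proposed model.

```python
# Hypothetical factorial estimate of the standardized ileal digestible (SID)
# lysine requirement: maintenance + growth. Coefficients are placeholders.
def sid_lysine_g_per_day(bw_kg, daily_gain_g):
    maintenance = 0.036 * bw_kg ** 0.75   # g/d, scaled on metabolic body weight
    growth = 0.0212 * daily_gain_g        # g of SID lysine per g of daily gain
    return maintenance + growth

req = sid_lysine_g_per_day(60.0, 900.0)   # e.g. a 60 kg pig gaining 900 g/d
```

Fed real-time DFI, BW, and DG estimates per animal, such an equation yields an individual daily requirement rather than a phase-average one.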

  14. Modelling and development of estimation and control algorithms: application to a bioprocess; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M

    1995-02-03

    The modelling, estimation and control of an alcoholic fermentation process are the subject of this thesis. A simple mathematical model of a fermentation process is established using experimental results obtained on the plant. This nonlinear model is used for numerical simulation and for the analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bioprocesses is studied. Two estimation techniques are developed and proposed to compensate for the lack of sensors for certain physical variables, and their performance is studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated through a real application to a continuous flow fermentation process. (author) refs.

  15. Radiation dose estimates for radiopharmaceuticals

    International Nuclear Information System (INIS)

    Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.

    1996-04-01

    Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs and several other organs of interest. The dose estimates were calculated using the MIRD technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several respects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. The assumptions that support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
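The calculation underlying such tables is the MIRD schema: the dose to a target organ is the sum over source organs of the residence time multiplied by the S value, D(target) = sum over sources of A_source * S(target <- source). The organ list and numbers below are made-up placeholders, not values from this report's tables.

```python
# Hypothetical residence times (MBq·h per MBq administered) and
# S values to the liver (mGy per MBq·h) -- placeholders for illustration
residence_h = {"liver": 2.0, "kidneys": 0.8, "remainder": 5.0}
s_to_liver = {"liver": 1.2e-5, "kidneys": 3.0e-7, "remainder": 8.0e-8}

# MIRD schema: target dose = sum over source organs of residence time * S value
dose_liver = sum(residence_h[organ] * s_to_liver[organ] for organ in residence_h)
# dose_liver is in mGy per MBq administered
```

Codes like MIRDOSE3 do exactly this sum for every target organ, with S values tabulated per phantom and radionuclide.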

  16. DEVELOPMENT MANAGEMENT TRANSFER PRICING BY APPLICATION OF THE INTERVAL ESTIMATES

    Directory of Open Access Journals (Sweden)

    Elena B. Shuvalova

    2013-01-01

    Full Text Available The article discusses the application of the method of interval estimation to assessing the conformity of a transaction price with the market price. A comparative analysis of interval and point estimates is given, and the positive and negative effects of using interval estimation are identified.

  17. Estimating energy-augmenting technological change in developing country industries

    International Nuclear Information System (INIS)

    Sanstad, Alan H.; Roy, Joyashree; Sathaye, Jayant A.

    2006-01-01

    Assumptions regarding the magnitude and direction of energy-related technological change have long been recognized as critical determinants of the outputs and policy conclusions derived from integrated assessment models. Particularly in the case of developing countries, however, empirical analysis of technological change has lagged behind simulation modeling. This paper presents estimates of sectoral productivity trends and energy-augmenting technological change for several energy-intensive industries in India and South Korea, and, for comparison, the United States. The key findings are substantial heterogeneity among both industries and countries, and a number of cases of declining energy efficiency. The results are subject to certain technical qualifications both in regards to the methodology and to the direct comparison to integrated assessment parameterizations. Nevertheless, they highlight the importance of closer attention to the empirical basis for common modeling assumptions

  18. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical system (MEMS) technology. There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors resulting from manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone over Bluetooth to perform the dead reckoning navigation, which runs on a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system.
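The dead reckoning update itself is a simple accumulation of step length along the estimated heading. The step data below are invented for illustration; the system described above would supply the lengths from the fuzzy logic estimator and the headings from the integrated gyroscope rate.

```python
import math

# Each step: (length in metres, heading in radians clockwise from north)
steps = [(0.7, 0.0), (0.7, 0.0), (0.8, math.pi / 2)]

east = north = 0.0
for length, heading in steps:
    east += length * math.sin(heading)    # eastward component of the step
    north += length * math.cos(heading)   # northward component of the step
# Two steps north then one step east: east ~ 0.8 m, north ~ 1.4 m
```

Because every step's heading and length error feeds into all later positions, the accumulated drift is what the particle filter and map constraints are there to correct.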

  19. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical system (MEMS) technology. There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors resulting from manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone over Bluetooth to perform the dead reckoning navigation, which runs on a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its

  20. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
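One simple way to get a feel for cell-level interpolation uncertainty, not necessarily the NCEI methodology itself, is leave-one-out cross-validation of the gridding step: withhold each sounding, predict it from the rest, and treat the residuals as an error proxy. The soundings and the inverse-distance interpolator below are toy assumptions.

```python
import numpy as np

# Toy soundings (x, y, depth); withhold each in turn and predict it from the rest
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 12.0], [0.0, 1.0, 11.0],
                [1.0, 1.0, 13.0], [0.5, 0.5, 11.5]])

def idw(x, y, sample):
    """Inverse-distance-squared weighted interpolation at (x, y)."""
    d = np.hypot(sample[:, 0] - x, sample[:, 1] - y)
    w = 1.0 / d ** 2
    return float((w * sample[:, 2]).sum() / w.sum())

residuals = [pts[i, 2] - idw(pts[i, 0], pts[i, 1], np.delete(pts, i, axis=0))
             for i in range(len(pts))]
rmse = float(np.sqrt(np.mean(np.square(residuals))))  # rough uncertainty proxy
```

Mapping such residual statistics as a function of distance to the nearest measurement is one route to a per-cell uncertainty surface.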

  1. Estimating haplotype effects for survival data.

    Science.gov (United States)

    Scheike, Thomas H; Martinussen, Torben; Silver, Jeremy D

    2010-09-01

    Genetic association studies often investigate the effect of haplotypes on an outcome of interest. Haplotypes are not observed directly, and this complicates the inclusion of such effects in survival models. We describe a new estimating equations approach for Cox's regression model to assess haplotype effects for survival data. These estimating equations are simple to implement and avoid the use of the EM algorithm, which may be slow in the context of the semiparametric Cox model with incomplete covariate information. These estimating equations also lead to easily computable, direct estimators of standard errors, and thus overcome some of the difficulty in obtaining variance estimators based on the EM algorithm in this setting. We also develop an easily implemented goodness-of-fit procedure for Cox's regression model including haplotype effects. Finally, we apply the procedures presented in this article to investigate possible haplotype effects of the PAF-receptor on cardiovascular events in patients with coronary artery disease, and compare our results to those based on the EM algorithm. © 2009, The International Biometric Society.

  2. Economic assessment of S-prism including development and generating costs

    Energy Technology Data Exchange (ETDEWEB)

    Boardman, Ch.E. [GE Nuclear Energy San Jose (United States)

    2001-07-01

    S-PRISM is an advanced Fast Reactor plant design that utilizes compact modular pool-type reactors sized to enable factory fabrication and an affordable prototype test of a single Nuclear Steam Supply System (NSSS) for design certification at minimum cost and risk. S-PRISM retains all of the key ALMR (advanced liquid metal reactor) design features including passive reactor shutdown, passive shutdown heat removal, and passive reactor cavity cooling that were developed under an earlier DOE program. Key factors that make S-PRISM competitive include: 1) The use of passive safety systems that eliminate the need for diesel generators and hardened active heat sinks to assure that sufficient heat is removed from the core, reactor, and containment systems following design and beyond design basis events. 2) A seven point advantage in the plant capacity factor (93 versus 86%) over a single large plant. 3) A much shorter construction schedule (45%) made possible by a modular design that allows near parallel (sequenced) construction of three relatively small, simple factory fabricated NSSSs instead of one large complex NSSS. This paper describes the approach, methods, and results of an in-depth economic assessment of S-PRISM. The assessment found that the generation cost from an NOAK plant would be less than 3 cents/kW-hr and that a design certification could be obtained in less than 15 years at a cost of 2.1 billion dollars. (authors)

  3. Economic assessment of S-prism including development and generating costs

    International Nuclear Information System (INIS)

    Boardman, Ch.E.

    2001-01-01

    S-PRISM is an advanced Fast Reactor plant design that utilizes compact modular pool-type reactors sized to enable factory fabrication and an affordable prototype test of a single Nuclear Steam Supply System (NSSS) for design certification at minimum cost and risk. S-PRISM retains all of the key ALMR (advanced liquid metal reactor) design features including passive reactor shutdown, passive shutdown heat removal, and passive reactor cavity cooling that were developed under an earlier DOE program. Key factors that make S-PRISM competitive include: 1) The use of passive safety systems that eliminate the need for diesel generators and hardened active heat sinks to assure that sufficient heat is removed from the core, reactor, and containment systems following design and beyond design basis events. 2) A seven point advantage in the plant capacity factor (93 versus 86%) over a single large plant. 3) A much shorter construction schedule (45%) made possible by a modular design that allows near parallel (sequenced) construction of three relatively small, simple factory fabricated NSSSs instead of one large complex NSSS. This paper describes the approach, methods, and results of an in-depth economic assessment of S-PRISM. The assessment found that the generation cost from an NOAK plant would be less than 3 cents/kW-hr and that a design certification could be obtained in less than 15 years at a cost of 2.1 billion dollars. (authors)

  4. An epidemiological modelling study to estimate the composition of HIV-positive populations including migrants from endemic settings.

    Science.gov (United States)

    Nakagawa, Fumiyo

    2017-01-28

    Migrants account for a significant number of people living with HIV in Europe, and it is important to fully consider this population in national estimates. Using a novel approach with the UK as an example, we present key public health measures of the HIV epidemic, taking into account both in-country infections and infections likely to have been acquired abroad. Mathematical model calibrated to extensive data sources. An individual-based stochastic simulation model is used to calibrate to routinely collected surveillance data in the UK. Data on the number of new HIV diagnoses, number of deaths, CD4 cell count at diagnosis, time of arrival into the UK for migrants, and the annual number of people receiving care were used. An estimated 106 400 (90% plausibility range: 88 700-124 600) people were living with HIV in the UK in 2013. Twenty-three percent of these people, 24 600 (15 000-36 200), were estimated to be undiagnosed; this number has remained stable over the last decade. An estimated 32% of the total undiagnosed population had a CD4 cell count less than 350 cells/μl in 2013. An estimated 25% of black African heterosexual men and 23% of black African heterosexual women living with HIV were undiagnosed. We have shown a working example of how to characterize the HIV population in a European context, incorporating migrants from countries with generalized epidemics. Despite all aspects of HIV care being free and widely available to anyone in need in the UK, there is still a substantial number of people who are not yet diagnosed and thus not in care.

  5. Dynamic Analysis of Wind Turbines Including Soil-Structure Interaction

    DEFF Research Database (Denmark)

    Harte, M.; Basu, B.; Nielsen, Søren R.K.

    2012-01-01

    This paper investigates the along-wind forced vibration response of an onshore wind turbine. The study includes the dynamic interaction effects between the foundation and the underlying soil, as softer soils can influence the dynamic response of wind turbines. A Multi-Degree-of-Freedom (MDOF) horizontal axis onshore wind turbine model is developed for dynamic analysis using an Euler-Lagrangian approach. The model is comprised of a rotor blade system, a nacelle and a flexible tower connected to a foundation system using a substructuring approach. The rotor blade system consists of three rotating blades. Response spectra for displacement of the turbine system are obtained and the modal frequencies of the combined turbine-foundation system are estimated. Simulations are presented for the MDOF turbine structure subjected to wind loading for different soil stiffness conditions. Steady state and turbulent wind loading, developed using...

  6. Developing a methodological framework for estimating water productivity indicators in water scarce regions

    Science.gov (United States)

    Mubako, S. T.; Fullerton, T. M.; Walke, A.; Collins, T.; Mubako, G.; Walker, W. S.

    2014-12-01

    Water productivity is an area of growing interest in assessing the impact of human economic activities on water resources, especially in arid regions. Indicators of water productivity can assist water users in evaluating sectoral water use efficiency, identifying sources of pressure on water resources, and supporting water allocation rationale under scarcity conditions. This case study for the water-scarce Middle Rio Grande River Basin aims to develop an environmental-economic accounting approach for water use in arid river basins through a methodological framework that relates water use to the human economic activities impacting regional water resources. Water uses are coupled to economic transactions, and the complex but mutual relations between the various water-using sectors are estimated. A comparison is made between the calculated water productivity indicators and the representative cost/price per unit volume of water for the main water use sectors. Although it contributes very little to regional economic output, preliminary results confirm that irrigation is among the sectors with the largest direct water use intensities. High economic value, low water use intensity sectors in the study region include Manufacturing, Mining, and Steam Electric Power. Water accounting challenges revealed by the study include differences in water management regimes between jurisdictions and limited understanding of the impact of major economic activities on the interaction between surface and groundwater systems in this region. A more comprehensive assessment would require incorporating environmental and social sustainability indicators alongside the calculated water productivity indicators.

  7. Joint U.S./Russian Study on the Development of a Preliminary Cost Estimate of the SAFSTOR Decommissioning Alternative for the Leningrad Nuclear Power Plant Unit #1

    Energy Technology Data Exchange (ETDEWEB)

    SM Garrett

    1998-09-28

    The objectives of the two joint Russian/U.S. Leningrad Nuclear Power Plant (NPP) Unit #1 studies were the development of a safe, technically feasible, economically acceptable decommissioning strategy, and a preliminary cost evaluation of the developed strategy. The first study, resulting in the decommissioning strategy, was performed in 1996 and 1997. The preliminary cost estimation study, described in this report, was performed in 1997 and 1998. The decommissioning strategy study included analyses of three basic RBMK decommissioning alternatives, refined for Leningrad NPP Unit #1. The analyses covered the requirements for the planning and preparation phase as well as the decommissioning phases.

  8. THE SUBSTANTIATION OF THE METHODICAL APPROACH FOR ESTIMATION OF DYNAMICS OF DEVELOPMENT OF TECHNOLOGIES OF OFFSHORE WIND ENERGY USING (THE GERMAN EXAMPLE

    Directory of Open Access Journals (Sweden)

    A. A. Gorlov

    2018-01-01

    Full Text Available Purpose: the introduction of renewable energy (RES) technologies occurs against the backdrop of a developed hydrocarbon energy market, which raises the risk of unreasonable decisions by investors. The development and use of various analytical tools can reduce such risks. Economic models, based on calculations by dozens of experts of a number of macro- and microeconomic factors, have been used to study the replacement of traditional energy technologies with mature RES technologies. At the same time, simpler but effective econometric methods are being developed, based on data from real projects, which allow research on recently launched RES technologies. The main purpose of this article is to substantiate one such methodology, used to assess the growth dynamics of developing offshore wind energy, based on the example of Germany, the leading country in the North Sea basin. Methods: many foreign and domestic authoritative organizations have developed a number of fairly complex models in order to study economic substitution processes in the fuel and energy complexes of different countries and to calculate trends and forecasts in this area. Such models take into account the findings of dozens of experts focusing on various macro- and microeconomic parameters and factors, including GDP, growth of employment, welfare, trade and many others. However, econometric methods based on the study of learning curves and calculations of the present value of electricity (LCOE) from real energy projects tend to be a simpler and effective tool for assessing recently developed RES technologies, for which substantial volumes of data are not yet available. This article substantiates such methodical and mathematical approaches, used to evaluate the dynamics of the development of offshore wind energy technologies using the "Times model", modified by the author. Results: the feasibility analysis of using
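The LCOE calculation such econometric methods rest on is the ratio of discounted lifetime costs to discounted lifetime electricity output. The project figures below are invented placeholders, not data from the German projects the article studies.

```python
# Hypothetical project: upfront capital, then fixed O&M and constant annual
# output over 20 years, discounted at 5%
capex, om_per_year, mwh_per_year, rate, years = 3000.0, 80.0, 3500.0, 0.05, 20

disc_costs = capex + sum(om_per_year / (1 + rate) ** t for t in range(1, years + 1))
disc_energy = sum(mwh_per_year / (1 + rate) ** t for t in range(1, years + 1))
lcoe = disc_costs / disc_energy   # cost per MWh, in the same currency units as the costs
```

Plotting such per-project LCOE values against cumulative installed capacity is what produces the learning curves the article's method relies on.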

  9. Development and validation of a two-dimensional fast-response flood estimation model

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R [Los Alamos National Laboratory; Mcpherson, Timothy N [Los Alamos National Laboratory; Burian, Steven J [UNIV OF UTAK

    2009-01-01

    A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimates of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
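The flavour of such an upwind scheme can be shown on the simplest hyperbolic case, 1-D linear advection; the full shallow water system couples continuity and momentum equations, which this sketch deliberately omits.

```python
import numpy as np

# First-order upwind update for 1-D linear advection h_t + u * h_x = 0, u > 0
nx, dx, dt, u = 50, 1.0, 0.5, 1.0          # Courant number u*dt/dx = 0.5 (stable)
h = np.zeros(nx)
h[10:15] = 1.0                             # initial square pulse
for _ in range(20):
    h[1:] -= u * dt / dx * (h[1:] - h[:-1])  # difference taken toward the flow
# The pulse advects to the right; interior mass is conserved
```

The one-sided (upwind) difference is what keeps the scheme stable without a full Riemann solver, at the cost of some numerical diffusion, which smears the pulse as it moves.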

  10. Development of Estimating Equation of Machine Operational Skill by Utilizing Eye Movement Measurement and Analysis of Stress and Fatigue

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2013-01-01

    Full Text Available Toward the establishment of a skill evaluation method for human support systems, the development of an estimating equation for machine operational skill is presented. Factors of eye movement such as frequency, velocity, and moving distance of saccades were computed using the developed eye gaze measurement system, and eye movement features were determined from these factors. The estimating equation was derived through an outlier test (to eliminate nonstandard data) and a principal component analysis (to find dominant components). Using a cooperative carrying task (cc-task) simulator, the eye movement and operational data of the machine operators were recorded, and the effectiveness of the derived estimating equation was investigated. As a result, it was confirmed that the estimating equation correlated strongly with actual skill levels (r = 0.56-0.84). In addition, the effects of internal conditions such as fatigue and stress on the estimating equation were analyzed using heart rate (HR) and the coefficient of variation of the R-R interval (CVRRI). Correlation analysis between these biosignal indexes and the estimating equation of operational skill found that the equation reflected the effects of stress and fatigue, although it could still estimate the skill level adequately.
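The principal component step can be sketched with a toy feature matrix; the numbers below are invented stand-ins for the saccade frequency, velocity, and distance features, not data from the study.

```python
import numpy as np

# Rows = trials, columns = saccade frequency, mean velocity, moving distance
X = np.array([[2.0, 30.0, 5.0],
              [2.2, 32.0, 5.5],
              [1.8, 28.0, 4.6],
              [2.1, 31.0, 5.2]])

Xc = X - X.mean(axis=0)                   # centre each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]                       # projection on the first principal component
explained = s[0] ** 2 / (s ** 2).sum()    # share of variance captured by that component
```

The loadings in `Vt[0]` are what an estimating equation built from the dominant component would weight each eye movement feature by.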

  11. Methodology development for estimating support behavior of spacer grid spring in core

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Kang, Heung Seok; Kim, Hyung Kyu; Song, Kee Nam

    1998-04-01

    The fuel rod (FR) support behavior changes during operation as a result of effects such as clad creep-down, spring force relaxation due to irradiation, and irradiation growth of spacer straps with time or increasing burnup. The FR support behavior is closely associated with FR damage due to fretting; therefore, analysis of the FR support behavior is normally required to minimize the damage. The characteristics of the parameters that affect the FR support behavior, and the methodology developed for estimating the FR support behavior in the reactor core, are described in this work. The FR support condition for the KOFA (KOrean Fuel Assembly) fuel has been analyzed by this method, and the results of the analysis show that fuel failure due to fuel rod fretting wear is closely related to the support behavior of the FR in the core. Therefore, the present methodology for estimating the FR support condition seems to be useful for estimating the actual FR support condition. In addition, optimization seems to be a reliable tool for establishing the optimal support condition on the basis of these results. (author). 15 refs., 3 tabs., 26 figs.

  12. Comparative estimates of Kamchatka territory development in the context of northern territories of foreign countries

    Directory of Open Access Journals (Sweden)

    Andrey Gennadyevich Shelomentsev

    2014-06-01

    Full Text Available The article promotes an approach to assessing the prospects of regional development on the basis of a synthesis of comparative and historical research methods. According to the authors, comparative analysis of similarly functioning socio-economic systems forms a deeper understanding of the part that factors and methods of state regulation play in regional development, and of their place in socio-economic and geopolitical space. The object of the research is Kamchatka territory, a region playing a strategically important role in the socio-economic development of Russia, together with the northern territories of other countries comparable with Kamchatka on the basis of environmental conditions: Iceland, Greenland, the USA (Alaska), Canada (Yukon), and Japan (Hokkaido). On the basis of the identification of general features of regional socio-economic systems and the creation of regional development models forming the basis for comparative estimates, the article analyses territories that are comparable in climatic, geographic, economic, and geopolitical conditions but nevertheless differ significantly in their level of economic development. A generalization of the extensive statistical material characterizing various spheres of activity in these territories is given, including the branch structure of the economy, its infrastructure security, the demographic situation, and the budgetary and financial sphere. This allows the crucial features of the regional economic development models to be defined. In conclusion, the authors emphasize that ignoring the essential relations among regional system elements and internal and external factors deprives a study of its historical and socio-economic basis.

  13. Developing Singapore Driving Cycle for passenger cars to estimate fuel consumption and vehicular emissions

    Science.gov (United States)

    Ho, Sze-Hwee; Wong, Yiik-Diew; Chang, Victor Wei-Chung

    2014-11-01

    Singapore has pledged to attain a 7-11% reduction from Business-As-Usual carbon emissions by 2020. The road transport sector is a significant source of carbon emissions, estimated to be the third largest sector in Singapore. A current gap in environmental evaluation for road transport activities in Singapore is the lack of a representative driving cycle for passenger cars (64% of the total population of 974,170 vehicles). The Singapore Driving Cycle (SDC) was hence developed for Singapore roads and traffic conditions. A chase car (instrumented vehicle) was used to collect on-road data along 12 designed routes, with circulation driving on highly utilized arterial roads (including those in the Central Business District (CBD) and both the inner and outer ring roads fringing the CBD area). The SDC was then constructed with consideration of road type proportions, time periods, and desired distance, duration, and peak-lull proportion. In essence, the SDC is a 2400-s speed-time profile representing the driving pattern of passenger cars in Singapore. A microscopic emissions estimation model (CMEM) shows that, compared to the SDC, the New European Driving Cycle (NEDC) underestimates most of the vehicular emissions (fuel, CO2, HC and NOx by 5%, 5%, 22% and 47%, respectively) and overestimates CO by 8%. The SDC is thus more suitable than the NEDC currently in use in Singapore; the SDC can be used to generate more accurate fuel consumption and emissions ratings for various uses (for example, an inventory of vehicular emissions and fuel economy labelling).

  14. Development of estimates of dietary nitrates, nitrites, and nitrosamines for use with the Short Willet Food Frequency Questionnaire

    OpenAIRE

    Griesenbeck, John S; Steck, Michelle D; Huber, John C; Sharkey, Joseph R; Rene, Antonio A; Brender, Jean D

    2009-01-01

    Abstract Background Studies have suggested that nitrates, nitrites, and nitrosamines have an etiologic role in adverse pregnancy outcomes and chronic diseases such as cancer. Although an extensive body of literature exists on estimates of these compounds in foods, the extant data varies in quality, quantified estimates, and relevance. Methods We developed estimates of nitrates, nitrites, and nitrosamines for food items listed in the Short Willet Food Frequency Questionnaire (WFFQ) as adapted ...

  15. Aircraft parameter estimation – A tool for development of ...

    Indian Academy of Sciences (India)

    In addition, actuator performance and controller gains may be flight condition dependent. Moreover, this approach may result in open-loop parameter estimates with low accuracy. 6. Aerodynamic databases for high fidelity flight simulators. Estimation of a comprehensive aerodynamic model suitable for a flight simulator is an.

  16. Estimates and sampling schemes for the instrumentation of accountability systems

    International Nuclear Information System (INIS)

    Jewell, W.S.; Kwiatkowski, J.W.

    1976-10-01

    The problem of estimating a physical quantity from a set of measurements is considered, where the measurements are made on samples with a hierarchical error structure, and where within-group error variances may vary from group to group at each level of the structure. Minimum mean squared-error estimators are developed, and the case where the physical quantity is a random variable with known prior mean and variance is included. Estimators for the error variances are also given, and optimization of experimental design is considered.
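    The core of the minimum mean squared-error construction above is inverse-variance weighting. A simplified sketch follows, reducing the hierarchical structure to a single level of groups with known within-group variances (all numbers invented); the prior-mean case enters as one more inverse-variance-weighted term.

```python
import numpy as np

# Measurements of one physical quantity taken in groups, where each
# group has its own known error variance.
group_means = np.array([10.2, 9.8, 10.5])
group_vars = np.array([0.04, 0.09, 0.25])   # within-group error variances

# Minimum mean squared-error combination: inverse-variance weights.
w = 1.0 / group_vars
w /= w.sum()
estimate = float(w @ group_means)
est_var = 1.0 / (1.0 / group_vars).sum()    # variance of combined estimate
print(estimate, est_var)

# With a known prior mean and variance on the quantity itself, the prior
# acts as one additional inverse-variance-weighted "measurement".
prior_mean, prior_var = 10.0, 1.0
post_var = 1.0 / (1.0 / prior_var + (1.0 / group_vars).sum())
post_mean = post_var * (prior_mean / prior_var
                        + (group_means / group_vars).sum())
print(post_mean, post_var)
```

Note that the combined variance is always smaller than the best single group variance, which is the point of pooling the hierarchy.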

  17. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  18. Quasi-Newton methods for parameter estimation in functional differential equations

    Science.gov (United States)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  19. Development of technique for estimating primary cooling system break diameter in predicting nuclear emergency event sequence

    International Nuclear Information System (INIS)

    Tatebe, Yasumasa; Yoshida, Yoshitaka

    2012-01-01

    If an emergency event occurs in a nuclear power plant, appropriate action is selected and taken in accordance with the plant status, which changes from time to time, in order to prevent escalation and mitigate the event consequences. It is thus important to predict the event sequence and identify the plant behavior resulting from the action taken. In predicting the event sequence during a loss-of-coolant accident (LOCA), it is necessary to estimate break diameter. The conventional method for this estimation is time-consuming, since it involves multiple sensitivity analyses to determine the break diameter that is consistent with the plant behavior. To speed up the process of predicting the nuclear emergency event sequence, a new break diameter estimation technique that is applicable to pressurized water reactors was developed in this study. This technique enables the estimation of break diameter using the plant data sent from the safety parameter display system (SPDS), with focus on the depressurization rate in the reactor cooling system (RCS) during LOCA. The results of LOCA analysis, performed by varying the break diameter using the MAAP4 and RELAP5/MOD3.2 codes, confirmed that the RCS depressurization rate could be expressed by the log linear function of break diameter, except in the case of a small leak, in which RCS depressurization is affected by the coolant charging system and the high-pressure injection system. A correlation equation for break diameter estimation was developed from this function and tested for accuracy. Testing verified that the correlation equation could estimate break diameter accurately within an error of approximately 16%, even if the leak increases gradually, changing the plant status. (author)
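    The log-linear relationship found above — depressurization rate as a log-linear function of break diameter — can be fitted and inverted in a few lines. The calibration points below are synthetic stand-ins for the MAAP4/RELAP5 sensitivity runs; the exponent and coefficient are invented for illustration.

```python
import numpy as np

# Assumed synthetic calibration data standing in for code calculations:
# RCS depressurization rate as a log-linear function of break diameter.
diam = np.array([2.0, 5.0, 10.0, 20.0, 40.0])   # break diameter, cm
rate = 0.8 * diam ** 1.7 * (1 + np.random.default_rng(1).normal(0, 0.03, 5))

# Fit log(rate) = b * log(diam) + log(a).
b, log_a = np.polyfit(np.log(diam), np.log(rate), 1)

# Invert the correlation to estimate break diameter from an observed
# depressurization rate (e.g. taken from SPDS plant data).
observed_rate = 0.8 * 15.0 ** 1.7
est_diam = np.exp((np.log(observed_rate) - log_a) / b)
print(f"estimated diameter = {est_diam:.1f} cm")
```

With 3% scatter on the calibration points the inversion recovers the true 15 cm diameter well within the ~16% error band quoted in the abstract.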

  20. Development of an estimated food record for 9-36-month-old toddlers.

    Science.gov (United States)

    Hilbig, A; Drossard, C; Kersting, M; Alexy, U

    2014-08-01

    Adequacy of dietary intake in the sensitive period of toddler development is a key determinant of health in both a short- and long-term perspective. Therefore, studies focusing on the nutrition of toddlers are of importance, and tailored dietary record methods are an important prerequisite for this purpose. The objective of this work is to develop a toddler-specific estimated food record (EFR) in a booklet providing photographs of age-specific foods and portion sizes that is both accurate and simple. For a toddler study in Germany, a 7-day consecutive EFR was developed, with data obtained from a sample of toddlers in Germany. The basis is an evaluation of food choice and portion size in 3-day weighed food records of the DONALD (Dortmund Nutritional and Anthropometric Longitudinally Designed) study for 227 toddlers (118 boys) aged 9-36 months from January 2004 to March 2008. In the analysed food records, a total of 15,147 eating occasions with 24,820 dishes were reported and grouped into 17 food groups. To estimate portion size, the median consumption amounts of the 194 most frequently consumed dishes were calculated and photographed. Formula and commercial complementary foods are collected separately. The EFR is structured into seven eating occasions of the day: before breakfast, breakfast, mid-morning, lunch, mid-afternoon, dinner, and before bed. The EFR booklet provides a simple, feasible and validated instrument that can be used to update information on dietary habits during the transition from infant to childhood diet for families in different social classes.

  1. Methodology development for the radioecological monitoring effectiveness estimation

    International Nuclear Information System (INIS)

    Gusev, A.E.; Kozlov, A.A.; Lavrov, K.N.; Sobolev, I.A.; Tsyplyakova, T.P.

    1997-01-01

    A general model for estimating the effectiveness of programs assuring radiation and ecological protection of the public is described. The complex of purposes and criteria characterizing, and giving an opportunity to estimate, the effectiveness of the composition of environment protection programs is selected. An algorithm is considered for selecting the optimal management decision from the viewpoint of the cost of work connected with improving population protection. The position of radiation-ecological monitoring in the general problem of environmental pollution is determined. It is shown that the effectiveness of monitoring organization is closely connected with the radiation and ecological protection of the population.

  2. Probabilistic cost estimating of nuclear power plant construction projects

    International Nuclear Information System (INIS)

    Finch, W.C.; Perry, L.W.; Postula, F.D.

    1978-01-01

    This paper shows how to identify and isolate cost accounts by developing probability trees down to component levels as justified by value and cost uncertainty. Examples are given of the procedure for assessing uncertainty in all areas contributing to cost: design, factory equipment pricing, and field labor and materials. The method of combining these individual uncertainties is presented so that the cost risk can be developed for components, systems and the total plant construction project. Formats which enable management to use the probabilistic cost estimate information for business planning and risk control are illustrated. Topics considered include code estimate performance, cost allocation, uncertainty encoding, probabilistic cost distributions, and interpretation. Effective cost control of nuclear power plant construction projects requires insight into areas of greatest cost uncertainty and a knowledge of the factors which can cause costs to vary from the single value estimates. It is concluded that probabilistic cost estimating can provide the necessary assessment of uncertainties both as to the cause and the consequences

  3. Estimating water equivalent snow depth from related meteorological variables

    International Nuclear Information System (INIS)

    Steyaert, L.T.; LeDuc, S.K.; Strommen, N.D.; Nicodemus, M.L.; Guttman, N.B.

    1980-05-01

    Engineering design must take into consideration natural loads and stresses caused by meteorological elements such as wind, snow, precipitation and temperature. The purpose of this study was to determine a relationship between water equivalent snow depth measurements and meteorological variables. Several predictor models were evaluated for use in estimating water equivalent values. These models include linear regression, principal component regression, and non-linear regression models. Linear, non-linear and Scandinavian models are used to generate annual water equivalent estimates for approximately 1100 cooperative data stations where predictor variables are available but which have no water equivalent measurements. These estimates are used to develop probability estimates of snow load for each station. Map analyses for 3 probability levels are presented.

  4. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    Science.gov (United States)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  5. Consideration of Including Male Circumcision in the Indonesian HIV Prevention Strategy

    Directory of Open Access Journals (Sweden)

    IN Sutarsa

    2015-04-01

    Full Text Available Introduction: HIV/AIDS is an emerging threat to population health. Globally, 33.4 million people were estimated to be living with HIV in 2008, including 2.1 million children [1,2]. The total number of new cases was estimated to be 2.7 million people (including 430,000 children), and HIV/AIDS-related deaths were estimated to be 2.0 million in 2008 [1]. Sustainable prevention measures, followed by care, support and treatment programs, are vital to reduce the incidence and prevalence of HIV/AIDS.

  6. Estimation of Faults in DC Electrical Power System

    Science.gov (United States)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
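    The idea of recovering a sparse fault vector from a linear circuit model via ℓ1 regularization can be sketched with a generic solver. The iterative soft-thresholding (ISTA) loop below on synthetic data is one simple way to solve the ℓ1-regularized least-squares problem; it is not the authors' solver, and the model matrix and fault pattern are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear circuit model y = A x + noise, where x collects hidden fault
# states; only a few entries are nonzero (a sparse fault vector).
m, n = 60, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 42, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=m)

# ISTA for: minimize 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)              # gradient of the smooth term
    z = x - g / L                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

support = np.flatnonzero(np.abs(x) > 0.1)
print("recovered fault indices:", support)
```

The recovered support identifies which circuit elements are faulted; in a real-time setting the same convex problem is re-solved as new measurements arrive.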

  7. Development of a Trip Energy Estimation Model Using Real-World Global Positioning System Driving Data: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Holden, Jacob [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Wood, Eric W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gonder, Jeffrey D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tian, Ye [Metropia, Inc.

    2017-09-15

    A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
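    The binning approach above — categorizing driving by road type and traffic conditions, then summing distance times a per-bin energy rate — can be sketched as a lookup table. The bin definitions and kWh-per-mile rates below are invented for illustration, not taken from the NREL model.

```python
# Hypothetical per-bin energy rates, kWh per mile.
ENERGY_RATE = {
    ("highway", "free_flow"): 0.28,
    ("highway", "congested"): 0.34,
    ("arterial", "free_flow"): 0.30,
    ("arterial", "congested"): 0.40,
}

def trip_energy(segments):
    """Estimate trip energy from (road_type, traffic, miles) segments."""
    return sum(ENERGY_RATE[(road, traffic)] * miles
               for road, traffic, miles in segments)

# A proposed trip decomposed into binned segments.
trip = [("highway", "free_flow", 12.0),
        ("arterial", "congested", 3.5),
        ("arterial", "free_flow", 1.5)]
print(f"{trip_energy(trip):.2f} kWh")
```

Trip-level estimates of this form aggregate naturally to any higher-level network total, and the table can be re-tuned per driver or vehicle type as the abstract notes.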

  8. On Using Exponential Parameter Estimators with an Adaptive Controller

    Science.gov (United States)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  9. Cost estimation tools in Germany and the UK. Comparison of cost estimates and actual costs

    International Nuclear Information System (INIS)

    Pfeifer, W.; Gordelier, S.; Drake, V.

    2005-01-01

    decommission the Joint European Torus (JET) fusion facility at Culham when it has reached the end of its useful life. Total costs for this programme (including final disposal of arising wastes) are estimated at euros 14 B, almost all of which will be funded by the UK taxpayer. UKAEA has developed a systematic approach to estimating the costs of individual projects (PRICE) which has now been employed for estimates to a total value of euros 1.5 B. Costs from real projects are collected so that the system can be progressively improved by learning from real data. Benchmarking exercises with two other cost estimating systems have also been conducted. Both the German and UK experience show that: 1. Attention to considerable detail in the scope of a decommissioning task is required if a realistic cost estimate is to be produced. 2. Despite attention to such details, decommissioning projects regularly produce the unexpected. Recognition of such changes is essential in collecting data relevant to future cost estimates. Flexibility in project management arrangements is necessary to accommodate these unexpected changes. 3. Accurate cost estimation is still difficult to achieve. There is considerable scope for international collaboration in building up a database of real implementation costs and in applying cost estimation methodologies. (authors)

  10. A level playing field: Obtaining consistent cost estimates for advanced reactor designs

    International Nuclear Information System (INIS)

    Hudson, C.R. II; Rohm, H.H.; Humphreys, J.R. Jr.

    1987-01-01

    Rules and guidelines for developing cost estimates are given which provide a means for presenting cost estimates for advanced concepts on a consistent and equitable basis. For advanced reactor designs, the scope of a cost estimate includes the plant capital cost, the operating and maintenance cost, the fuel cycle cost, and the cost of decommissioning. Each element is subdivided as is necessary to provide a common reporting format for all power plant concepts. The total generation cost is taken to be a suitable choice for a summary figure of merit. To test the application of the rules and guidelines as well as developing reference costs for current technologies, several different sized coal and pressurized water reactor plant cost estimates have been prepared

  11. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  12. Canadian Estimate of Bird Mortality Due to Collisions and Direct Habitat Loss Associated with Wind Turbine Developments

    Directory of Open Access Journals (Sweden)

    J. Ryan. Zimmerling

    2013-12-01

    Full Text Available We estimated impacts on birds from the development and operation of wind turbines in Canada, considering both mortality due to collisions and loss of nesting habitat. We estimated collision mortality using data from carcass searches for 43 wind farms, incorporating correction factors for scavenger removal, searcher efficiency, and carcasses that fell beyond the area searched. On average, 8.2 ± 1.4 birds (95% C.I.) were killed per turbine per year at these sites, although the numbers at individual wind farms varied from 0 - 26.9 birds per turbine per year. Based on 2955 installed turbines (the number installed in Canada by December 2011), an estimated 23,300 birds (95% C.I. 20,000 - 28,300) would be killed from collisions with turbines each year. We estimated direct habitat loss based on data from 32 wind farms in Canada. On average, total habitat loss per turbine was 1.23 ha, which corresponds to an estimated total habitat loss due to wind farms nationwide of 3635 ha. Based on published estimates of nest density, this could represent habitat for ~5700 nests of all species. Assuming nearby habitats are saturated, and 2 adults are displaced per nest site, the effects of direct habitat loss are less than those of direct mortality. Installed wind capacity is growing rapidly and is predicted to increase more than 10-fold over the next 10-15 years, which could lead to direct mortality of approximately 233,000 birds / year, and displacement of 57,000 pairs. Despite concerns about the impacts of biased correction factors on the accuracy of mortality estimates, these values are likely much lower than those from collisions with some other anthropogenic sources such as windows, vehicles, or towers, or habitat loss due to many other forms of development. Species composition data suggest that < 0.2% of the population of any species is currently affected by mortality or displacement from wind turbine development. Therefore, population level impacts are unlikely.
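    The correction-factor arithmetic described above — scaling observed carcass counts up for scavenger removal, searcher efficiency, and the searched-area fraction — can be sketched as follows. All factor values below are invented for illustration; only the turbine count (2955) comes from the abstract.

```python
def corrected_mortality(carcasses_found, searcher_efficiency,
                        carcass_persistence, proportion_area_searched):
    """Scale raw carcass counts up by the detection probabilities."""
    return carcasses_found / (searcher_efficiency
                              * carcass_persistence
                              * proportion_area_searched)

per_turbine = corrected_mortality(
    carcasses_found=3.0,          # carcasses found per turbine per year
    searcher_efficiency=0.7,      # probability a searcher finds a carcass
    carcass_persistence=0.6,      # probability it persists until the search
    proportion_area_searched=0.85,
)
national = per_turbine * 2955     # turbines installed by December 2011
print(f"{per_turbine:.1f} birds/turbine/yr; ~{national:.0f} birds/yr")
```

With these illustrative factors, three found carcasses per turbine per year inflate to roughly 8.4 estimated deaths, close in magnitude to the 8.2 reported, which shows how sensitive the national total is to the correction factors.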

  13. Development and validation of GFR-estimating equations using diabetes, transplant and weight

    DEFF Research Database (Denmark)

    Stevens, L.A.; Schmid, C.H.; Zhang, Y.L.

    2009-01-01

    BACKGROUND: We have reported a new equation (CKD-EPI equation) that reduces bias and improves accuracy for GFR estimation compared to the MDRD study equation while using the same four basic predictor variables: creatinine, age, sex and race. Here, we describe the development and validation of this equation as well as other equations that incorporate diabetes, transplant and weight as additional predictor variables. METHODS: Linear regression was used to relate log-measured GFR (mGFR) to sex, race, diabetes, transplant, weight, and various transformations of creatinine and age with and without interactions. Equations were developed in a pooled database of 10 studies [2/3 (N = 5504) for development and 1/3 (N = 2750) for internal validation], and final model selection occurred in 16 additional studies [external validation (N = 3896)]. RESULTS: The mean mGFR was 68, 67 and 68 ml/min/1.73 m(2) ...
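    For concreteness, the family of GFR-estimating equations discussed above combines piecewise power terms in creatinine with age, sex and race factors. The sketch below implements the published 2009 CKD-EPI creatinine equation; the coefficients are quoted from the literature, not from this record, and the additional diabetes/transplant/weight variants are not shown.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black):
    """Published 2009 CKD-EPI creatinine equation (coefficients from the
    literature); returns eGFR in ml/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    # Piecewise power law in creatinine, exponential decline with age.
    gfr = (141.0
           * min(ratio, 1.0) ** alpha
           * max(ratio, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159
    return gfr

# Example: 50-year-old non-black male with serum creatinine 1.0 mg/dl.
print(round(ckd_epi_2009(1.0, 50, female=False, black=False), 1))
```

The piecewise structure (different exponents below and above the knot kappa) is what lets a single equation stay accurate across the low- and high-creatinine ranges.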

  14. Estimating natural recharge in San Gorgonio Pass watersheds, California, 1913–2012

    Science.gov (United States)

    Hevesi, Joseph A.; Christensen, Allen H.

    2015-12-21

    A daily precipitation-runoff model was developed to estimate spatially and temporally distributed recharge for groundwater basins in the San Gorgonio Pass area, southern California. The recharge estimates are needed to define transient boundary conditions for a groundwater-flow model being developed to evaluate the effects of pumping and climate on the long-term availability of groundwater. The area defined for estimating recharge is referred to as the San Gorgonio Pass watershed model (SGPWM) and includes three watersheds: San Timoteo Creek, Potrero Creek, and San Gorgonio River. The SGPWM was developed by using the U.S. Geological Survey INFILtration version 3.0 (INFILv3) model code used in previous studies of recharge in the southern California region, including the San Gorgonio Pass area. The SGPWM uses a 150-meter gridded discretization of the area of interest in order to account for spatial variability in climate and watershed characteristics. The high degree of spatial variability in climate and watershed characteristics in the San Gorgonio Pass area is caused, in part, by the high relief and rugged topography of the area.

  15. Estimating Swedish biomass energy supply

    International Nuclear Information System (INIS)

    Johansson, J.; Lundqvist, U.

    1999-01-01

    Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor in the supply of willow and reed canary grass is the size of arable land projected to be not needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase its competitiveness. Economic instruments, including carbon taxes and subsidies, and the allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy.

  16. Estimates of the financial consequences of nuclear-power-reactor accidents

    International Nuclear Information System (INIS)

    Strip, D.R.

    1982-09-01

    This report develops preliminary techniques for estimating the financial consequences of potential nuclear power reactor accidents. Offsite cost estimates are based on CRAC2 calculations. Costs are assigned to health effects as well as property damage. Onsite costs are estimated for worker health effects, replacement power, and cleanup costs. Several classes of costs are not included, such as indirect costs, socio-economic costs, and health care costs. Present value discounting is explained and then used to calculate the life cycle cost of the risks of potential reactor accidents. Results of the financial consequence estimates for 156 reactor-site combinations are summarized, and detailed estimates are provided in an appendix. The results indicate that, in general, onsite costs dominate the consequences of potential accidents
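The present-value discounting used in this kind of financial consequence estimate can be sketched in a few lines. The figures below (accident cost, discount rate, accident frequency, plant life) are invented for illustration and are not values from the report:

```python
# Present-value discounting of a potential accident cost, combined with an
# accident frequency to give an expected (risk-weighted) life cycle cost.
# All numbers are hypothetical.

def present_value(cost, rate, years):
    """Discount a future cost back to today at a constant annual rate."""
    return cost / (1.0 + rate) ** years

accident_cost = 1.0e9     # hypothetical total accident cost, dollars
discount_rate = 0.05      # hypothetical real discount rate
frequency = 1.0e-5        # hypothetical accident frequency per reactor-year

# Expected discounted cost of the risk over a 30-year plant life
expected = sum(frequency * present_value(accident_cost, discount_rate, t)
               for t in range(1, 31))
print(round(expected))
```

The sum is simply an annuity of small expected losses, each discounted to the present; a lower discount rate or a longer plant life raises the result.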

  17. Blind channel estimation for MLSE receiver in high speed optical communications: theory and ASIC implementation.

    Science.gov (United States)

    Gorshtein, Albert; Levy, Omri; Katz, Gilad; Sadot, Dan

    2013-09-23

    Blind channel estimation is critical for digital signal processing (DSP) compensation of optical fiber communication links. The overall channel consists of deterministic distortions, such as chromatic dispersion, as well as random and time-varying distortions, including polarization mode dispersion and timing jitter. It is critical to obtain robust acquisition and tracking methods for estimating these distortion effects, which, in turn, can be compensated by means of DSP such as Maximum Likelihood Sequence Estimation (MLSE). Here, a novel blind estimation algorithm is developed, accompanied by comprehensive mathematical modeling and followed by an extensive set of real-time experiments that quantitatively verify its performance and convergence. The developed blind channel estimation is used as the basis of an MLSE receiver. The entire scheme is fully implemented in a 65 nm CMOS Application Specific Integrated Circuit (ASIC). Experimental measurements and results are presented, including Bit Error Rate (BER) measurements, which demonstrate successful data recovery by the MLSE ASIC under various channel conditions and distances.

  18. [Development of weight-estimation formulae for the bedridden elderly requiring care].

    Science.gov (United States)

    Oonishi, Reiko; Fujii, Kouji; Tsuda, Hiroko; Imai, Katsumi

    2012-01-01

    Bedridden elderly persons requiring care need special body-weight measurement equipment, and weighing becomes more difficult if they live in their own homes. Therefore, we tried to design new weight-estimation formulae using various anthropometric variables. The subjects were 33 male and 132 female elderly inpatients certified to be at care level 4 or 5. The measured variables included height, body weight, arm circumference, triceps skinfold thickness, subscapular skinfold thickness, calf circumference, and waist circumference. We studied the correlation between body weight and each anthropometric variable and age. In men, the highest correlation with body weight was shown by waist circumference (r=0.891, p<…) … bedridden elderly patients requiring care.

  19. Developing a generalized allometric equation for aboveground biomass estimation

    Science.gov (United States)

    Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.

    2015-12-01

    A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE) which we used to model AGB on a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. The equations were sparse in details about the error, since authors typically provide only the coefficient of determination (R2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation. We derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. Also, we accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g., species within genus) and quantifying the contribution of each different covariate in estimating the AGB of trees. Lastly, we applied the GAE to an existing vegetation plot database - the Forest Inventory and Analysis database - to derive per-tree and per-plot AGB estimations, their errors, and how

  20. Planning and Estimation of Operations Support Requirements

    Science.gov (United States)

    Newhouse, Marilyn E.; Barley, Bryan; Bacskay, Allen; Clardy, Dennon

    2010-01-01

    Life Cycle Cost (LCC) estimates during the proposal and early design phases, as well as project replans during the development phase, are heavily focused on hardware development schedules and costs. Operations (phase E) costs are typically small compared to the spacecraft development and test costs. This, combined with the long lead time for realizing operations costs, can lead to de-emphasizing estimation of operations support requirements during proposal, early design, and replan cost exercises. The Discovery and New Frontiers (D&NF) programs comprise small, cost-capped missions supporting scientific exploration of the solar system. Any LCC growth can directly impact the programs' ability to fund new missions, and even moderate yearly underestimates of the operations costs can present significant LCC impacts for deep space missions with long operational durations. The National Aeronautics and Space Administration (NASA) D&NF Program Office at Marshall Space Flight Center (MSFC) recently studied cost overruns and schedule delays for 5 missions. The goal was to identify the underlying causes for the overruns and delays, and to develop practical mitigations to assist the D&NF projects in identifying potential risks and controlling the associated impacts to proposed mission costs and schedules. The study found that 4 out of the 5 missions studied had significant overruns at or after launch due to underestimation of the complexity and supporting requirements for operations activities; the fifth mission had not launched at the time of the study. The drivers behind these overruns include overly optimistic assumptions regarding the savings resulting from the use of heritage technology, late development of operations requirements, inadequate planning for sustaining engineering and the special requirements of long duration missions (e.g., knowledge retention and hardware/software refresh), and delayed completion of ground system development work. 
This paper updates the D

  1. 45 CFR 287.130 - Can NEW Program activities include job market assessments, job creation and economic development...

    Science.gov (United States)

    2010-10-01

    ... assessments, job creation and economic development activities? 287.130 Section 287.130 Public Welfare... creation and economic development activities? (a) A Tribe may conduct job market assessments within its NEW Program. These might include the following: (1) Consultation with the Tribe's economic development staff...

  2. Use of GIS in the estimation and development of risk reduction technology

    International Nuclear Information System (INIS)

    Ha, Jae Joo

    1998-03-01

    The probability of a severe accident occurring at a nuclear power plant is very small because the safety of the plant and the public is considered in its design and operation. However, if a severe accident does occur, establishing a strategy to reduce the resulting damages is essential because the effects on humans and the environment are very large. The important criterion that determines the severity of an accident is risk, which is defined as the product of an accident's frequency and its consequence. Establishing countermeasures to estimate and reduce risks quantitatively can be a very powerful way to minimize the effect of an accident on humans and the environment. Research on establishing a framework that integrates a geographic information system (GIS), a database management system (DBMS), and a decision making support system (DMSS) is being pursued very actively. Based on these systems, we can accomplish the estimation and display of risks and the development of reduction methodologies, which are essential parts of accident management for a nuclear power plant. The GIS helps users systematize and comprehend the spatial relationships of the information necessary for decision making. Through the DBMS, we can establish and manage spatial and attribute data, and use them in queries and selection. The DMSS is a computer-based information system that facilitates the necessary decisions. In this study, we reviewed the fundamental concepts of a GIS and examined the methodology for its use in the estimation and display of risks. We also established the fundamental GIS platform of the Yonggwang site and the database systems necessary for the estimation of risks. (author). 17 refs., 9 tabs., 34 figs

  3. Using Copulas in the Estimation of the Economic Project Value in the Mining Industry, Including Geological Variability

    Science.gov (United States)

    Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal

    2017-12-01

    Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields have been performed for the KGHM Polska Miedź S.A. underground copper mine at the Fore Sudetic Monocline under the assumption of a constant, averaged content of useful elements. The research presented in this article aims to verify the value of production from copper and silver ore, for the same economic background, using variable cash flows resulting from the local variability of useful element content. Furthermore, the economic model of the ore is examined for a significant difference between the model value estimated using a linear correlation between useful element content and mine face height, and an approach in which the correlation of model parameters is based on the copula best matched by an information capacity criterion. Using a copula allows the simulation to take multivariable dependencies into account simultaneously, giving a better reflection of the dependency structure than linear correlation can. The calculation results of the economic model used for deposit value estimation indicate that the copper-silver correlation estimated with a copula generates a greater variation in possible project value than modelling based on linear correlation. The average deposit value remains unchanged.
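The copula idea can be sketched with a Gaussian copula: correlated standard normals are mapped through the normal CDF to correlated uniforms, which can then feed any marginal grade distribution. The correlation and grade ranges below are invented, and the study itself selects the copula by an information capacity criterion rather than assuming a Gaussian one:

```python
# Sampling dependent (Cu, Ag) grades through a Gaussian copula.
# All numeric choices (rho, grade ranges) are hypothetical.
import math
import random

def gaussian_copula_pairs(n, rho, seed=1):
    """Return n (u, v) pairs on [0,1]^2 with Gaussian-copula dependence."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x1 = z1
        x2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2  # correlated normal
        # map each normal to a uniform via the standard normal CDF
        u = 0.5 * (1.0 + math.erf(x1 / math.sqrt(2.0)))
        v = 0.5 * (1.0 + math.erf(x2 / math.sqrt(2.0)))
        out.append((u, v))
    return out

pairs = gaussian_copula_pairs(5000, rho=0.7)
# Plug the uniforms into invented marginals: Cu grade in %, Ag grade in g/t
cu = [1.0 + 2.0 * u for u, _ in pairs]
ag = [20.0 + 60.0 * v for _, v in pairs]
```

Because the dependence lives entirely in the (u, v) pairs, the same copula sample can drive any pair of marginal distributions, which is exactly what makes copulas more flexible than a single linear correlation coefficient.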

  4. Engineer’s estimate reliability and statistical characteristics of bids

    Directory of Open Access Journals (Sweden)

    Fariborz M. Tehrani

    2016-12-01

    Full Text Available The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer’s estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects in addition to two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for the sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer’s estimate is best evaluated against a trimmed average of bids, which excludes discordant bids.
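The trimmed average in the conclusion is straightforward to compute: sort the bids, drop a fixed fraction from each tail (the potential discordant bids), and average the rest. The bid-to-estimate ratios below are invented for illustration:

```python
# Trimmed mean of bids, dropping a fraction `trim` from each tail.

def trimmed_mean(bids, trim=0.2):
    """Average the middle of the sorted bids, discarding the extremes."""
    bids = sorted(bids)
    k = int(len(bids) * trim)          # number dropped from each end
    kept = bids[k:len(bids) - k] if k else bids
    return sum(kept) / len(kept)

# Hypothetical ratios of bid to engineer's estimate; 1.60 is a discordant bid
bids = [0.92, 0.97, 1.00, 1.03, 1.05, 1.60]
print(round(trimmed_mean(bids), 3))
```

Here the plain mean is pulled up by the outlying 1.60 ratio, while the trimmed mean stays near 1.0, which is why the trimmed average is the better yardstick for the engineer's estimate.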

  5. Regression model development and computational procedures to support estimation of real-time concentrations and loads of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-9

    Science.gov (United States)

    Lee, Michael T.; Asquith, William H.; Oden, Timothy D.

    2012-01-01

    In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged

  6. Hydrogen Station Cost Estimates: Comparing Hydrogen Station Cost Calculator Results with other Recent Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Melaina, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Penev, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-09-01

    This report compares hydrogen station cost estimates conveyed by expert stakeholders through the Hydrogen Station Cost Calculation (HSCC) to a select number of other cost estimates. These other cost estimates include projections based upon cost models and costs associated with recently funded stations.

  7. An open source framework for tracking and state estimation ('Stone Soup')

    Science.gov (United States)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains, both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulty recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of tracking and state estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces, and access to challenging data sets.

  8. Development of predictive models for estimating warfarin maintenance dose based on genetic and clinical factors.

    Science.gov (United States)

    Yang, Lu; Linder, Mark W

    2013-01-01

    In this chapter, we use calculation of the estimated warfarin maintenance dose as an example to illustrate how to develop a multiple linear regression model that quantifies the relationship between several independent variables (e.g., patients' genotype information) and a dependent variable (e.g., a measurable clinical outcome).
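The kind of multiple linear regression model described here can be fitted by ordinary least squares via the normal equations. The sketch below uses invented dose data and a hypothetical genotype score covariate; it is not a validated dosing algorithm:

```python
# Ordinary least squares for dose ~ intercept + genotype score + age.
# All data are invented for illustration only.

def fit_linear(X, y):
    """Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination."""
    n, p = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for c in range(p):
        pivot = xtx[c][c]
        for b in range(c, p):
            xtx[c][b] /= pivot
        xty[c] /= pivot
        for r in range(p):
            if r != c:
                f = xtx[r][c]
                for b in range(c, p):
                    xtx[r][b] -= f * xtx[c][b]
                xty[r] -= f * xty[c]
    return xty  # now holds the coefficient vector b

# Columns: intercept, hypothetical genotype score (0/1/2), age in years
X = [[1, 0, 60], [1, 1, 45], [1, 2, 70], [1, 0, 50], [1, 1, 65], [1, 2, 55]]
y = [35, 28, 18, 38, 24, 22]    # invented weekly maintenance doses, mg
coef = fit_linear(X, y)
print([round(c, 2) for c in coef])
```

The fitted genotype coefficient comes out negative on these invented data, mirroring the clinical intuition that variant alleles reduce the required maintenance dose.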

  9. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)

    2007-07-20

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
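The two quantities the abstract compares, characteristic path length and clustering coefficient, can be reproduced for the original Watts-Strogatz construction with the standard library alone. This is a minimal sketch of the baseline model (ring lattice plus random rewiring), not the authors' degree-distribution extension:

```python
# Original Watts-Strogatz model: ring lattice with probabilistic rewiring,
# plus the two small-world statistics discussed in the abstract.
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """n nodes, each linked to its k nearest ring neighbours (k even);
    each 'rightward' edge is rewired with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                new = rng.randrange(n)
                while new == i or new in adj[i]:
                    new = rng.randrange(n)
                adj[i].discard(old); adj[old].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs, via BFS."""
    total = cnt = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        cnt += len(dist) - 1
    return total / cnt

lattice = watts_strogatz(100, 4, 0.0)
small_world = watts_strogatz(100, 4, 0.1)
print(round(clustering(lattice), 3))   # 0.5 for a k=4 ring lattice
print(round(avg_path_length(lattice), 2), round(avg_path_length(small_world), 2))
```

A little rewiring leaves clustering high but collapses the path length, which is the small-world property the developed model aims to match more closely against real networks.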

  10. The Watts-Strogatz network model developed by including degree distribution: theory and computer simulation

    International Nuclear Information System (INIS)

    Chen, Y W; Zhang, L F; Huang, J P

    2007-01-01

    By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property

  11. Development of calculation method for one-dimensional kinetic analysis in fission reactors, including feedback effects

    International Nuclear Information System (INIS)

    Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.

    1986-01-01

    The calculation method used in the WIGLE code is studied. Because no detailed published account of the method was available, we have tried to expound it in detail. The method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry for axial analysis, including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN IV, has been implemented, providing excellent results that confirm the quality of the work developed. (Author) [pt

  12. Applicability of bioanalysis of multiple analytes in drug discovery and development: review of select case studies including assay development considerations.

    Science.gov (United States)

    Srinivas, Nuggehally R

    2006-05-01

    The development of sound bioanalytical method(s) is of paramount importance during the process of drug discovery and development culminating in a marketing approval. Although the bioanalytical procedure(s) originally developed during the discovery stage may not necessarily be fit to support the drug development scenario, they may be suitably modified and validated, as deemed necessary. Several reviews have appeared over the years describing analytical approaches including various techniques, detection systems, automation tools that are available for an effective separation, enhanced selectivity and sensitivity for quantitation of many analytes. The intention of this review is to cover various key areas where analytical method development becomes necessary during different stages of drug discovery research and development process. The key areas covered in this article with relevant case studies include: (a) simultaneous assay for parent compound and metabolites that are purported to display pharmacological activity; (b) bioanalytical procedures for determination of multiple drugs in combating a disease; (c) analytical measurement of chirality aspects in the pharmacokinetics, metabolism and biotransformation investigations; (d) drug monitoring for therapeutic benefits and/or occupational hazard; (e) analysis of drugs from complex and/or less frequently used matrices; (f) analytical determination during in vitro experiments (metabolism and permeability related) and in situ intestinal perfusion experiments; (g) determination of a major metabolite as a surrogate for the parent molecule; (h) analytical approaches for universal determination of CYP450 probe substrates and metabolites; (i) analytical applicability to prodrug evaluations-simultaneous determination of prodrug, parent and metabolites; (j) quantitative determination of parent compound and/or phase II metabolite(s) via direct or indirect approaches; (k) applicability in analysis of multiple compounds in 
select

  13. Approximation to estimation of critical state

    International Nuclear Information System (INIS)

    Orso, Jose A.; Rosario, Universidad Nacional

    2011-01-01

    The position of the control rod for the critical state of a nuclear reactor depends on several factors, including, but not limited to, the temperature and the configuration of the fuel elements inside the core. Therefore, the position cannot be known in advance. In this paper, theoretical estimates are developed to obtain an equation for calculating the control rod position at the critical state (approach to critical) of the RA-4 nuclear reactor; this equation will be used to create software that performs the estimation from the count rate of the reactor pulse channel and the length of control rod withdrawn (in cm). For the final estimate of the approach to critical, an experimentally obtained function giving the control rod reactivity as a function of rod position is used; this is manipulated mathematically to obtain a linear function that yields the length of control rod that must be withdrawn to bring the reactor to the critical state. (author) [es
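The procedure described is closely related to the classic inverse-multiplication (1/M) extrapolation used in approach-to-critical experiments: as the rod is withdrawn, the inverse of the normalized count rate falls roughly linearly, and the zero crossing of a fitted line predicts the critical position. A minimal sketch with invented count rates (not RA-4 data):

```python
# Inverse-multiplication (1/M) extrapolation to the critical rod position.
# Rod positions and pulse-channel count rates are invented.

positions = [0, 10, 20, 30]              # cm of rod withdrawn
counts = [100.0, 125.0, 167.0, 250.0]    # count rate at each position

inv_m = [counts[0] / c for c in counts]  # 1/M, normalized to the first point

# Least-squares straight line: 1/M = a + b * z
n = len(positions)
sx = sum(positions)
sy = sum(inv_m)
sxx = sum(z * z for z in positions)
sxy = sum(z * m for z, m in zip(positions, inv_m))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

critical_position = -a / b   # rod withdrawal where the fitted 1/M reaches 0
print(round(critical_position, 1))
```

In practice the extrapolation is repeated after each rod step, so the predicted critical position is refined as the count rate grows.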

  14. Process monitor design for an extraction column: an application of estimation/detection

    International Nuclear Information System (INIS)

    Candy, J.V.; Emmert, R.A.; Patterson, G.K.

    1979-03-01

    The NRC Safeguards Program at LLL is directed toward developing a methodology for assessing the effectiveness of material control and accounting systems at processing/reprocessing facilities for special nuclear material. The methodology under development requires many types of mathematical models including performance models of safeguard components. Included in the class of safeguard components are real-time measurement systems which incorporate on-line estimators/detectors for the timely detection of material losses. Performance modeling generally involves mathematical model development and simulation of the physical process being measured. This report discusses the development of material estimator designs for a liquid--liquid extraction column using a reprocessing application. These designs are applicable to any processing unit which can be adequately represented by linear or nonlinear models in state space form. Although this work is discussed in the context of a plutonium extraction column, it is representative of two classes of safeguard components which are generic to any fuel cycle involving chemical separations/purifications

  15. Developing a weighting strategy to include mobile phone numbers into an ongoing population health survey using an overlapping dual-frame design with limited benchmark information.

    Science.gov (United States)

    Barr, Margo L; Ferguson, Raymond A; Hughes, Phil J; Steel, David G

    2014-09-04

    In 2012 mobile phone numbers were included into the ongoing New South Wales Population Health Survey (NSWPHS) using an overlapping dual-frame design. Previously in the NSWPHS the sample was selected using random digit dialing (RDD) of landline phone numbers. The survey was undertaken using computer assisted telephone interviewing (CATI). The weighting strategy needed to be significantly expanded to manage the differing probabilities of selection by frame, including that of children of mobile-only phone users, and to adjust for the increased chance of selection of dual-phone users. This paper describes the development of the final weighting strategy to properly combine the data from two overlapping sample frames accounting for the fact that population benchmarks for the different sampling frames were not available at the state or regional level. Estimates of the number of phone numbers for the landline and mobile phone frames used to calculate the differing probabilities of selection by frame, for New South Wales (NSW) and by stratum, were obtained by apportioning Australian estimates as none were available for NSW. The weighting strategy was then developed by calculating person selection probabilities, selection weights, applying a constant composite factor to the dual-phone users sample weights, and benchmarking to the latest NSW population by age group, sex and stratum. Data from the NSWPHS for the first quarter of 2012 was used to test the weighting strategy. This consisted of data on 3395 respondents with 2171 (64%) from the landline frame and 1224 (36%) from the mobile frame. However, in order to calculate the weights, data needed to be available for all core weighting variables and so 3378 respondents, 2933 adults and 445 children, had sufficient data to be included. Average person weights were 3.3 times higher for the mobile-only respondents, 1.3 times higher for the landline-only respondents and 1.7 times higher for dual-phone users in the mobile frame
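The core of the weighting strategy, adjusting dual-phone users for their increased chance of selection via a constant composite factor, can be sketched with invented selection probabilities (the real strategy also benchmarks to population counts by age, sex, and stratum):

```python
# Composite design weights for an overlapping dual-frame phone survey.
# Selection probabilities and the composite factor are hypothetical.

P_LAND = 0.002   # chance a landline-owning person is sampled (invented)
P_MOB = 0.001    # chance a mobile-owning person is sampled (invented)
THETA = 0.5      # composite factor splitting dual users between frames

def design_weight(frame, status):
    """frame: 'landline' or 'mobile'; status: 'only' or 'dual'."""
    p = P_LAND if frame == "landline" else P_MOB
    if status == "only":
        return 1.0 / p               # reachable through one frame only
    factor = THETA if frame == "landline" else 1.0 - THETA
    return factor / p                # dual users: down-weight the overlap

# Sanity check: over both frames, N dual-phone users should contribute an
# expected total weight of N, so the overlap is not double counted.
N = 10000
expected = (N * P_LAND * design_weight("landline", "dual")
            + N * P_MOB * design_weight("mobile", "dual"))
print(expected)   # 10000.0
```

The check shows why the composite factor works: whatever value of θ is chosen, the two frames' contributions for dual users sum to exactly the dual-user population in expectation.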

  16. Development of Reference Equations of State for Refrigerant Mixtures Including Hydrocarbons

    Science.gov (United States)

    Miyamoto, Hiroyuki; Watanabe, Koichi

    In recent years, the most accurate equations of state for alternative refrigerants and their mixtures have become easily accessible through convenient software packages, e.g., REFPROP. In the present paper, we describe the current state-of-the-art equations of state for refrigerant mixtures that include hydrocarbons as components. Throughout our discussion, the limitations of the available experimental data, and the need for improvement over the arbitrary fitting used in recent modeling, are confirmed. A sufficient number of reliable experimental data, especially for properties at higher pressures and temperatures and for derived properties, should be accumulated in the near future to support the development of a physically sound theoretical background. The present review discusses the prospects for progress in future thermodynamic property modeling through a detailed discussion of several types of equations of state as well as recent innovative measurement techniques.

  17. Occupant traffic estimation through structural vibration sensing

    Science.gov (United States)

    Pan, Shijia; Mirshekari, Mostafa; Zhang, Pei; Noh, Hae Young

    2016-04-01

    The number of people passing through different indoor areas is useful in various smart structure applications, including occupancy-based building energy/space management, marketing research, security, etc. Existing approaches to estimating occupant traffic include vision-, sound-, and radio-based (mobile) sensing methods, which have placement limitations (e.g., requirement of line-of-sight, a quiet environment, or carrying a device at all times). Such limitations make these direct sensing approaches difficult to deploy and maintain. An indirect approach using geophones to measure floor vibration induced by footsteps can be utilized instead. However, the main challenge lies in distinguishing multiple simultaneous walkers, which requires developing features that effectively represent the number of mixed signals and characterizing those features under different traffic conditions. This paper presents a method to monitor multiple persons. Once the vibration signals are obtained, features are extracted to describe the overlapping vibration signals induced by multiple footsteps, and these features are used for occupant traffic estimation. In particular, we focus on analyzing the efficiency and limitations of four selected key features when used for estimating various traffic conditions. We characterize these features with signals collected from controlled impulse load tests as well as from multiple people walking through a real-world sensing area. In our experiments, the system achieves a mean estimation error of ±0.2 people across different occupant traffic conditions (from one to four walkers) using a k-nearest neighbor classifier.
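The final classification step can be illustrated with a minimal k-nearest-neighbor sketch. The feature vectors (e.g., a signal-energy measure and a peak count per time window) and their occupant-count labels below are invented, not the paper's features:

```python
# k-nearest-neighbor estimation of occupant count from vibration features.
# Training examples are invented (feature_vector, occupant_count) pairs.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority occupant count among the k nearest training points
    (Euclidean distance in feature space)."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical features: (signal energy, footstep-peak count) per window
train = [((1.0, 3), 1), ((1.2, 4), 1), ((2.1, 7), 2), ((2.3, 8), 2),
         ((3.2, 11), 3), ((3.0, 12), 3)]

print(knn_predict(train, (1.1, 3.5)))    # 1
print(knn_predict(train, (3.1, 11.5)))   # 3
```

With features that grow roughly monotonically with the number of walkers, nearby windows in feature space tend to share the same occupant count, which is why a simple kNN classifier works well here.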

  18. Estimating post-marketing exposure to pharmaceutical products using ex-factory distribution data.

    Science.gov (United States)

    Telfair, Tamara; Mohan, Aparna K; Shahani, Shalini; Klincewicz, Stephen; Atsma, Willem Jan; Thomas, Adrian; Fife, Daniel

    2006-10-01

    The pharmaceutical industry has an obligation to identify adverse reactions to drug products during all phases of drug development, including the post-marketing period. Estimates of population exposure to pharmaceutical products are important to the post-marketing surveillance of drugs, and provide a context for assessing the various risks and benefits, including drug safety, associated with drug treatment. This paper describes a systematic approach to estimating post-marketing drug exposure using ex-factory shipment data to estimate the quantity of medication available, and dosage information (stratified by indication or other factors as appropriate) to convert the quantity of medication to person time of exposure. Unlike the non-standardized methods often used to estimate exposure, this approach provides estimates whose calculations are explicit, documented, and consistent across products and over time. The methods can readily be carried out by an individual or small group specializing in this function, and lend themselves to automation. The present estimation approach is practical and relatively uncomplicated to implement. We believe it is a useful innovation. Copyright 2006 John Wiley & Sons, Ltd.
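
    The core conversion described above (shipped quantity of medication into person-time of exposure, stratified by indication) is simple arithmetic. The product strata, doses, and shipment figures below are hypothetical; only the calculation pattern follows the record.

```python
def person_years_of_exposure(units_distributed, mg_per_unit,
                             daily_dose_mg, days_per_year=365.25):
    """Convert ex-factory shipment quantities into person-years of
    exposure: total drug mass / (typical daily dose * days per year)."""
    total_mg = units_distributed * mg_per_unit
    return total_mg / (daily_dose_mg * days_per_year)

# Hypothetical product shipped for two indications with different
# typical daily doses; estimate each stratum, then sum.
strata = [
    {"units": 1_000_000, "mg_per_unit": 50, "daily_dose_mg": 100},  # indication A
    {"units": 250_000, "mg_per_unit": 50, "daily_dose_mg": 50},     # indication B
]
total = sum(
    person_years_of_exposure(s["units"], s["mg_per_unit"], s["daily_dose_mg"])
    for s in strata
)
print(round(total, 1))  # total estimated person-years of exposure
```

    Because every input and every step is explicit, the estimate is documented and reproducible across products and over time, which is the record's main point.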

  19. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Resource selection functions (RSFs) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMMs) and generalized estimating equations (GEEs) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
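
    The statistical idea the record leans on is the empirical ("sandwich") standard error, which sums residuals within each individual before estimating variance, so within-animal correlation is not ignored. A minimal numpy sketch for the simplest possible case, the mean of clustered data, is below; the simulated animals and relocation counts are invented and this is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 20 animals with 50 relocations each; observations within an
# animal share a random effect, so they are positively correlated.
n_animals, n_obs = 20, 50
animal_effect = rng.normal(0.0, 1.0, n_animals)
y = animal_effect[:, None] + rng.normal(0.0, 1.0, (n_animals, n_obs))

mean = y.mean()
n = y.size

# Naive model-based SE treats all 1000 relocations as independent.
se_naive = y.std(ddof=1) / np.sqrt(n)

# Empirical (sandwich) SE sums residuals within each cluster first, so
# within-animal correlation inflates the variance estimate, as it should.
cluster_sums = (y - mean).sum(axis=1)
se_robust = np.sqrt((cluster_sums ** 2).sum()) / n

print(se_naive < se_robust)  # positive within-cluster correlation inflates the SE
```

    In a full GEE fit of an RSF, the same sandwich construction is applied to the regression score contributions of each animal rather than to raw residuals.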

  20. Children's estimates of food portion size: the development and evaluation of three portion size assessment tools for use with children.

    Science.gov (United States)

    Foster, E; Matthews, J N S; Lloyd, J; Marshall, L; Mathers, J C; Nelson, M; Barton, K L; Wrieden, W L; Cornelissen, P; Harris, J; Adamson, A J

    2008-01-01

    A number of methods have been developed to assist subjects in providing an estimate of portion size, but their application in improving portion size estimation by children has not been investigated systematically. The aim was to develop portion size assessment tools for use with children and to assess the accuracy of children's estimates of portion size using the tools. The tools were food photographs, food models and an interactive portion size assessment system (IPSAS). Children (n = 201), aged 4-16 years, were supplied with known quantities of food to eat, in school. Food leftovers were weighed. Children estimated the amount of each food using each tool, 24 h after consuming the food. The age-specific portion sizes represented were based on portion sizes consumed by children in a national survey. Significant differences were found between the accuracy of estimates made using the three tools. Children of all ages performed well using the IPSAS and food photographs. The accuracy and precision of estimates made using the food models were poor. For all tools, estimates of the amount of food served were more accurate than estimates of the amount consumed. Issues relating to the reporting of leftover food, which impact estimates of the amounts of food actually consumed, require further study. The IPSAS has shown potential for the assessment of dietary intake with children. Before practical application in the assessment of children's dietary intake, the tool would need to be expanded to cover a wider range of foods and to be validated in a 'real-life' situation.

  1. Uncertainty Evaluation of Best Estimate Calculation Results

    International Nuclear Information System (INIS)

    Glaeser, H.

    2006-01-01

    Efforts are underway in Germany to perform analyses using best estimate computer codes and to include uncertainty evaluation in licensing. The German Reactor Safety Commission (RSK) recently issued a recommendation to perform uncertainty analysis in loss-of-coolant accident (LOCA) safety analyses. A more general requirement is included in a draft revision of the German Nuclear Regulation, an activity of the German Ministry of Environment and Reactor Safety (BMU). According to the RSK recommendation, the following deterministic requirements still have to be applied in licensing safety analyses for LOCA: most unfavourable single failure; unavailability due to preventive maintenance; break location; break size and break type; double-ended break, 100 percent through 200 percent; large, medium and small break; loss of off-site power; core power (at accident initiation the most unfavourable conditions and values have to be assumed which may occur under normal operation, taking into account the set-points of integral power and power density control; measurement and calibration errors can be considered statistically); and time of fuel cycle. Analysis using best estimate codes with evaluation of uncertainties is the only way to quantify conservatisms with regard to code models and uncertainties of plant, fuel parameters and decay heat. This is especially the case when approaching licensing limits, e.g. due to power up-rates, higher burn-up and higher enrichment. Broader use of best estimate analysis is therefore envisaged in the future. Since some deterministic unfavourable assumptions regarding the availability of NPP systems are still used, some conservatism in best-estimate analyses remains. Methods of uncertainty analysis have been developed and applied by the vendor Framatome ANP as well as by GRS in Germany. The GRS development was sponsored by the German Ministry of Economy and Labour (BMWA). (author)

  2. Earth observations for estimating greenhouse gas emissions from deforestation in developing countries

    International Nuclear Information System (INIS)

    DeFries, Ruth; Achard, Frederic; Brown, Sandra; Herold, Martin; Murdiyarso, Daniel; Schlamadinger, Bernhard; Souza, Carlos de

    2007-01-01

    In response to the United Nations Framework Convention on Climate Change (UNFCCC) process investigating the technical issues surrounding the ability to reduce greenhouse gas (GHG) emissions from deforestation in developing countries, this paper reviews technical capabilities for monitoring deforestation and estimating emissions. Implementation of policies to reduce emissions from deforestation requires effective deforestation monitoring systems that are reproducible, provide consistent results, meet standards for mapping accuracy, and can be implemented at the national level. Remotely sensed data supported by ground observations are key to effective monitoring. Capacity in developing countries for deforestation monitoring is well-advanced in a few countries and is a feasible goal in most others. Data sources exist to determine base periods in the 1990s as historical reference points. Forest degradation (e.g. from high-impact logging and fragmentation) also contributes to greenhouse gas emissions but is more technically challenging to measure than deforestation. Data on carbon stocks, which are needed to estimate emissions, cannot currently be observed directly over large areas with remote sensing. Guidelines for carbon accounting from deforestation exist, are available in approved Intergovernmental Panel on Climate Change (IPCC) reports, and can be applied at national scales in the absence of forest inventory or other data. Key constraints for implementing programs to monitor greenhouse gas emissions from deforestation are international commitment of resources to increase capacity, coordination of observations to ensure pan-tropical coverage, access to free or low-cost data, and standard and consensual protocols for data interpretation and analysis.

  3. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    Science.gov (United States)

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2014-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between the ages of 36 and 60 months of age. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory…

  4. Developing in situ non-destructive estimates of crop biomass to address issues of scale in remote sensing

    Science.gov (United States)

    Marshall, Michael T.; Thenkabail, Prasad S.

    2015-01-01

    Ground-based estimates of aboveground wet (fresh) biomass (AWB) are an important input for crop growth models. In this study, we developed empirical equations of AWB for rice, maize, cotton, and alfalfa, by combining several in situ non-spectral and spectral predictors. The non-spectral predictors included: crop height (H), fraction of absorbed photosynthetically active radiation (FAPAR), leaf area index (LAI), and fraction of vegetation cover (FVC). The spectral predictors included 196 hyperspectral narrowbands (HNBs) from 350 to 2500 nm. The models for rice, maize, cotton, and alfalfa included H and HNBs in the near infrared (NIR); H, FAPAR, and HNBs in the NIR; H and HNBs in the visible and NIR; and FVC and HNBs in the visible; respectively. In each case, the non-spectral predictors were the most important, while the HNBs were additional statistically significant predictors, though they explained less variance. The final models selected for validation yielded an R² of 0.84, 0.59, 0.91, and 0.86 for rice, maize, cotton, and alfalfa, which, when compared to models using HNBs alone from a previous study using the same spectral data, explained an additional 12%, 29%, 14%, and 6% of AWB variance. These integrated models will be used in an upcoming study to extrapolate AWB over 60 × 60 m transects to evaluate spaceborne multispectral broad bands and hyperspectral narrowbands.

  5. Developing in situ Non-Destructive Estimates of Crop Biomass to Address Issues of Scale in Remote Sensing

    Directory of Open Access Journals (Sweden)

    Michael Marshall

    2015-01-01

    Ground-based estimates of aboveground wet (fresh) biomass (AWB) are an important input for crop growth models. In this study, we developed empirical equations of AWB for rice, maize, cotton, and alfalfa, by combining several in situ non-spectral and spectral predictors. The non-spectral predictors included: crop height (H), fraction of absorbed photosynthetically active radiation (FAPAR), leaf area index (LAI), and fraction of vegetation cover (FVC). The spectral predictors included 196 hyperspectral narrowbands (HNBs) from 350 to 2500 nm. The models for rice, maize, cotton, and alfalfa included H and HNBs in the near infrared (NIR); H, FAPAR, and HNBs in the NIR; H and HNBs in the visible and NIR; and FVC and HNBs in the visible; respectively. In each case, the non-spectral predictors were the most important, while the HNBs were additional statistically significant predictors, though they explained less variance. The final models selected for validation yielded an R² of 0.84, 0.59, 0.91, and 0.86 for rice, maize, cotton, and alfalfa, which, when compared to models using HNBs alone from a previous study using the same spectral data, explained an additional 12%, 29%, 14%, and 6% of AWB variance. These integrated models will be used in an upcoming study to extrapolate AWB over 60 × 60 m transects to evaluate spaceborne multispectral broad bands and hyperspectral narrowbands.

  6. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils, based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system that assigns a score to a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to a potential groundwater receptor, and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach uses computer models to back-calculate the soil contaminant levels in the vadose zone that would create a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels, since the models use the soil clean-up levels as "input" and the groundwater levels as "output." The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington.
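
    The matrix logic described above (score each site parameter, sum the scores, look up a clean-up level in a table) reduces to a pair of table lookups. Every band, score, and clean-up level below is invented for illustration; the actual Washington State matrix values are not reproduced here.

```python
# Hypothetical scoring bands: (upper bound, score) pairs checked in order.
SITE_BANDS = {
    "depth_to_groundwater_m": [(3, 5), (10, 3), (float("inf"), 1)],   # shallower = riskier
    "annual_precipitation_mm": [(500, 1), (1000, 3), (float("inf"), 5)],  # wetter = riskier
    "distance_to_receptor_m": [(100, 5), (500, 3), (float("inf"), 1)],
}

# Total score -> benzene clean-up level in mg/kg (illustrative values only).
CLEANUP_TABLE = [(6, 0.5), (10, 0.1), (float("inf"), 0.03)]

def band_score(value, bands):
    """Return the score of the first band whose upper bound the value does not exceed."""
    for upper, score in bands:
        if value <= upper:
            return score

def cleanup_level(site):
    """Sum the parameter scores and map the total to a clean-up level."""
    total = sum(band_score(site[p], bands) for p, bands in SITE_BANDS.items())
    for upper, level in CLEANUP_TABLE:
        if total <= upper:
            return total, level

site = {"depth_to_groundwater_m": 8,
        "annual_precipitation_mm": 1200,
        "distance_to_receptor_m": 50}
print(cleanup_level(site))  # (13, 0.03): high total score -> most stringent level
```

    In the actual approach, the levels in the lookup table come from iterating MULTIMED/VLEACH runs until the predicted groundwater concentration matches the MTCA value, not from invented numbers like these.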

  7. Estimating Free and Added Sugar Intakes in New Zealand

    Directory of Open Access Journals (Sweden)

    Rachael Kibblewhite

    2017-11-01

    The reduction of free or added sugar intake (sugars added to food and drinks as a sweetener) is almost universally recommended to reduce the risk of obesity-related diseases and dental caries. The World Health Organisation recommends intakes of free sugars of less than 10% of energy intake. However, estimating and monitoring intakes at the population level is challenging because free sugars cannot be analytically distinguished from naturally occurring sugars and most national food composition databases do not include data on free or added sugars. We developed free and added sugar estimates for the New Zealand (NZ) food composition database (FOODfiles 2010) by adapting a method developed for Australia. We reanalyzed the 24 h recall dietary data collected for 4721 adults aged 15 years and over participating in the nationally representative 2008/09 New Zealand Adult Nutrition Survey to estimate free and added sugar intakes. The median estimated intake of free and added sugars was 57 and 49 g/day respectively and 42% of adults consumed less than 10% of their energy intake from free sugars. This approach provides more direct estimates of the free and added sugar contents of New Zealand foods than previously available and will enable monitoring of adherence to free sugar intake guidelines in future.

  8. Estimating Free and Added Sugar Intakes in New Zealand.

    Science.gov (United States)

    Kibblewhite, Rachael; Nettleton, Alice; McLean, Rachael; Haszard, Jillian; Fleming, Elizabeth; Kruimer, Devonia; Te Morenga, Lisa

    2017-11-27

    The reduction of free or added sugar intake (sugars added to food and drinks as a sweetener) is almost universally recommended to reduce the risk of obesity-related diseases and dental caries. The World Health Organisation recommends intakes of free sugars of less than 10% of energy intake. However, estimating and monitoring intakes at the population level is challenging because free sugars cannot be analytically distinguished from naturally occurring sugars and most national food composition databases do not include data on free or added sugars. We developed free and added sugar estimates for the New Zealand (NZ) food composition database (FOODfiles 2010) by adapting a method developed for Australia. We reanalyzed the 24 h recall dietary data collected for 4721 adults aged 15 years and over participating in the nationally representative 2008/09 New Zealand Adult Nutrition Survey to estimate free and added sugar intakes. The median estimated intake of free and added sugars was 57 and 49 g/day respectively and 42% of adults consumed less than 10% of their energy intake from free sugars. This approach provides more direct estimates of the free and added sugar contents of New Zealand foods than previously available and will enable monitoring of adherence to free sugar intake guidelines in future.
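
    Once per-food free sugar values exist, checking adherence to the WHO guideline is a percent-of-energy calculation per respondent. The sketch below uses the standard 17 kJ/g carbohydrate energy factor and invented recall records; it is not the survey's actual analysis code.

```python
KJ_PER_G_SUGAR = 17  # Atwater energy factor for carbohydrate, kJ/g

def pct_energy_from_free_sugars(free_sugar_g, energy_kj):
    """Share of total energy intake contributed by free sugars, in percent."""
    return 100 * free_sugar_g * KJ_PER_G_SUGAR / energy_kj

# Hypothetical 24 h recalls: (free sugar g/day, total energy kJ/day).
recalls = [(57, 9000), (30, 8000), (95, 10500), (49, 8800)]
shares = [pct_energy_from_free_sugars(g, e) for g, e in recalls]

# Fraction of respondents meeting the WHO <10% of energy recommendation.
meets_who = sum(s < 10 for s in shares) / len(shares)
print([round(s, 1) for s in shares], meets_who)
```

    Applied over all 4721 survey respondents, this yields population-level adherence figures like the 42% reported above.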

  9. Estimating Free and Added Sugar Intakes in New Zealand

    Science.gov (United States)

    Kibblewhite, Rachael; Nettleton, Alice; McLean, Rachael; Haszard, Jillian; Fleming, Elizabeth; Kruimer, Devonia

    2017-01-01

    The reduction of free or added sugar intake (sugars added to food and drinks as a sweetener) is almost universally recommended to reduce the risk of obesity-related diseases and dental caries. The World Health Organisation recommends intakes of free sugars of less than 10% of energy intake. However, estimating and monitoring intakes at the population level is challenging because free sugars cannot be analytically distinguished from naturally occurring sugars and most national food composition databases do not include data on free or added sugars. We developed free and added sugar estimates for the New Zealand (NZ) food composition database (FOODfiles 2010) by adapting a method developed for Australia. We reanalyzed the 24 h recall dietary data collected for 4721 adults aged 15 years and over participating in the nationally representative 2008/09 New Zealand Adult Nutrition Survey to estimate free and added sugar intakes. The median estimated intake of free and added sugars was 57 and 49 g/day respectively and 42% of adults consumed less than 10% of their energy intake from free sugars. This approach provides more direct estimates of the free and added sugar contents of New Zealand foods than previously available and will enable monitoring of adherence to free sugar intake guidelines in future. PMID:29186927

  10. Development of a new method for estimating visceral fat area with multi-frequency bioelectrical impedance

    International Nuclear Information System (INIS)

    Nagai, Masato; Komiya, Hideaki; Mori, Yutaka; Ohta, Teruo; Kasahara, Yasuhiro; Ikeda, Yoshio

    2008-01-01

    Excessive visceral fat area (VFA) is a major risk factor for conditions such as cardiovascular disease. In assessing VFA, computed tomography (CT) is the gold standard; however, this method is cost intensive and involves radiation exposure. In contrast, the bioelectrical impedance (BI) method for estimating body composition is simple and noninvasive, and its potential application in VFA assessment is therefore being studied. To overcome differences in obtained impedance due to measurement conditions, we developed a more precise estimation method by selecting the optimum body posture, electrode arrangement, and frequency. The subjects were 73 healthy volunteers, 37 men and 36 women, who underwent CT scans to assess VFA and who were measured for anthropometry parameters, subcutaneous fat layer thickness, abdominal tissue area, and impedance. Impedance was measured by the tetrapolar impedance method using multi-frequency BI. Multiple regression analysis was conducted to estimate VFA. The results revealed a strong correlation between VFA observed by CT and VFA estimated by impedance (r = 0.920). The regression equation accurately classified VFA ≥ 100 cm² in 13 out of 14 men and 1 of 1 woman. Moreover, it correctly classified VFA as ≥ 100 cm² or < 100 cm² in 3 out of 4 men and 1 of 1 woman misclassified by waist circumference (W), which was adopted as a simple index for evaluating VFA. This simple and convenient BI-based estimation method therefore provides an accurate assessment of VFA. (author)

  11. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in existing fuel supplies, and the physical properties needed to assess the viability of a potential biofuel are often not available; the only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years, most commonly to estimate thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Published group contribution methods are often limited in the types of functional groups they cover and in their range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
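
    Classical group contribution methods take the linear form property ≈ bias + Σ nᵢ·cᵢ over counts of functional groups; the record's neural-network method generalizes this to capture nonlinear structural effects. The sketch below shows only the classical linear form, with invented group contributions (not fitted values from the report).

```python
# Linear group-contribution estimate: property = bias + sum(count * contribution).
# These contributions are invented for illustration, not fitted values.
CETANE_CONTRIB = {"CH3": 6.0, "CH2": 5.5, "OH": -10.0}
BIAS = 10.0

def estimate_cetane(groups):
    """Estimate cetane number from functional-group counts,
    e.g. {"CH3": 2, "CH2": 14} for an n-hexadecane-like alkane."""
    return BIAS + sum(n * CETANE_CONTRIB[g] for g, n in groups.items())

print(estimate_cetane({"CH3": 2, "CH2": 14}))  # 10 + 2*6.0 + 14*5.5 = 99.0
```

    A neural-network variant replaces the weighted sum with a learned nonlinear function of the same group-count vector, which is what lets it stay accurate across wider ranges of molecular weight and structural complexity.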

  12. Estimating the proportion of persons with diabetes developing diabetic retinopathy in India: A systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    A T Jotheeswaran

    2016-01-01

    Background: Available evidence from India shows that the control of diabetes is poor in the majority of the population, which escalates the risk of complications. There is no systematic review estimating the magnitude of diabetic retinopathy (DR) in India. Materials and Methods: A systematic literature search was carried out in the Ovid Medline and EMBASE databases using MeSH and key search terms. Studies which reported the proportion of people with diabetes with DR in a representative community population were included. Two independent reviewers reviewed all the retrieved publications. Data were extracted using a predefined form. Review Manager software was used to perform a meta-analysis providing a pooled estimate. Included studies were assessed for methodological quality using selected items from the STROBE checklist. Results: Seven studies (1999–2014; n = 8315 persons with diabetes) were included in the review. In the meta-analysis, 14.9% (95% confidence interval [CI] 10.7–19.0%) of known diabetics aged ≥30 years and 18.1% (95% CI 14.8–21.4%) of those aged ≥50 years had DR. Heterogeneity around this estimate ranged from I² = 79–87%. No linear trend was observed between age and the proportion with DR. The overall methodological quality of the included studies was moderate. Conclusions: Early detection of DR is currently not prioritized in public health policies for noncommunicable diseases and blindness programs. Methodological issues in the studies suggest that the proportion of diabetics with DR is underestimated in the Indian population. Future research should emphasize more robust methodology for assessing diabetes and DR status.
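
    With heterogeneity as high as reported here, a pooled proportion is typically produced with a DerSimonian-Laird random-effects model, which widens the weights by an estimated between-study variance. The sketch below implements that standard estimator on invented study proportions, not the seven Indian studies in the review.

```python
import math

def dersimonian_laird(props, ns):
    """Random-effects pooled proportion (DerSimonian-Laird).
    props: per-study proportions; ns: per-study sample sizes."""
    # Within-study variances of a binomial proportion.
    v = [p * (1 - p) / n for p, n in zip(props, ns)]
    w = [1 / vi for vi in v]
    fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
    # Cochran's Q and the between-study variance tau^2.
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(props) - 1)) / c)
    # Random-effects weights include tau^2.
    w_star = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * pi for wi, pi in zip(w_star, props)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical DR proportions from seven surveys (values invented).
props = [0.12, 0.18, 0.10, 0.16, 0.21, 0.13, 0.15]
ns = [900, 1200, 700, 1500, 1100, 800, 2100]
pooled, ci = dersimonian_laird(props, ns)
print(round(pooled, 3), [round(x, 3) for x in ci])
```

    When tau² is large relative to the within-study variances, the weights flatten toward equality and the confidence interval widens, which is why heterogeneous reviews report wider CIs than a fixed-effect pooling would.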

  13. Chernobyl accident: revision of individual thyroid dose estimates for the children included in the cohort of the Belarusian-American study

    International Nuclear Information System (INIS)

    Minenko, Victor; Shemyakina, Elena; Tretyakevich, Sergey

    2000-01-01

    The observed sharp increase in the number of childhood-thyroid-cancer cases in Belarus that has occurred since the Chernobyl accident stimulated the undertaking in 1994 of a long-term American-Belarusian cohort study. That epidemiological study is aimed at revealing the carcinogenic effectiveness of 131I (the main contributor to the thyroid exposure) and at estimating the risk coefficient of thyroid disease, especially thyroid cancer, as a function of age at exposure. It is planned to actively follow 15,000 children (aged 0-18 at the time of the accident) sampled among about 40,000 who had their thyroid measured in vivo in 1986. Such direct thyroid measurements provide the basis for the initial thyroid dose estimates that have been established for the 40,000 children in the absence of personal interviews. As of August 1999, approximately 5,000 cohort subjects have been screened and interviewed. The initial estimates of individual thyroid dose are being revised for all of the cohort subjects that have been screened. The revision procedure of the 131I thyroid dose assessment consists of two parts: (1) re-analysis of the direct thyroid measurements and (2) analysis of the responses to the personal interviews, which were conducted in order to determine the kinetics of the radioiodine intake by the cohort subjects. Revised estimates of thyroid dose resulting from 131I intake are presented for a sample of 1,000 subjects residing in various areas of Belarus. The reasons for the differences between the initial and the revised thyroid dose estimates are discussed. In addition to the estimation of the internal thyroid dose from 131I, three minor contributors to the thyroid exposure are considered separately: (1) the internal exposure resulting from intake of short-lived radioiodines and radiotelluriums, (2) the internal exposure resulting from intake of other radionuclides (mainly radiocesiums), and (3) external exposure from radionuclides deposited on the ground.

  14. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    DEFF Research Database (Denmark)

    Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas

    2012-01-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting … the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error … sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  15. Development of a method for estimating oesophageal temperature by multi-locational temperature measurement inside the external auditory canal

    Science.gov (United States)

    Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi

    2017-09-01

    We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working in hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperature from the auditory canal temperatures and compared their fitness and errors. The rectal temperature increased or decreased faster than oesophageal temperature at the start or end of exercise in all conditions. Estimated temperatures agreed well with measured oesophageal temperatures, and the square of the correlation coefficient of the best-fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, the estimated oesophageal temperature tracked the measured value, with an overestimation of around 0.1 °C. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate oesophageal temperature from 24 to 40 °C for people performing exercise at a fixed location in a windless environment.

  16. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    … based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD. … Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. References: 1. Rickets, Chris A., "A TSP Software Maintenance Life Cycle", CrossTalk, March 2005. 2. Koch, Alan S., "TSP Can Be the Building Blocks for CMMI", CrossTalk, March 2005. 3. Hodgins, Brad; Rickets, …

  17. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report, the application of spatial estimation techniques, or kriging, to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques for reliably developing contour maps from various data sets is investigated. The estimator is developed theoretically, in a simplified fashion, using vector-matrix calculus. The practice of spatial estimation is discussed, and the estimator is then applied to two groundwater aquifer systems and also used to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly.
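
    Kriging predicts the value at an unsampled location as a weighted sum of the observations, with the weights solved from a covariance (or variogram) model plus an unbiasedness constraint. The ordinary-kriging sketch below uses an exponential covariance and invented borehole data; a useful property it demonstrates is that kriging reproduces the data exactly at sampled locations.

```python
import numpy as np

def ordinary_kriging(xy, z, query, sill=1.0, corr_range=50.0):
    """Ordinary kriging with an exponential covariance model
    C(h) = sill * exp(-h / corr_range). Solves the kriging system with
    a Lagrange multiplier enforcing that the weights sum to one."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    cov = sill * np.exp(-d / corr_range)
    # Augment the system with the unbiasedness constraint sum(w) = 1.
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = cov
    a[n, n] = 0.0
    d0 = np.linalg.norm(xy - query, axis=-1)
    b = np.append(sill * np.exp(-d0 / corr_range), 1.0)
    w = np.linalg.solve(a, b)[:n]
    return float(w @ z)

# Hypothetical borehole data: (x, y) locations and measured values.
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_kriging(xy, z, np.array([0.0, 0.0])))  # exact at a data point
```

    Evaluating the predictor over a grid of query points yields the contour maps discussed in the record; the same system also gives a kriging variance that maps where the estimate is least reliable.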

  18. Methods for cost estimation in software project management

    Science.gov (United States)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which the processes used in the software development field have changed makes forecasting the overall costs of a software project very difficult. Many researchers have considered this task unachievable, but there is a group of scientists for whom it can be solved using already known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic proposal for a mathematical model based on genetic programming is made, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle as a basis together with the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and the product life cycle, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.

  19. Consistency of Trend Break Point Estimator with Underspecified Break Number

    Directory of Open Access Journals (Sweden)

    Jingjing Yang

    2017-01-01

    This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.

  20. Structural Weight Estimation for Launch Vehicles

    Science.gov (United States)

    Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd

    2002-01-01

    This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist at several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects that may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MERs) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as Java provide new means to create greater depth of analysis tools that can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques that coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.

  1. Estimation of somatic development and mental state of children with neoplasmatic disease after long-term chemotherapy

    International Nuclear Information System (INIS)

    Korzon, M.; Kaminska, B.; Bohdan, Z.; Liberek, A.; Rokosz, K.

    1993-01-01

    33 children with neoplasmatic disease who had undergone long-term complex treatment were subjected to a single assessment of somatic development and to psychological examinations. These examinations indicate that long-term chemotherapy, radiotherapy, or surgical treatment had no negative influence on the patients' physical development. Psychological analysis showed mental disturbances in the children, which is connected with this special kind of disease and with the stress that the children and their parents had undergone. (author)

  2. Supermarket revolution in Asia and emerging development strategies to include small farmers.

    Science.gov (United States)

    Reardon, Thomas; Timmer, C Peter; Minten, Bart

    2012-07-31

    A "supermarket revolution" has occurred in developing countries in the past 2 decades. We focus on three specific issues that reflect the impact of this revolution, particularly in Asia: continuity in transformation, innovation in transformation, and unique development strategies. First, the record shows that the rapid growth observed in the early 2000s in China, Indonesia, Malaysia, and Thailand has continued, and the "newcomers"--India and Vietnam--have grown even faster. Although foreign direct investment has been important, the roles of domestic conglomerates and even state investment have been significant and unique. Second, Asia's supermarket revolution has exhibited unique pathways of retail diffusion and procurement system change. There has been "precocious" penetration of rural towns by rural supermarkets and rural business hubs, emergence of penetration of fresh produce retail that took much longer to initiate in other regions, and emergence of Asian retail developing-country multinational chains. In procurement, a symbiosis between modern retail and the emerging and consolidating modern food processing and logistics sectors has arisen. Third, several approaches are being tried to link small farmers to supermarkets. Some are unique to Asia, for example assembling into a "hub" or "platform" or "park" the various companies and services that link farmers to modern markets. Other approaches relatively new to Asia are found elsewhere, especially in Latin America, including "bringing modern markets to farmers" by establishing collection centers and multipronged collection cum service provision arrangements, and forming market cooperatives and farmer companies to help small farmers access supermarkets.

  3. Development of estimation system of knee extension strength using image features in ultrasound images of rectus femoris

    Science.gov (United States)

    Murakami, Hiroki; Watanabe, Tsuneo; Fukuoka, Daisuke; Terabayashi, Nobuo; Hara, Takeshi; Muramatsu, Chisako; Fujita, Hiroshi

    2016-04-01

    The term "locomotive syndrome" has been proposed to describe the state of requiring care due to musculoskeletal disorders, and its high-risk precursors. Reduction of knee extension strength is cited as one of the risk factors, and accurate measurement of the strength is needed for evaluation. Measurement of knee extension strength using a dynamometer is one of the most direct and quantitative methods. This study aims to develop a system for estimating the knee extension strength using ultrasound images of the rectus femoris muscle obtained with non-invasive ultrasonic diagnostic equipment. First, we extract the muscle area from the ultrasound images and determine image features, such as the thickness of the muscle. We combine these features with physical features, such as the patient's height, and build a regression model of the knee extension strength from training data. We have developed a system for estimating the knee extension strength by applying the regression model to the features obtained from test data. Using the test data of 168 cases, the correlation coefficient between the measured and estimated values was 0.82. This result suggests that the system can estimate knee extension strength with high accuracy.
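    A minimal sketch of the kind of regression model described, combining an image feature (muscle thickness) with a physical feature (height); the training data and resulting coefficients are hypothetical, not the study's:

```python
import numpy as np

# Hypothetical training data: [muscle thickness (cm), height (cm)] -> knee extension strength (N)
X = np.array([[2.1, 160], [2.8, 172], [1.9, 155], [3.2, 180], [2.5, 168], [2.2, 163]])
y = np.array([310.0, 450.0, 280.0, 520.0, 400.0, 340.0])

# Fit strength = b0 + b1*thickness + b2*height by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_strength(thickness, height):
    """Apply the fitted regression model to features from a new case."""
    return float(beta @ [1.0, thickness, height])

print(round(estimate_strength(2.6, 170), 1))
```

    The study's system would extract the thickness feature automatically from the segmented muscle region before applying such a model.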

  4. The South Wilmington Area remedial cost estimating methodology (RCEM) -- A planning tool and reality check for brownfield development

    International Nuclear Information System (INIS)

    Yancheski, T.B.; Swanson, J.E.

    1996-01-01

    The South Wilmington Area (SWA), which is comprised of 200 acres of multi-use urban lowlands adjacent to the Christina River, is a brownfields area that has been targeted for redevelopment/restoration as part of a major waterfront revitalization project for the City of Wilmington, Delaware. The vision for this riverfront development, which is being promoted by a state-funded development corporation, includes plans for a new harbor, convention and entertainment facilities, upscale residences, an urban wildlife refuge, and the restoration of the Christina River. However, the environmental quality of the SWA has been seriously impacted by an assortment of historic and current heavy industrial land uses since the late 1800s, and extensive environmental cleanup of this area will be required as part of any redevelopment plan. Given that the environmental cleanup cost will be a major factor in determining the overall economic feasibility of brownfield development in the SWA, a reliable means of estimating preliminary remedial costs, without the expense of costly investigative and engineering studies, was needed to assist with this redevelopment initiative. The primary chemicals-of-concern (COCs) area-wide are lead and petroleum compounds; however, there are hot-spot occurrences of polynuclear aromatic hydrocarbons (PAHs), PCBs, and other heavy metals such as arsenic and mercury.

  5. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    Science.gov (United States)

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  6. Estimates of emergency operating capacity in US manufacturing and nonmanufacturing industries - Volume 1: Concepts and Methodology

    Energy Technology Data Exchange (ETDEWEB)

    Belzer, D.B. (Pacific Northwest Lab., Richland, WA (USA)); Serot, D.E. (D/E/S Research, Richland, WA (USA)); Kellogg, M.A. (ERCE, Inc., Portland, OR (USA))

    1991-03-01

    Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner to allow evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key nonmanufacturing sectors. This volume lays out the general concepts and methods used to develop the emergency operating estimates. The historical analysis of capacity extends from 1974 through 1986. Some nonmanufacturing industries are included. In addition to mining and utilities, key industries in transportation, communication, and services were analyzed. Physical capacity and efficiency of production were measured. 3 refs., 2 figs., 12 tabs. (JF)

  7. The Practice of Cost Estimation for Decommissioning of Nuclear Facilities

    International Nuclear Information System (INIS)

    Davidova, Ivana; Desecures, Sylvain; Lexow, Thomas; Buonarroti, Stefano; Marini, Giuseppe; Pescatore, Claudio; Rehak, Ivan; Weber, Inge; ); Daniska, Vladimir; Linan, Jorge Borque; Caroll, Simon; Hedberg, Bjoern; De La Gardie, Fredrik; Haenggi, Hannes; Laguardia, Thomas S.; Ridpath, Andy

    2015-01-01

    Decommissioning of both commercial and R and D nuclear facilities is expected to increase significantly in the coming years, and the largest of such industrial decommissioning projects could command considerable budgets. Several approaches are currently being used for decommissioning cost estimation, with an international culture developing in the field. The present cost estimation practice guide was prepared in order to offer international actors specific guidance in preparing quality cost and schedule estimates to support detailed budgeting for the preparation of decommissioning plans, for the securing of funds and for decommissioning implementation. This guide is based on current practices and standards in a number of NEA member countries and aims to help consolidate the practice and process of decommissioning cost estimation so as to make it more widely understood. It offers a useful reference for the practitioner and for training programmes. The remainder of the report is divided into the following chapters:
    - Chapter 2 covers the purpose and nature of decommissioning cost estimates, approaches to cost estimation and the major elements of a cost estimate.
    - Chapter 3 examines the development of the integrated schedule of the activity-dependent work scope and the determination of the project critical path.
    - Chapter 4 describes the attributes of a quality assurance programme applicable to cost estimation, and the use of, and cautions in, benchmarking the estimate against other estimates or actual costs.
    - Chapter 5 describes the pyramidal structure of the cost study report, and the scope and content that should be included in it to ensure consistency and transparency in the estimate underpinnings.
    - Chapter 6 provides some observations, conclusions and recommendations on the use of this guide.

  9. Estimation of global solar radiation from sunshine hours for Warri ...

    African Journals Online (AJOL)

    Multiple linear regression models were developed to estimate the monthly mean daily sunshine hours using four parameters over a period of eleven years (1997 to 2007) for Warri, Nigeria (latitude 5° 34′ 21.0″); the parameters include relative humidity, maximum and minimum temperature, rainfall, and wind speed.
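    A sketch of the kind of multiple linear regression described; the monthly records below are invented placeholders, not the Warri data, and the column order is an assumption:

```python
import numpy as np

# Hypothetical monthly records: [rel. humidity %, max temp °C, min temp °C, rainfall mm, wind m/s]
X = np.array([
    [85, 31, 23, 250, 2.1],
    [78, 33, 24, 120, 2.5],
    [90, 30, 23, 310, 1.8],
    [70, 34, 25,  60, 2.9],
    [82, 32, 24, 180, 2.2],
    [75, 33, 24,  90, 2.6],
    [80, 32, 23, 200, 2.0],
])
y = np.array([3.8, 5.6, 3.1, 6.9, 4.5, 6.0, 4.9])   # mean daily sunshine hours

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(np.round(coef, 3), round(float(r2), 3))
```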

  10. Reliability of third molar development for age estimation in Gujarati population: A comparative study.

    Science.gov (United States)

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method, based on eight developmental stages, was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an Indian-specific formula, from the developmental stages of the third molar on orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence, it can be applied for age estimation in the present Gujarati population. Females were ahead of males in achieving dental maturity; thus, completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.

  11. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1992-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  12. Software cost estimation

    NARCIS (Netherlands)

    Heemstra, F.J.

    1993-01-01

    The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be

  13. A proposed method to estimate premorbid full scale intelligence quotient (FSIQ) for the Canadian Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) using demographic and combined estimation procedures.

    Science.gov (United States)

    Schoenberg, Mike R; Lange, Rael T; Saklofske, Donald H

    2007-11-01

    Establishing a comparison standard in neuropsychological assessment is crucial to determining change in function. There is no available method to estimate premorbid intellectual functioning for the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV provides normative data for both American and Canadian children aged 6 to 16 years. This study developed regression algorithms as a proposed method to estimate full-scale intelligence quotient (FSIQ) for the Canadian WISC-IV. Participants were the Canadian WISC-IV standardization sample (n = 1,100). The sample was randomly divided into two groups (development and validation). The development group was used to generate regression algorithms; 1 algorithm included only demographics, and 11 combined demographic variables with WISC-IV subtest raw scores. The algorithms accounted for 18% to 70% of the variance in FSIQ (standard error of estimate, SEE = 8.6 to 14.2). Estimated FSIQ correlated significantly with actual FSIQ (r = .30 to .80), and the majority of individual FSIQ estimates were within ±10 points of actual FSIQ. The demographic-only algorithm was less accurate than the algorithms combining demographic variables with subtest raw scores. The current algorithms yielded accurate estimates of current FSIQ for Canadian individuals aged 6 to 16 years. The potential application of the algorithms to estimate premorbid FSIQ is reviewed. While promising, clinical validation of the algorithms in a sample of children and/or adolescents with known neurological dysfunction is needed to establish them as a premorbid estimation procedure.
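    As a hedged sketch of how such a regression algorithm is applied: demographic variables and subtest raw scores enter a linear equation whose output is clamped to a plausible FSIQ range. The predictors, coefficients, and clamping bounds below are invented for illustration and are not the published Canadian WISC-IV algorithms:

```python
# Hypothetical demographics-plus-subtest regression of the kind described.
# Coefficients are illustrative placeholders, not fitted values.
def estimate_fsiq(parent_education_years, vocabulary_raw, matrix_reasoning_raw):
    est = (40.0
           + 1.2 * parent_education_years   # demographic predictor
           + 0.9 * vocabulary_raw           # subtest raw score
           + 1.1 * matrix_reasoning_raw)    # subtest raw score
    return max(40.0, min(160.0, est))       # clamp to a plausible FSIQ range

print(round(estimate_fsiq(14, 30, 20), 1))
```

    In premorbid estimation, the idea is that such predictors are relatively robust to acquired impairment, so the regression output approximates pre-injury FSIQ.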

  14. First Order Estimates of Energy Requirements for Pollution Control. Interagency Energy-Environment Research and Development Program Report.

    Science.gov (United States)

    Barker, James L.; And Others

    This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…

  15. Estimation of morbidity effects

    International Nuclear Information System (INIS)

    Ostro, B.

    1994-01-01

    Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations, or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region), were not included in this review. Second, studies that minimized confounding and omitted variables were included. For example, research that compared two cities or regions characterized as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for inclusion was that the study had to provide a reasonably complete analysis of the data, including a careful exploration of the primary hypothesis as well as examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates judged most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10

  16. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this
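    The grid-based calculation D̂ = N̂/Â can be illustrated with a common boundary-strip estimate of the effective sampling area, where a strip of width W is added around the trapping grid. The abundance, grid size, and strip width below are hypothetical:

```python
# Grid-based density: D = N / A, where the effective area adds a boundary
# strip of width W (often taken as half the mean maximum recapture distance)
# around the square trapping grid. All values are illustrative.
def grid_density(n_hat, grid_side_m, strip_m):
    side = grid_side_m + 2 * strip_m        # grid side plus strip on both sides
    area_ha = side * side / 10_000          # m^2 -> hectares
    return n_hat / area_ha

print(round(grid_density(n_hat=48, grid_side_m=100, strip_m=15), 2))  # animals per ha
```

    Distance-sampling (web) estimators avoid the strip-width assumption by estimating the detection function, and hence density, directly from the distance data.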

  17. Development of a package program for estimating ground level concentrations of radioactive gases

    International Nuclear Information System (INIS)

    Nilkamhang, W.

    1986-01-01

    A package program for estimating the ground-level concentration of radioactive gas from an elevated release was developed for use on an IBM PC microcomputer. The main program, GAMMA PLUME NT10, is based on the well-known VALLEY MODEL, a Fortran computer code intended for mainframe computers. Two options were added, namely, calculation of the radioactive gas ground-level concentration in Ci/m³ and of the dose equivalent rate in mrem/hr. In addition, a menu program and an editor program were developed to make the package easier to use, since options can be readily selected and input data modified as required through the keyboard. The accuracy and reliability of the program are almost identical to the mainframe version. The ground-level concentration of radioactive radon gas due to ore processing in the nuclear chemistry laboratory of the Department of Nuclear Technology was estimated. In processing radioactive ore at a rate of 2 kg/day, about 35 pCi/s of radioactive gas is released from a 14 m stack. When meteorological data for Don Muang (5-year averages, 1978-1982) were used, the maximum ground-level concentration and dose equivalent rate were found to be 0.00094 pCi/m³ and 5.0 × 10⁻¹⁰ mrem/hr, respectively. The processing time required for the above problem was about 7 minutes for any case on an IBM PC, which is acceptable for a computer of this class.
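    Codes of this kind typically evaluate a Gaussian plume model. The sketch below uses the textbook ground-level centerline formula (with ground reflection) for an elevated release; the wind speed and dispersion parameters are illustrative assumptions, not the VALLEY MODEL's actual algorithm or the Don Muang data:

```python
import math

def ground_level_conc(Q, u, H, sigma_y, sigma_z, y=0.0):
    """Gaussian plume ground-level concentration (Ci/m^3) for an elevated release.

    Q: source term (Ci/s), u: wind speed (m/s), H: effective stack height (m),
    sigma_y/sigma_z: dispersion parameters (m) at the downwind distance of
    interest, y: crosswind offset (m). Includes ground reflection.
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-H ** 2 / (2 * sigma_z ** 2)))

# Illustrative numbers echoing the abstract's scale: 35 pCi/s from a 14 m stack
Q = 35e-12  # Ci/s
print(ground_level_conc(Q, u=3.0, H=14.0, sigma_y=30.0, sigma_z=15.0))
```

    A full code sweeps downwind distance and stability class (which set sigma_y and sigma_z) to find the maximum ground-level concentration reported in studies like this one.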

  18. Development and Validation of a Calculator for Estimating the Probability of Urinary Tract Infection in Young Febrile Children.

    Science.gov (United States)

    Shaikh, Nader; Hoberman, Alejandro; Hum, Stephanie W; Alberty, Anastasia; Muniz, Gysella; Kurs-Lasky, Marcia; Landsittel, Douglas; Shope, Timothy

    2018-06-01

    Accurately estimating the probability of urinary tract infection (UTI) in febrile preverbal children is necessary to appropriately target testing and treatment. To develop and test a calculator (UTICalc) that can first estimate the probability of UTI based on clinical variables and then update that probability based on laboratory results. Review of electronic medical records of febrile children aged 2 to 23 months who were brought to the emergency department of Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania. An independent training database comprising 1686 patients brought to the emergency department between January 1, 2007, and April 30, 2013, and a validation database of 384 patients were created. Five multivariable logistic regression models for predicting risk of UTI were trained and tested. The clinical model included only clinical variables; the remaining models incorporated laboratory results. Data analysis was performed between June 18, 2013, and January 12, 2018. Documented temperature of 38°C or higher in children aged 2 months to less than 2 years. With the use of culture-confirmed UTI as the main outcome, cutoffs for high and low UTI risk were identified for each model. The resultant models were incorporated into a calculation tool, UTICalc, which was used to evaluate medical records. A total of 2070 children were included in the study. The training database comprised 1686 children, of whom 1216 (72.1%) were female and 1167 (69.2%) white. The validation database comprised 384 children, of whom 291 (75.8%) were female and 200 (52.1%) white. Compared with the American Academy of Pediatrics algorithm, the clinical model in UTICalc reduced testing by 8.1% (95% CI, 4.2%-12.0%) and decreased the number of UTIs that were missed from 3 cases to none. 
Compared with empirically treating all children with a leukocyte esterase test result of 1+ or higher, the dipstick model in UTICalc would have reduced the number of treatment delays by 10.6% (95% CI
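    A hedged sketch of the two-stage structure described: a clinical logistic regression model produces a prior probability of UTI, which a laboratory (dipstick) model then updates on the log-odds scale. All coefficients and predictor choices below are invented placeholders, not the fitted UTICalc models:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical clinical model: the real UTICalc coefficients were fit on the
# Pittsburgh training data and are not reproduced here.
def clinical_prob(age_lt_12mo, female_or_uncircumcised, temp_c, other_fever_source):
    z = (-4.2 + 0.8 * age_lt_12mo + 1.4 * female_or_uncircumcised
         + 0.5 * (temp_c - 38.0) - 0.9 * other_fever_source)
    return logistic(z)

def dipstick_prob(clinical_p, leuk_esterase_pos, nitrite_pos):
    # Update the prior (clinical) probability on the log-odds scale
    z = (math.log(clinical_p / (1 - clinical_p))
         + 2.0 * leuk_esterase_pos + 2.5 * nitrite_pos)
    return logistic(z)

p0 = clinical_prob(1, 1, 39.0, 0)
print(round(p0, 3), round(dipstick_prob(p0, 1, 1), 3))
```

    Cutoffs on these probabilities (high vs low risk) then drive the testing and treatment recommendations the study evaluates.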

  19. [The informed consent in international clinical trials including developing countries].

    Science.gov (United States)

    Montenegro Surís, Alexander; Monreal Agüero, Magda Elaine

    2008-01-01

    The informed consent procedure has been one of the most important controversies in ethical debates about clinical trials in developing countries. In this essay we present our recommendations on important aspects to consider in the informed consent procedure for clinical trials in developing countries. We performed a full review of publications identified in MEDLINE using combinations of the terms: informed consent, developing countries, less developed countries, and clinical trials. To protect volunteers in less developed countries, the importance of the community in the informed consent proceeding should be evaluated. Signing and dating the informed consent form is not always the best way to document informed consent. The informed consent form should be written by local translators, and alternative media of communication may be needed to convey the information to volunteers. Compared with developed countries, the informed consent proceeding in clinical trials in developing countries frequently requires additional effort. Pragmatic research is needed to implement informed consent procedures that assure subjects' voluntariness in each developing country. The main aspects to define in each clinical trial, for each country, are the influence of the community, the effective communication of the information, the documentation of the informed consent, and the control exercised by local authorities.

  20. Well-founded cost estimation validated by experience

    International Nuclear Information System (INIS)

    LaGuardia, T.S.

    2005-01-01

    Reliable cost estimating is one of the most important elements of decommissioning planning. Alternative technologies may be evaluated and compared based on their efficiency and effectiveness, and measured against a baseline cost as to the feasibility and benefits derived from the technology. When the plan is complete, those cost considerations ensure that it is economically sound and practical for funding. Estimates of decommissioning costs have been performed and published by many organizations for many different applications. The results often vary because of differences in the work scope: labor force cost, monetary considerations, oversight costs, the specific contaminated materials involved, the waste stream and peripheral costs associated with that type of waste, or applicable environmental compliance requirements. Many of these differences are unavoidable, since a reasonable degree of reliability and accuracy can only be achieved by developing decommissioning cost estimates on a case-by-case, site-specific basis. This paper describes the estimating methodology and process applied to develop decommissioning cost estimates. A major effort has been made to standardize these methodologies, and to understand the assumptions and bases that drive the costs. However, estimates are only as accurate as the information available from which to derive the costs. This information includes assumptions about the scope of the work, labor cost inputs, inflationary effects, and financial analyses that project these costs to the year of expenditure. Attempts to compare estimates for two facilities of similar design and size must clearly identify the assumptions used in developing the estimates, and comparison of actual costs versus estimated costs must reflect these same assumptions. For the nuclear industry to grow, decommissioning estimating tools must improve to keep pace with changing technology, regulations and stakeholder issues.
The decommissioning industry needs

  1. Generating human reliability estimates using expert judgment. Volume 1. Main report

    International Nuclear Information System (INIS)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from the evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
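
The direct numerical estimation technique can be illustrated with a minimal sketch: experts state HEPs directly, and the judgments are pooled on a log scale with uncertainty bounds. This is an illustrative simplification; the aggregation rule and the 5th/95th percentile bounds are assumptions, not the report's exact procedure.

```python
import math
import statistics

def aggregate_heps(expert_estimates):
    """Pool direct numerical HEP judgments on a log10 scale (geometric
    mean) and attach approximate 5th/95th percentile uncertainty bounds.
    An illustrative simplification, not the report's exact procedure."""
    logs = [math.log10(p) for p in expert_estimates]
    mean_log = statistics.mean(logs)
    sd_log = statistics.stdev(logs)
    return (10 ** mean_log,
            10 ** (mean_log - 1.645 * sd_log),
            10 ** (mean_log + 1.645 * sd_log))

# hypothetical judgments from four experts for one BWR task
hep, lower, upper = aggregate_heps([1e-3, 3e-3, 5e-4, 2e-3])
```

Working in log space reflects the common treatment of HEPs as approximately lognormal, so the pooled estimate is a geometric rather than arithmetic mean.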

  2. Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.

    Science.gov (United States)

    Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie

    2018-05-01

    Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 equations for estimation of LBM in daily clinical practice. The development and validation groups each included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, based on mid-arm muscle circumference (MAMC) or handgrip strength (HGS) and also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry (DXA) as the gold-standard reference method. The new equations were found to exhibit only small biases when compared with DXA, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and by the percentages of estimates that fell within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when it was applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. The MAMC- and HGS-based equations were thus found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
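
The bias, precision, and accuracy measures described in the abstract (median difference, interquartile range of the differences, percentage of estimates within 20% of measured LBM) can be sketched as follows; the patient values are hypothetical, not taken from the study.

```python
import statistics

def validation_metrics(estimated, measured):
    """Bias, precision and accuracy measures of the kind reported for
    the LBM equations: median difference vs the reference method, the
    interquartile range of the differences, and the percentage of
    estimates falling within 20% of measured LBM."""
    diffs = sorted(e - m for e, m in zip(estimated, measured))
    bias = statistics.median(diffs)
    q1, _, q3 = statistics.quantiles(diffs, n=4)
    p20 = 100.0 * sum(abs(e - m) <= 0.2 * m
                      for e, m in zip(estimated, measured)) / len(measured)
    return bias, q3 - q1, p20

est = [42.0, 50.0, 38.0, 55.0, 47.0]  # hypothetical equation outputs (kg)
dxa = [40.0, 52.0, 39.0, 53.0, 46.0]  # hypothetical DXA references (kg)
bias, iqr, p20 = validation_metrics(est, dxa)
```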

  3. Counting the cost: estimating the economic benefit of pedophile treatment programs.

    Science.gov (United States)

    Shanahan, M; Donato, R

    2001-04-01

    The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs, incorporating both the tangible and intangible costs of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in the parameter estimates that impact on the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.
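
The underlying cost-benefit arithmetic can be sketched in a few lines; all figures here are hypothetical placeholders, not the paper's estimates.

```python
def net_benefit(program_cost, offences_avoided, cost_per_offence):
    """Net social benefit of a treatment program: avoided victim costs
    (tangible plus intangible) minus program delivery cost. All figures
    below are hypothetical placeholders."""
    return offences_avoided * cost_per_offence - program_cost

# e.g. a program costing $500,000 that prevents 10 offences, each
# imposing $100,000 in combined tangible and intangible victim costs
nb = net_benefit(500_000, 10, 100_000)  # positive => net benefit to society
```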

  4. Development of a method for estimating total CH{sub 4} emission from rice paddies in Japan using the DNDC-Rice model

    Energy Technology Data Exchange (ETDEWEB)

    Katayanagi, Nobuko [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Fumoto, Tamon, E-mail: tamon@affrc.go.jp [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Hayano, Michiko [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Kyushu Okinawa Agricultural Research Center, National Agriculture and Food Research Organization, Anno 1742-1, Nishinoomote, Kagoshima 891-3102 (Japan); Takata, Yusuke; Kuwagata, Tsuneo; Shirato, Yasuhito [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan); Sawano, Shinji [Forestry and Forest Products Research Institute (FFPRI), 1 Matsunosato, Tsukuba, Ibaraki 305-8687 (Japan); Kajiura, Masako; Sudo, Shigeto; Ishigooka, Yasushi; Yagi, Kazuyuki [National Institute for Agro-Environmental Sciences, 3-1-3 Kannondai, Tsukuba, Ibaraki 305-8604 (Japan)

    2016-03-15

    Methane (CH{sub 4}) is a greenhouse gas, and paddy fields are one of its main anthropogenic emission sources. To mitigate this emission based on effective management measures, CH{sub 4} emission from paddy fields must be quantified at a national scale. In Japan, country-specific emission factors have been applied since 2003 to estimate national CH{sub 4} emission from paddy fields. However, this method cannot account for the effects of weather conditions and temporal variability of nitrogen fertilizer and organic matter application rates; thus, the estimated emission is highly uncertain. To improve the accuracy of national-scale estimates, we calculated country-specific emission factors using the DeNitrification–DeComposition-Rice (DNDC-Rice) model. First, we calculated CH{sub 4} emission from 1981 to 2010 using 986 datasets that included soil properties, meteorological data, and field management data. Using the simulated site-specific emission, we calculated annual mean emission for each of Japan's seven administrative regions, two water management regimes (continuous flooding and conventional mid-season drainage), and three soil drainage rates (slow, moderate, and fast). The mean emission was positively correlated with organic carbon input to the field, and we developed linear regressions for the relationships among the regions, water management regimes, and drainage rates. The regression results were within the range of published observation values for site-specific relationships between CH{sub 4} emission and organic carbon input rates. This suggests that the regressions provide a simplified method for estimating CH{sub 4} emission from Japanese paddy fields, though some modifications can further improve the estimation accuracy. - Highlights: • DNDC-Rice is a process-based model to simulate rice CH{sub 4} emission from rice paddies. • We simulated annual CH{sub 4} emissions from 986 paddy fields in Japan by DNDC-Rice. • Regional means of CH{sub 4
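
The regression step described above, fitting mean CH{sub 4} emission against organic carbon input within each region, water-management regime, and drainage class, amounts to an ordinary least-squares line; a sketch with made-up numbers:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x, the regression form
    used to relate mean CH4 emission to organic carbon input within a
    region / water-management / drainage class."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# made-up organic-C inputs (kg C/ha) and simulated CH4 emissions (kg/ha)
carbon = [0, 500, 1000, 1500, 2000]
ch4 = [120, 180, 235, 300, 360]
a, b = fit_line(carbon, ch4)  # intercept and slope of the regression
```

Applying such a fit per region and management class yields the simplified emission factors the abstract describes.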

  5. Survey indicated that core outcome set development is increasingly including patients, being conducted internationally and using Delphi surveys.

    Science.gov (United States)

    Biggane, Alice M; Brading, Lucy; Ravaud, Philippe; Young, Bridget; Williamson, Paula R

    2018-02-17

    There are numerous challenges in including patients in a core outcome set (COS) study, and these can vary depending on the patient group. This study describes current efforts to include patients in the development of COS, with the aim of identifying areas for further improvement and study. Using the COMET database, corresponding authors of COS projects registered or published from 1 January 2013 to 2 February 2017 were invited via a personalised email to participate in a short online survey. The survey and emails were constructed to maximise the response rate by following the academic literature on enhancing survey responses. Personalised reminder emails were sent to non-responders. This survey explored the frequency of patient input in COS studies, who was involved, what methods were used and whether or not the COS development was international. One hundred and ninety-two COS developers were sent the survey. Responses were collected from 21 February 2017 until 7 May 2017. One hundred and forty-six unique developers responded, yielding a 76% response rate and data in relation to 195 unique COSs (as some developers had worked on multiple COSs). The focus here is on their responses regarding the 162 COSs at the published, completed or ongoing stages of development. Inclusion of patient participants was indicated in 87% (141/162) of COSs at the published, completed or ongoing stages and in over 94% (65/69) of ongoing COS projects. Nearly half (65/135) of COSs included patient participants from two or more countries and 22% (30/135) included patient participants from five or more countries. The Delphi survey was reported as being used singly or in combination with other methods in 85% (119/140) of projects. Almost a quarter (16/65) of ongoing studies reported using a combination of qualitative interviews, Delphi survey and consensus meeting.
These findings indicated that the Delphi survey is the most popular method of facilitating patient participation, while the combination of

  6. Venture Guidance Appraisal cost estimates for groundwater protection Environmental Impact Statement

    International Nuclear Information System (INIS)

    Moyer, R.A.

    1987-01-01

    Cost estimates were prepared for closure options at criteria waste sites and alternatives for new disposal facilities for hazardous wastes, mixed wastes, low level radioactive wastes and slurry from liquid waste treatment facilities. Because these cost estimates will be used in the Groundwater Protection EIS, the goal was to develop ''enveloping'' costs, i.e., the alternative or option chosen for execution at a later date should cost no more than the estimate. This report summarizes scenarios for making detailed cost estimates. Also included are unit costs for disposition of potential excavations, for operational activities, and for groundwater monitoring and site maintenance after closure of the site. The cost numbers presented are intended for study purposes only and not for budgetary activities

  7. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced.

  8. Including capabilities of local actors in regional economic development: Empirical results of local seaweed industries in Sulawesi

    Directory of Open Access Journals (Sweden)

    Mark T.J. Vredegoor

    2013-11-01

    Full Text Available Stimson et al. (2009) developed one of the most relevant and well-known models for Regional Economic Development. This model covers the most important factors related to the economic development question. However, it excludes the social components of development. The local community should be included in the development of a region. This paper introduces into the Stimson model “Skills” and “Knowledge” for local actors, indicating the capabilities at the individual level, and introduces “Human Coordination” for the capabilities at the collective level. In our empirical research we looked at the Indonesian seaweed market with a specific focus on the region of Baubau. This region was chosen because there are hardly any economic developments there. Furthermore, this study focuses on the poorer community, who are trying to improve their situation by the cultivation of seaweed. Eighteen local informants were interviewed, in addition to informants from educational and governmental institutions in the cities of Jakarta, Bandung and Yogyakarta. The informants selected had a direct or indirect relationship with the region of Baubau. With the support of the empirical data from this region we can confirm that it is worthwhile to include the local community in the model for regional economic development. The newly added variables, Skills and Knowledge at the individual level and Human Coordination at the collective level, were supported by the empirical material. This is an indication that including the new variables can give regional economic modelling an extra dimension. In this way we think that it becomes more explicit that “endogenous” means that the people, or variables closely related to them, should be more explicitly included in models trying to capture Regional Economic Development, or rephrased as Local Economic Development. Keywords: Regional and endogenous development; Fisheries and seaweed

  9. Lift/cruise fan V/STOL technology aircraft design definition study. Volume 3: Development program and budgetary estimates

    Science.gov (United States)

    Obrien, W. J.

    1976-01-01

    The aircraft development program, budgetary estimates in CY 1976 dollars, and cost reduction program variants are presented. Detailed cost matrices are also provided for the mechanical transmission system, turbotip transmission system, and the thrust vector hoods and yaw doors.

  10. Estimation of full moment tensors, including uncertainties, for earthquakes, volcanic events, and nuclear explosions

    Science.gov (United States)

    Alvizuri, Celso R.

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we show the misfit function in eigenvalue space, represented by a lune. We identify three subsets of the catalog: (1) 6 isotropic events, (2) 5 tensional crack events, and (3) a swarm of 14 events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model. Proper characterization of uncertainties for full moment tensors is critical for distinguishing among physical models of source processes. A seismic moment tensor is a 3x3 symmetric matrix that provides a compact representation of a seismic source. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms for each moment tensor and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M0 for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M0, we first convert the misfit function to a probability function. The uncertainty, or
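
The grid-search-plus-probability procedure described above can be sketched in simplified form. The real search runs over the six-dimensional moment-tensor space; here a one-dimensional toy parameter stands in, and the exponential misfit-to-probability conversion is an assumed illustrative form.

```python
import math

def grid_search(candidates, misfit, scale=1.0):
    """Evaluate a waveform-misfit function for every candidate source,
    keep the minimum-misfit solution, and convert misfits to a
    normalized probability over the grid: p_i proportional to
    exp(-misfit_i / scale). The exponential conversion is an assumed
    illustrative form, not the paper's exact mapping."""
    misfits = [misfit(c) for c in candidates]
    m0 = min(misfits)
    best = candidates[misfits.index(m0)]
    weights = [math.exp(-(m - m0) / scale) for m in misfits]
    total = sum(weights)
    return best, [w / total for w in weights]

# toy 1-D stand-in for the 6-D moment-tensor space: strike angles,
# with a synthetic misfit minimized at 60 degrees
angles = list(range(0, 180, 10))
best, probs = grid_search(angles, lambda a: ((a - 60) / 30.0) ** 2)
```

The normalized probabilities over the grid play the role of the probability function used to characterize uncertainty around the minimum-misfit solution.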

  11. Advancing development of a limit reference point estimator for sea turtles, and evaluating methods for applying local management to highly migratory species

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — SWFSC is developing tools for estimation of limit reference points for marine turtles. These tools are being applied initially to estimate a limit reference point...

  12. Energy-Efficient Channel Estimation in MIMO Systems

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available The emergence of MIMO communications systems as practical high-data-rate wireless communications systems has created several technical challenges to be met. On the one hand, there is potential for enhancing system performance in terms of capacity and diversity. On the other hand, the presence of multiple transceivers at both ends has created additional cost in terms of hardware and energy consumption. For coherent detection, as well as for optimizations such as water filling and beamforming, it is essential that the MIMO channel be known. However, due to the presence of multiple transceivers at both the transmitter and receiver, the channel estimation problem is more complicated and costly compared to a SISO system. Several solutions have been proposed to minimize the computational cost, and hence the energy spent in channel estimation of MIMO systems. We present a novel method of minimizing the overall energy consumption. Unlike existing methods, we consider the energy spent during the channel estimation phase, which includes transmission of training symbols, storage of those symbols at the receiver, and channel estimation at the receiver. We develop a model that is independent of the hardware or software used for channel estimation, and use a divide-and-conquer strategy to minimize the overall energy consumption.
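
The trade-off the abstract describes, where more training symbols cost transmit, storage, and estimation energy but a short training sequence yields a poor channel estimate, can be sketched with a toy energy model. All terms below are assumptions for illustration, not the paper's actual model.

```python
def total_energy(n_train, e_tx, e_store, e_est):
    """Toy energy budget for MIMO channel estimation: each of n_train
    training symbols costs transmit, storage, and estimation energy,
    while a short training sequence yields a poor channel estimate and
    wastes energy downstream (modelled here as 100/n_train). All terms
    are illustrative assumptions, not the paper's model."""
    return n_train * (e_tx + e_store + e_est) + 100.0 / n_train

# sweep the training length and pick the energy-minimizing value
best_n = min(range(1, 51), key=lambda n: total_energy(n, 1.0, 0.2, 0.3))
```

The interior minimum illustrates why minimizing estimation energy alone (or training energy alone) does not minimize the overall consumption.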

  13. A Web-Based System for Bayesian Benchmark Dose Estimation.

    Science.gov (United States)

    Shao, Kan; Shapiro, Andrew J

    2018-01-11

    Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
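
The MCMC-based estimation at the core of such a Bayesian BMD system can be sketched with a minimal Metropolis sampler for a one-parameter dose-response model; the model form, the data, and the 10% extra-risk BMD definition are illustrative assumptions, not the BBMD system's actual implementation.

```python
import math
import random

def log_lik(b, doses, n, events):
    """Binomial log-likelihood for a one-parameter exponential
    dose-response model p(d) = 1 - exp(-b*d) (illustrative only)."""
    ll = 0.0
    for d, nt, k in zip(doses, n, events):
        p = min(max(1 - math.exp(-b * d), 1e-9), 1 - 1e-9)
        ll += k * math.log(p) + (nt - k) * math.log(1 - p)
    return ll

random.seed(1)
doses, n, events = [0.5, 1.0, 2.0], [50, 50, 50], [5, 10, 19]  # hypothetical study
b, ll = 0.1, log_lik(0.1, doses, n, events)
samples = []
for _ in range(5000):                      # Metropolis random walk on b
    prop = abs(b + random.gauss(0, 0.02))  # reflect proposals at zero
    ll_prop = log_lik(prop, doses, n, events)
    if math.log(random.random()) < ll_prop - ll:
        b, ll = prop, ll_prop
    samples.append(b)

# posterior distribution of the BMD at 10% extra risk: 1 - exp(-b*BMD) = 0.1
bmds = sorted(-math.log(0.9) / s for s in samples[1000:])
bmd_median = bmds[len(bmds) // 2]
bmdl = bmds[int(0.05 * len(bmds))]  # 5th percentile, analogous to a BMDL
```

Because the output is a posterior sample rather than a point estimate, distributional quantities such as the BMDL fall out directly, which is the advantage the abstract claims for the Bayesian framework.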

  14. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  15. Multiple Imputation for Estimating the Risk of Developing Dementia and Its Impact on Survival

    OpenAIRE

    Yu, Binbing; Saczynski, Jane S.; Launer, Lenore J.

    2010-01-01

    Dementia, Alzheimer’s disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. I...

  16. Probabilistic estimates of drought impacts on agricultural production

    Science.gov (United States)

    Madadgar, Shahrbanou; AghaKouchak, Amir; Farahmand, Alireza; Davis, Steven J.

    2017-08-01

    Increases in the severity and frequency of drought in a warming climate may negatively impact agricultural production and food security. Unlike previous studies that have estimated agricultural impacts of climate conditions using single-crop yield distributions, we develop a multivariate probabilistic model that uses projected climatic conditions (e.g., precipitation amount or soil moisture) throughout a growing season to estimate the probability distribution of crop yields. We demonstrate the model by an analysis of the historical period 1980-2012, including the Millennium Drought in Australia (2001-2009). We find that precipitation and soil moisture deficits in dry growing seasons reduced the average annual yield of the five largest crops in Australia (wheat, broad beans, canola, lupine, and barley) by 25-45% relative to the wet growing seasons. Our model can thus produce region- and crop-specific agricultural sensitivities to climate conditions and variability. Probabilistic estimates of yield may help decision-makers in government and business to quantitatively assess the vulnerability of agriculture to climate variations. We develop a multivariate probabilistic model that uses precipitation to estimate the probability distribution of crop yields. The proposed model shows how the probability distribution of crop yield changes in response to droughts. During Australia's Millennium Drought, precipitation and soil moisture deficits reduced the average annual yield of the five largest crops.
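
The core idea of conditioning yield distributions on growing-season climate can be sketched empirically. The seasons and yields below are synthetic, and the simple dry/wet split stands in for the study's multivariate probabilistic model.

```python
import statistics

def yield_by_season(precip, yields, threshold):
    """Condition yields on growing-season precipitation: mean yield in
    dry seasons (below threshold) vs wet seasons. Synthetic numbers,
    for illustration only."""
    dry = [y for p, y in zip(precip, yields) if p < threshold]
    wet = [y for p, y in zip(precip, yields) if p >= threshold]
    return statistics.mean(dry), statistics.mean(wet)

precip = [200, 450, 180, 500, 220, 480, 300, 520]  # mm per season
yields = [1.1, 2.0, 0.9, 2.2, 1.2, 1.9, 1.5, 2.1]  # t/ha
dry_mean, wet_mean = yield_by_season(precip, yields, 350)
reduction = 100.0 * (1 - dry_mean / wet_mean)  # % yield loss in dry seasons
```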

  17. Development of a Computer Code for the Estimation of Fuel Rod Failure

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, I.H.; Ahn, H.J. [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    1997-12-31

    Much research has already been performed to obtain information on the degree of fuel rod failure from the primary coolant activities of operating PWRs in the last few decades. The computer codes currently in use for domestic nuclear power plants, such as the CADE code and the ABB-CE codes developed by Westinghouse and ABB-CE, respectively, still give significant overall errors in estimating the failed fuel rods. In addition, with the CADE code it is difficult to predict the degree of fuel rod failure during the transient period of nuclear reactor operation, whereas the ABB-CE codes are relatively more difficult for end-users to use. In particular, the rapid progress made recently in computer hardware and software systems demands that such computer programs be more versatile and user-friendly. While the MS Windows system, centered on the graphic user interface and multitasking, is now in widespread use, the computer codes currently employed at the nuclear power plants, such as the CADE and ABB-CE codes, can only be run on the DOS system. Moreover, it is desirable to have a computer code for fuel rod failure estimation that can directly use the radioactivity data obtained from the on-line monitoring system of the primary coolant activity. The main purpose of this study is, therefore, to develop a Windows computer code that can predict the location, the number of failed fuel rods, and the degree of failure using the radioactivity data obtained from the primary coolant activity for PWRs. Another objective is to combine this computer code with the on-line monitoring system of the primary coolant radioactivity at the Kori 3 and 4 operating nuclear power plants and enable their combined use for on-line evaluation of the number and degree of fuel rod failures. (author). 49 refs., 85 figs., 30 tabs.

  18. Development of analytical method used for the estimation of potassium amide in liquid ammonia at HWP (Tuticorin)

    International Nuclear Information System (INIS)

    Ramanathan, A.V.

    2007-01-01

    Potassium amide in liquid ammonia is used as a homogeneous catalyst in the mono-thermal ammonia-hydrogen isotopic chemical exchange process employed for the manufacture of heavy water. Estimation of the concentration of potassium amide in liquid ammonia is vital for checking whether it is sufficient for catalysis in the isotopic exchange towers or for purification in the purifiers of the Heavy Water Plants. This estimation was carried out earlier by the conventional method involving evaporation of ammonia, decomposition of potassium amide with water and titration of the liberated ammonia with sulphuric acid. This method has been replaced by a newly developed method involving direct titration of potassium amide in ammonia with ammonium bromide. The new method is based on the principle that ammonium bromide and potassium amide act as acid and base, respectively, in the non-aqueous solvent medium, liquid ammonia. It has not only proved to be an alternative to the conventional method of estimating potassium amide in liquid ammonia, but is also an improved analytical method, because it is faster (with fewer steps), more accurate, safer (as it excludes the use of the corrosive sulphuric acid needed for the conventional method) and more convenient (as it does not need specially designed apparatus or an inert gas like dry nitrogen, as used in the conventional method). (author)
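
The stoichiometry behind the direct titration is a 1:1 acid-base neutralization in liquid ammonia, so the amide concentration follows directly from the titrant volume; the sample volumes and concentrations below are illustrative.

```python
def amide_concentration(v_titrant_ml, c_titrant, v_sample_ml):
    """KNH2 concentration (mol/L) from direct titration with NH4Br in
    liquid ammonia. The acid-base reaction is 1:1:
        KNH2 + NH4Br -> KBr + 2 NH3
    so moles of amide equal moles of NH4Br delivered at the end point."""
    moles_amide = (v_titrant_ml / 1000.0) * c_titrant
    return moles_amide / (v_sample_ml / 1000.0)

# e.g. 12.5 mL of 0.10 M NH4Br neutralizes a 25.0 mL sample
conc = amide_concentration(12.5, 0.10, 25.0)
```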

  19. Developing a model to estimate the potential impact of municipal investment on city health.

    Science.gov (United States)

    Whitfield, Malcolm; Machaczek, Katarzyna; Green, Geoff

    2013-10-01

    This article summarizes a process which exemplifies the potential impact of municipal investment on the burden of cardiovascular disease (CVD) in city populations. We report on 'Developing an evidence-based approach to city public health planning and investment in Europe' (DECiPHEr), a project part-funded by the European Union. It had twin objectives: first, to develop and validate a vocational educational training package for policy makers and political decision takers; second, to use this opportunity to iterate a robust and user-friendly investment tool for maximizing the public health impact of 'mainstream' municipal policies, programs and investments. There were seven stages in the development process, shared by an academic team from Sheffield Hallam University and partners from four cities drawn from the WHO European Healthy Cities Network, and five iterations of the model resulted from this process. The initial focus was CVD as the biggest cause of death and disability in Europe. Our original prototype 'cost offset' model was confined to proximal determinants of CVD, utilizing modified 'Framingham' equations to estimate the impact of population-level cardiovascular risk factor reduction on future demand for acute hospital admissions. The DECiPHEr iterations first extended the scope of the model to distal determinants and then focused progressively on practical interventions. Six key domains of local influence on population health were introduced into the model by the development process: education, housing, environment, public health, economy and security. Deploying a realist synthesis methodology, the model then connected distal with proximal determinants of CVD. Existing scientific evidence and cities' experiential knowledge were 'plugged in' or 'triangulated' to elaborate the causal pathways from domain interventions to public health impacts.
A key product is an enhanced version of the cost offset model, named Sheffield Health Effectiveness Framework

  20. Estimates of bias and uncertainty in recorded external dose

    International Nuclear Information System (INIS)

    Fix, J.J.; Gilbert, E.S.; Baumgartner, W.V.

    1994-10-01

    A study is underway to develop an approach to quantify bias and uncertainty in recorded dose estimates for workers at the Hanford Site based on personnel dosimeter results. This paper focuses on selected experimental studies conducted to better define response characteristics of Hanford dosimeters. The study is more extensive than the experimental studies presented in this paper and includes detailed consideration and evaluation of other sources of bias and uncertainty. Hanford worker dose estimates are used in epidemiologic studies of nuclear workers. A major objective of these studies is to provide a direct assessment of the carcinogenic risk of exposure to ionizing radiation at low doses and dose rates. Considerations of bias and uncertainty in the recorded dose estimates are important in the conduct of this work. The method developed for use with Hanford workers can be considered an elaboration of the approach used to quantify bias and uncertainty in estimated doses for personnel exposed to radiation as a result of atmospheric testing of nuclear weapons between 1945 and 1962. This approach was first developed by a National Research Council (NRC) committee examining uncertainty in recorded film badge doses during atmospheric tests (NRC 1989). It involved quantifying both bias and uncertainty from three sources (i.e., laboratory, radiological, and environmental) and then combining them to obtain an overall assessment. Sources of uncertainty have been evaluated for each of three specific Hanford dosimetry systems (i.e., the Hanford two-element film dosimeter, 1944-1956; the Hanford multi-element film dosimeter, 1957-1971; and the Hanford multi-element TLD, 1972-1993) used to estimate personnel dose throughout the history of Hanford operations. Laboratory, radiological, and environmental sources of bias and uncertainty have been estimated based on historical documentation and, for angular response, on selected laboratory measurements
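
The combination of bias and uncertainty from the three sources (laboratory, radiological, environmental) can be sketched as follows; treating biases as multiplicative factors and combining log-space uncertainties in quadrature is an assumed simplification of the NRC-style approach, not the report's exact formulas.

```python
import math

def combine_sources(sources):
    """Combine (bias factor, geometric SD) pairs from independent
    sources: bias factors multiply, and log-space uncertainties add
    in quadrature to give an overall geometric SD. A simplified sketch
    of the general idea, not the report's exact formulas."""
    bias, var_log = 1.0, 0.0
    for b, gsd in sources:
        bias *= b
        var_log += math.log(gsd) ** 2
    return bias, math.exp(math.sqrt(var_log))

# hypothetical laboratory, radiological, environmental components
bias, gsd = combine_sources([(1.10, 1.2), (0.95, 1.3), (1.00, 1.1)])
```

A recorded dose would then be corrected by the overall bias factor, with the overall geometric SD describing the remaining uncertainty.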

  1. Approaches to Refining Estimates of Global Burden and Economics of Dengue

    Science.gov (United States)

    Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.

    2014-01-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, in economic burden estimates. Dengue disease varies across time, geography, and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host factors such as immune status, age, and pre-existing medical conditions all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of the economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data, to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data.
As promising tools

  2. 15 CFR 2006.1 - Information to be included in petition.

    Science.gov (United States)

    2010-01-01

    ... property right, or foreign direct investment matter for which the rights of the United States under the... nature of any foreign direct investment proposed by the United States person, including estimates of... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Information to be included in petition...

  3. Development of Mathematical Model and Analysis Code for Estimating Drop Behavior of the Control Rod Assembly in the Sodium Cooled Fast Reactor

    International Nuclear Information System (INIS)

    Oh, Se-Hong; Kang, SeungHoon; Choi, Choengryul; Yoon, Kyung Ho; Cheon, Jin Sik

    2016-01-01

    On receiving the scram signal, the control rod assemblies are released and fall into the reactor core under their own weight. The drop time and falling velocity of the control rod assembly must therefore be estimated for the safety evaluation. There are three typical ways to estimate the drop behavior of the control rod assembly in scram action: experimental, numerical, and theoretical methods. However, experimental and numerical (CFD) methods require considerable cost and time, which makes them difficult to apply in the initial design process. In this study, a mathematical model and a theoretical analysis code have been developed to estimate the drop behavior of the control rod assembly and provide the underlying data for design optimization. A simplified control rod assembly model is considered to minimize the uncertainty in the development process, and the hydraulic circuit analysis technique is adopted to evaluate the internal/external flow distribution of the control rod assembly. Finally, the theoretical analysis code (named HEXCON) has been developed based on the mathematical model. To verify the reliability of the developed code, a CFD analysis was conducted, a calculation using the developed analysis code was carried out under the same condition, and the two results were compared
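
A minimal version of the equation of motion that such a theoretical code integrates can be sketched as below. The masses, volumes, and drag coefficient are invented placeholders, and the real HEXCON model additionally resolves the internal/external hydraulic circuit, which this sketch does not.

```python
def drop_time(mass=50.0, g=9.81, rho=850.0, volume=0.03,
              cd_a=0.05, drop_height=1.0, dt=1e-4):
    """Explicit time integration of a simplified control-rod drop:

        m dv/dt = m*g - rho*g*V - 0.5*rho*(Cd*A)*v^2

    Weight minus buoyancy minus quadratic drag in sodium.
    All parameter values are illustrative, not SFR design data.
    """
    v, x, t = 0.0, 0.0, 0.0
    while x < drop_height:
        drag = 0.5 * rho * cd_a * v * v
        buoyancy = rho * g * volume
        a = (mass * g - buoyancy - drag) / mass
        v += a * dt
        x += v * dt
        t += dt
    return t, v
```

The returned velocity stays below the terminal velocity implied by the force balance, which gives a quick sanity check on any more detailed analysis.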

  4. Investigation of circulating temperature fluctuations of the primary coolant in order to develop an enhanced MTC estimator for VVER-440 reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kiss, Sandor; Lipcsei, Sandor [Hungarian Academy of Sciences, Budapest (Hungary). Centre for Energy Research - MTA

    2017-09-15

    Our aim was to develop a method based on noise diagnostics for the estimation of the moderator temperature coefficient of reactivity (MTC) for the Paks VVER-440 units in normal operation. The method requires determining core average neutron flux and temperature fluctuations. The circulation period of the primary coolant, transfer properties of the steam generators, as well as the source and the propagation of the temperature perturbations and the proportions of the perturbation components were investigated in order to estimate the feedback caused by the circulation of the primary coolant. Finally, after developing the new MTC estimator, determining its frequency range and optimal parameters, trends were produced based on an overall evaluation of measurements made with standard instrumentation during a one-year-long period at Paks NPP.

  5. A new system for seismic yield estimation of underground explosions

    International Nuclear Information System (INIS)

    Murphy, J.R.

    1991-01-01

    Research conducted over the past decade has led to the development of a number of innovative procedures for estimating the yields of underground nuclear explosions based on systematic analyses of digital seismic data recorded from these tests. In addition, a wide variety of new data regarding the geophysical environments at Soviet test locations have now become available as a result of the Joint Verification Experiment (JVE) and associated data exchanges. The system described in this paper represents an attempt to integrate all these new capabilities and data into a comprehensive operational prototype which can be used to obtain optimum seismic estimates of explosion yield together with quantitative measures of the uncertainty in those estimates. The implementation of this system has involved a wide variety of technical tasks, including the development of a comprehensive seismic database and related database access software, formulation of a graphical test site information interface for accessing available information on explosion source conditions, design of an interactive seismic analyst station for use in processing the observed data to extract the required magnitude measures and the incorporation of formal statistical analysis modules for use in yield estimation and assessment
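
Seismic yield estimation of this kind ultimately rests on inverting a magnitude-yield relation of the form mb = a + b*log10(Y). A hedged sketch follows; the constants and magnitude scatter are chosen purely for illustration and are not calibrated values for any test site.

```python
import math

def yield_from_mb(mb, a=4.45, b=0.75, sigma_mb=0.1):
    """Invert mb = a + b*log10(Y) for yield Y (kt) and attach an
    approximate 95% multiplicative uncertainty factor derived from
    the magnitude scatter. Constants are illustrative only.
    """
    log_y = (mb - a) / b
    y = 10 ** log_y
    factor = 10 ** (1.96 * sigma_mb / b)  # ~95% multiplicative bound
    return y, factor

y, f = yield_from_mb(6.0)  # yield estimate and uncertainty factor
```

Because the relation is log-linear, magnitude uncertainty maps into a multiplicative (factor-of-F) yield uncertainty, which is why the system reports quantitative uncertainty measures alongside the point estimate.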

  6. Development and application of a mechanistic model to estimate emission of nitrous oxide from UK agriculture

    International Nuclear Information System (INIS)

    Brown, L.; Jarvis, S.C.; Syed, B.; Goulding, K.W.T.; Li, C.

    2002-01-01

    A mechanistic model of N2O emission from agricultural soil (DeNitrification-DeComposition - DNDC) was modified for application to the UK, and was used as the basis of an inventory of N2O emission from UK agriculture in 1990. UK-specific input data were added to DNDC's database and the ability to simulate daily C and N inputs from grazing animals and applied animal waste was added to the model. The UK version of the model, UK-DNDC, simulated emissions from 18 different crop types on the 3 areally dominant soils in each county. Validation of the model at the field scale showed that predictions matched observations well. Emission factors for the inventory were calculated from estimates of N2O emission from UK-DNDC, in order to maintain direct comparability with the IPCC approach. These, along with activity data, were included in a transparent spreadsheet format. Using UK-DNDC, the estimate of N2O-N emission from UK current agricultural practice in 1990 was 50.9 Gg. This total comprised 31.7 Gg from the soil sector, 5.9 Gg from animals and 13.2 Gg from the indirect sector. The range of this estimate (using the range of soil organic C for each soil used) was 30.5-62.5 Gg N. Estimates of emissions in each sector were compared to those calculated using the IPCC default methodology. Emissions from the soil and indirect sectors were smaller with the UK-DNDC approach than with the IPCC methodology, while emissions from the animal sector were larger. The model runs suggested a relatively large emission from agricultural land that was not attributable to current agricultural practices (33.8 Gg in total, 27.4 Gg from the soil sector). This 'background' component is partly the result of historical agricultural land use. It is not normally included in inventories of emission, but would increase the total emission of N2O-N from agricultural land in 1990 to 78.3 Gg. (Author)
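
The emission-factor step described above, expressing simulated N2O emission as a fraction of applied N net of the background component for comparability with the IPCC approach, can be sketched as follows; the numbers are invented, not UK-DNDC output.

```python
def emission_factor(simulated_emission_kg, background_kg, n_applied_kg):
    """IPCC-style emission factor: fraction of applied N emitted as N2O-N,
    after subtracting the 'background' emission not attributable to
    current practice. All inputs here are illustrative placeholders.
    """
    if n_applied_kg <= 0:
        raise ValueError("N input must be positive")
    return (simulated_emission_kg - background_kg) / n_applied_kg

# Hypothetical per-hectare figures for one crop/soil combination
ef = emission_factor(simulated_emission_kg=2.5, background_kg=1.0,
                     n_applied_kg=120.0)
```

Subtracting the background term before dividing is what keeps the factor comparable with IPCC defaults, since those also describe emissions induced by current N inputs.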

  7. Development and application of a mechanistic model to estimate emission of nitrous oxide from UK agriculture

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.; Jarvis, S.C. [Institute of Grassland and Environmental Research, Okehampton (United Kingdom); Syed, B. [Cranfield Univ., Silsoe (United Kingdom). Soil Survey and Land Research Centre; Sneath, R.W.; Phillips, V.R. [Silsoe Research Inst. (United Kingdom); Goulding, K.W.T. [Institute of Arable Crops Research, Rothamsted (United Kingdom); Li, C. [University of New Hampshire (United States). Inst. for the Study of Earth, Oceans and Space

    2002-07-01

    A mechanistic model of N2O emission from agricultural soil (DeNitrification-DeComposition - DNDC) was modified for application to the UK, and was used as the basis of an inventory of N2O emission from UK agriculture in 1990. UK-specific input data were added to DNDC's database and the ability to simulate daily C and N inputs from grazing animals and applied animal waste was added to the model. The UK version of the model, UK-DNDC, simulated emissions from 18 different crop types on the 3 areally dominant soils in each county. Validation of the model at the field scale showed that predictions matched observations well. Emission factors for the inventory were calculated from estimates of N2O emission from UK-DNDC, in order to maintain direct comparability with the IPCC approach. These, along with activity data, were included in a transparent spreadsheet format. Using UK-DNDC, the estimate of N2O-N emission from UK current agricultural practice in 1990 was 50.9 Gg. This total comprised 31.7 Gg from the soil sector, 5.9 Gg from animals and 13.2 Gg from the indirect sector. The range of this estimate (using the range of soil organic C for each soil used) was 30.5-62.5 Gg N. Estimates of emissions in each sector were compared to those calculated using the IPCC default methodology. Emissions from the soil and indirect sectors were smaller with the UK-DNDC approach than with the IPCC methodology, while emissions from the animal sector were larger. The model runs suggested a relatively large emission from agricultural land that was not attributable to current agricultural practices (33.8 Gg in total, 27.4 Gg from the soil sector). This 'background' component is partly the result of historical agricultural land use. It is not normally included in inventories of emission, but would increase the total emission of N2O-N from agricultural land in 1990 to 78.3 Gg. (Author)

  8. Development and application of a mechanistic model to estimate emission of nitrous oxide from UK agriculture

    Science.gov (United States)

    Brown, L.; Syed, B.; Jarvis, S. C.; Sneath, R. W.; Phillips, V. R.; Goulding, K. W. T.; Li, C.

    A mechanistic model of N2O emission from agricultural soil (DeNitrification-DeComposition, DNDC) was modified for application to the UK, and was used as the basis of an inventory of N2O emission from UK agriculture in 1990. UK-specific input data were added to DNDC's database and the ability to simulate daily C and N inputs from grazing animals and applied animal waste was added to the model. The UK version of the model, UK-DNDC, simulated emissions from 18 different crop types on the 3 areally dominant soils in each county. Validation of the model at the field scale showed that predictions matched observations well. Emission factors for the inventory were calculated from estimates of N2O emission from UK-DNDC, in order to maintain direct comparability with the IPCC approach. These, along with activity data, were included in a transparent spreadsheet format. Using UK-DNDC, the estimate of N2O-N emission from UK current agricultural practice in 1990 was 50.9 Gg. This total comprised 31.7 Gg from the soil sector, 5.9 Gg from animals and 13.2 Gg from the indirect sector. The range of this estimate (using the range of soil organic C for each soil used) was 30.5-62.5 Gg N. Estimates of emissions in each sector were compared to those calculated using the IPCC default methodology. Emissions from the soil and indirect sectors were smaller with the UK-DNDC approach than with the IPCC methodology, while emissions from the animal sector were larger. The model runs suggested a relatively large emission from agricultural land that was not attributable to current agricultural practices (33.8 Gg in total, 27.4 Gg from the soil sector). This 'background' component is partly the result of historical agricultural land use. It is not normally included in inventories of emission, but would increase the total emission of N2O-N from agricultural land in 1990 to 78.3 Gg.

  9. Estimating energy impacts of residential and commercial building development. A manual for the Pacific Northwest and Alaska

    Energy Technology Data Exchange (ETDEWEB)

    1979-02-22

    This energy-impact manual presents information on energy implications of new building design and operation, providing a reasonably accurate means of assessing the total energy impact of new construction in the commercial and residential sectors. While developed specifically for the states of Alaska, Idaho, Oregon, and Washington, much of the data used are national averages; the procedures described are applicable to other regions of the nation, with appropriate adjustments for climatic differences. The manual is organized into three parts, each covering one aspect of the energy impacts of building development. Part I addresses the energy impact of erecting the building(s). This includes the energy cost of grading and excavating and other site preparation. It also takes into account the energy embodied in the fabrication of materials used in building construction, as well as the energy cost of transporting materials to the site and assembling them. Part II focuses on the end use of energy during normal building operation, i.e., the energy consumed for space heating, cooling, lighting, water heating, etc. A simplified calculation sequence is provided which allows the user to estimate the consumption of most combinations of building orientation, characteristics, and operating conditions. Part III examines the relationship of land use to energy consumption, principally the transportation energy impact of various land-development patterns, the embodied energy impacts of infrastructure requirements, and the impacts of various orientation and siting schemes. (MCW)
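
The manual's three-part accounting can be sketched as a single sum: one-time construction energy (Part I), operating energy over the building life (Part II), and land-use/transport energy (Part III). All figures and the assumed lifetime below are placeholders, not values from the manual.

```python
def total_energy_impact(embodied_mj, transport_mj, construction_mj,
                        annual_operating_mj, annual_transport_mj,
                        lifetime_yr=30):
    """Total energy impact of a development:
    Part I  - one-time embodied, materials-transport, and site/assembly energy
    Part II - recurring operating energy (heating, cooling, lighting, etc.)
    Part III - recurring transportation energy from the land-use pattern
    All inputs in MJ; values used below are invented for illustration.
    """
    one_time = embodied_mj + transport_mj + construction_mj
    recurring = (annual_operating_mj + annual_transport_mj) * lifetime_yr
    return one_time + recurring
```

Separating one-time from recurring terms mirrors the manual's structure and makes the lifetime assumption explicit, since Part II usually dominates over a multi-decade horizon.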

  10. Revised cost savings estimate with uncertainty for enhanced sludge washing of underground storage tank waste

    International Nuclear Information System (INIS)

    DeMuth, S.

    1998-01-01

    Enhanced Sludge Washing (ESW) has been selected to reduce the amount of sludge-based underground storage tank (UST) high-level waste at the Hanford site. During the past several years, studies have been conducted to determine the cost savings derived from the implementation of ESW. The tank waste inventory and ESW performance continue to be revised as characterization and development efforts advance. This study provides a new cost savings estimate based upon the most recent inventory and ESW performance revisions, and includes an estimate of the associated cost uncertainty. Whereas the author's previous cost savings estimates for ESW were compared against no sludge washing, this study assumes the baseline to be simple water washing, which more accurately reflects the retrieval activity alone. The revised ESW cost savings estimate for all UST waste at Hanford is $6.1B ± $1.3B at 95% confidence. This is based upon capital and operating cost savings, but does not include development costs. The development costs are assumed negligible since they should be at least an order of magnitude less than the savings. The overall cost savings uncertainty was derived from process performance uncertainties and baseline remediation cost uncertainties, as determined by the author's engineering judgment
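
One common way to attach a 95% interval to a cost-savings difference, combining independent and roughly normal uncertainty components in quadrature, can be sketched as below. The dollar figures and sigmas are invented for illustration; the study derived its interval from process-performance and baseline-cost uncertainties via engineering judgment, not from these numbers.

```python
import math

def savings_with_uncertainty(baseline_cost, esw_cost, u_baseline, u_esw):
    """Point savings (baseline minus ESW cost) and the 95% half-width,
    assuming the two cost uncertainties are independent and roughly
    normal; u_* are 1-sigma values. All numbers illustrative ($B).
    """
    savings = baseline_cost - esw_cost
    sigma = math.sqrt(u_baseline ** 2 + u_esw ** 2)
    return savings, 1.96 * sigma

s, hw = savings_with_uncertainty(20.0, 13.9, 0.5, 0.4)  # $B, hypothetical
```

Because the components add in quadrature, the half-width is driven by the larger of the two cost uncertainties rather than their plain sum.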

  11. Development of a probabilistic safety assessment framework for an interim dry storage facility subjected to an aircraft crash using best-estimate structural analysis

    International Nuclear Information System (INIS)

    Almomani, Belal; Jang, Dong Chan; Lee, Sang Hoon; Kang, Hyun Gook

    2017-01-01

    Using a probabilistic safety assessment, a risk evaluation framework for an aircraft crash into an interim spent fuel storage facility is presented. Damage evaluation of a detailed generic cask model in a simplified building structure under an aircraft impact is discussed through a numerical structural analysis and an analytical fragility assessment. Sequences of the impact scenario are shown in a developed event tree, with uncertainties considered in the impact analysis and failure probabilities calculated. To evaluate the influence of parameters relevant to design safety, risks are estimated for three specification levels of cask and storage facility structures. The proposed assessment procedure includes the determination of the loading parameters, reference impact scenario, structural response analyses of facility walls, cask containment, and fuel assemblies, and a radiological consequence analysis with dose–risk estimation. The risk results for the proposed scenario in this study are expected to be small relative to those of design basis accidents for best-estimated conservative values. The importance of this framework is seen in its flexibility to evaluate the capability of the facility to withstand an aircraft impact and in its ability to anticipate potential realistic risks; the framework also provides insight into epistemic uncertainty in the available data and into the sensitivity of the design parameters for future research
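
The event-tree quantification implied above, sequence frequency times the product of conditional failure probabilities times the radiological consequence, can be sketched minimally as follows; all numbers are invented rather than taken from the study's impact analyses.

```python
def sequence_risk(impact_freq_per_yr, branch_probs, dose_sv):
    """Risk contribution of one event-tree sequence:
    initiating-event frequency x conditional branch probabilities x
    radiological consequence. Values below are illustrative only.
    """
    p = 1.0
    for b in branch_probs:
        p *= b
    return impact_freq_per_yr * p * dose_sv

# Hypothetical sequence: aircraft impact -> wall perforation -> cask breach
risk = sequence_risk(1.0e-7, branch_probs=[0.1, 0.01], dose_sv=0.5)
```

Summing such contributions over all sequences of the event tree gives the total risk figure compared against design-basis accidents in the study.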

  12. Development of a Probabilistic Safety Assessment Framework for an Interim Dry Storage Facility Subjected to an Aircraft Crash Using Best-Estimate Structural Analysis

    Directory of Open Access Journals (Sweden)

    Belal Almomani

    2017-03-01

    Using a probabilistic safety assessment, a risk evaluation framework for an aircraft crash into an interim spent fuel storage facility is presented. Damage evaluation of a detailed generic cask model in a simplified building structure under an aircraft impact is discussed through a numerical structural analysis and an analytical fragility assessment. Sequences of the impact scenario are shown in a developed event tree, with uncertainties considered in the impact analysis and failure probabilities calculated. To evaluate the influence of parameters relevant to design safety, risks are estimated for three specification levels of cask and storage facility structures. The proposed assessment procedure includes the determination of the loading parameters, reference impact scenario, structural response analyses of facility walls, cask containment, and fuel assemblies, and a radiological consequence analysis with dose–risk estimation. The risk results for the proposed scenario in this study are expected to be small relative to those of design basis accidents for best-estimated conservative values. The importance of this framework is seen in its flexibility to evaluate the capability of the facility to withstand an aircraft impact and in its ability to anticipate potential realistic risks; the framework also provides insight into epistemic uncertainty in the available data and into the sensitivity of the design parameters for future research.

  13. Development of a probabilistic safety assessment framework for an interim dry storage facility subjected to an aircraft crash using best-estimate structural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Almomani, Belal; Jang, Dong Chan [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Lee, Sang Hoon [Dept. of Mechanical and Automotive Engineering, Keimyung University, Daegu (Korea, Republic of); Kang, Hyun Gook [Dept. of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy (United States)

    2017-03-15

    Using a probabilistic safety assessment, a risk evaluation framework for an aircraft crash into an interim spent fuel storage facility is presented. Damage evaluation of a detailed generic cask model in a simplified building structure under an aircraft impact is discussed through a numerical structural analysis and an analytical fragility assessment. Sequences of the impact scenario are shown in a developed event tree, with uncertainties considered in the impact analysis and failure probabilities calculated. To evaluate the influence of parameters relevant to design safety, risks are estimated for three specification levels of cask and storage facility structures. The proposed assessment procedure includes the determination of the loading parameters, reference impact scenario, structural response analyses of facility walls, cask containment, and fuel assemblies, and a radiological consequence analysis with dose–risk estimation. The risk results for the proposed scenario in this study are expected to be small relative to those of design basis accidents for best-estimated conservative values. The importance of this framework is seen in its flexibility to evaluate the capability of the facility to withstand an aircraft impact and in its ability to anticipate potential realistic risks; the framework also provides insight into epistemic uncertainty in the available data and into the sensitivity of the design parameters for future research.

  14. Development of a faulty reactivity detection system applying a digital H∞ estimator

    International Nuclear Information System (INIS)

    Suzuki, Katsuo; Suzudo, Tomoaki; Nabeshima, Kunihiko

    2004-01-01

    This paper concerns an application of a digital optimal H∞ estimator to the detection of faulty reactivity in real time. The detection system, fundamentally based on the reactivity balance method, is composed of three modules: the net reactivity estimator, the feedback reactivity estimator, and the reactivity balance circuit. H∞ optimal filters are used for the two reactivity estimators, and the nonlinear neutronics are taken into consideration especially in the design of the net reactivity estimator. A series of performance tests of the detection system was conducted using numerical simulations of reactor dynamics with the insertion of a faulty reactivity for the experimental fast breeder reactor JOYO. The system detects the typical artificial reactivity insertions within a few seconds with no stationary offset and an accuracy of 0.1 cent, and is satisfactory for practical use. (author)
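
The reactivity balance circuit described above reduces to checking whether the net reactivity estimate is explained by the feedback reactivity estimate. A minimal sketch of that residual check follows, using the paper's 0.1-cent accuracy as an assumed alarm threshold and invented sample values; the H∞ filters that produce the two input series are not reproduced here.

```python
def reactivity_balance(net_rho, feedback_rho, threshold_cents=0.1):
    """Reactivity-balance anomaly check: flag samples where the
    estimated net reactivity is not explained by the estimated
    feedback reactivity. Inputs in cents; purely illustrative.
    """
    residual = [n - f for n, f in zip(net_rho, feedback_rho)]
    alarms = [abs(r) > threshold_cents for r in residual]
    return residual, alarms
```

In the real system the residual is formed continuously, so a faulty insertion shows up as a persistent nonzero residual rather than a single flagged sample.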

  15. Novel equations to estimate lean body mass in maintenance hemodialysis patients.

    Science.gov (United States)

    Noori, Nazanin; Kovesdy, Csaba P; Bross, Rachelle; Lee, Martin; Oreopoulos, Antigone; Benner, Deborah; Mehrotra, Rajnish; Kopple, Joel D; Kalantar-Zadeh, Kamyar

    2011-01-01

    Lean body mass (LBM) is an important nutritional measure representing muscle mass and somatic protein in hemodialysis patients, for whom we developed and tested equations to estimate LBM. A study of diagnostic test accuracy. The development cohort included 118 hemodialysis patients with LBM measured using dual-energy x-ray absorptiometry (DEXA) and near-infrared (NIR) interactance. The validation cohort included 612 additional hemodialysis patients with LBM measured using a portable NIR interactance technique during hemodialysis. 3-month averaged serum concentrations of creatinine, albumin, and prealbumin; normalized protein nitrogen appearance; midarm muscle circumference (MAMC); handgrip strength; and subjective global assessment of nutrition. LBM measured using DEXA in the development cohort and NIR interactance in validation cohorts. In the development cohort, DEXA and NIR interactance correlated strongly (r = 0.94, P < 0.001). DEXA-measured LBM correlated with serum creatinine level, MAMC, and handgrip strength, but not with other nutritional markers. Three regression equations to estimate DEXA-measured LBM were developed based on each of these 3 surrogates and sex, height, weight, and age (and urea reduction ratio for the serum creatinine regression). In the validation cohort, the validity of the equations was tested against the NIR interactance-measured LBM. The equation estimates correlated well with NIR interactance-measured LBM (R² ≥ 0.88), although in higher LBM ranges, they tended to underestimate it. Median (95% confidence interval) differences and interquartile range for differences between equation estimates and NIR interactance-measured LBM were 3.4 (-3.2 to 12.0) and 3.0 (1.1-5.1) kg for serum creatinine and 4.0 (-2.6 to 13.6) and 3.7 (1.3-6.0) kg for MAMC, respectively. DEXA measurements were obtained on a nondialysis day, whereas NIR interactance was performed during hemodialysis treatment, with the likelihood of confounding by volume status
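
The form of the regression equations described, LBM predicted from one surrogate (here serum creatinine) plus sex, height, weight, and age, can be sketched with ordinary least squares on synthetic data. The coefficients, units, and data below are entirely hypothetical and are not the published equations.

```python
import numpy as np

# Hypothetical development cohort: creatinine (mg/dL), sex (1=M),
# height (cm), weight (kg), age (yr); outcome stands in for DEXA LBM (kg).
rng = np.random.default_rng(0)
n = 118
X = np.column_stack([
    rng.uniform(5, 15, n),      # serum creatinine
    rng.integers(0, 2, n),      # sex
    rng.uniform(150, 190, n),   # height
    rng.uniform(50, 100, n),    # weight
    rng.uniform(20, 80, n),     # age
])
true_beta = np.array([0.8, 4.0, 0.25, 0.35, -0.05])  # invented coefficients
y = -25.0 + X @ true_beta + rng.normal(0, 2.0, n)

# Ordinary least squares with intercept, mirroring the equations' structure
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_lbm(creat, sex, height, weight, age, b=beta_hat):
    """Apply the fitted (hypothetical) equation to one patient."""
    return b[0] + b[1]*creat + b[2]*sex + b[3]*height + b[4]*weight + b[5]*age
```

The published work fitted three such equations (creatinine, MAMC, handgrip) against DEXA and then validated them against NIR interactance; this sketch only shows the fitting step.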

  16. Improvement of prediction ability for genomic selection of dairy cattle by including dominance effects.

    Directory of Open Access Journals (Sweden)

    Chuanyu Sun

    Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both
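
The additive-plus-dominance models described above rest on two genotype codings per SNP: an additive allele count (0/1/2) and a dominance indicator for heterozygotes. A minimal sketch of those design matrices follows; these are the standard codings, not the study's exact relationship-matrix construction.

```python
import numpy as np

def design_matrices(genotypes):
    """Build additive and dominance design matrices from SNP genotypes.

    Additive coding: allele count 0/1/2.
    Dominance coding: 1 for heterozygotes, 0 for homozygotes.
    Rows are animals, columns are SNPs.
    """
    G = np.asarray(genotypes)
    additive = G.astype(float)
    dominance = (G == 1).astype(float)
    return additive, dominance

# Two animals, three SNPs (toy data)
A, D = design_matrices([[0, 1, 2], [1, 1, 0]])
```

A model with both effects then takes the form y = mu + A·a + D·d + e, which is why the two codings lead to different relationship matrices for the variance-component estimation.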

  17. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem; hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts: Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others; these constitute the most important part of the thesis and are presented in Part II. A brief outline of the thesis follows below. Numerical aspects concerning calculations of derivatives will also be discussed. Based on the introduction to regularization given in Chapter 2, methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement

  18. Equations for estimating stand establishment, release, and thinning costs in the Lake States.

    Science.gov (United States)

    Jeffrey T. Olson; Allen L. Lundgren; Dietmar Rose

    1978-01-01

    Equations for estimating project costs for certain silvicultural treatments in the Lake States have been developed from project records of public forests. Treatments include machine site preparation, hand planting, aerial spraying, prescribed burning, manual release, and thinning.

  19. Estimation of the base temperature and growth phase duration in terms of thermal time for four grapevine cultivars

    Science.gov (United States)

    Zapata, D.; Salazar, M.; Chaves, B.; Keller, M.; Hoogenboom, G.

    2015-12-01

    Thermal time models have been used to predict the development of many different species, including grapevine (Vitis vinifera L.). These models normally assume a linear relationship between temperature and plant development. The goal of this study was to estimate the base temperature and the duration, in terms of thermal time, for predicting veraison for four grapevine cultivars. Historical phenological data for four cultivars collected in the Pacific Northwest were used to develop the thermal time model. Base temperatures (Tb) of 0 and 10 °C and the best estimated Tb obtained using three different methods were evaluated for predicting veraison in grapevine. Thermal time requirements for each individual cultivar were evaluated through analysis of variance, and means were compared using Fisher's test. The methods applied to estimate Tb for the development of wine grapes included the least standard deviation in heat units, the regression coefficient, and the development rate method. The estimated Tb varied among methods and cultivars. The development rate method provided the lowest Tb values for all cultivars. For the three methods, Chardonnay had the lowest Tb, ranging from 8.7 to 10.7 °C, while the highest Tb values were obtained for Riesling and Cabernet Sauvignon, with 11.8 and 12.8 °C, respectively. Thermal time also differed among cultivars, whether the fixed or the estimated Tb was used. Predictions of the beginning of ripening with the estimated temperature resulted in the lowest variation in real days compared with predictions using Tb = 0 or 10 °C, regardless of the method used to estimate Tb.
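
The least-standard-deviation method named above can be sketched directly: for each candidate base temperature, accumulate thermal time to veraison in each season and pick the Tb that minimizes the standard deviation across seasons. The temperature series below are synthetic, constructed so that a known Tb of 10 °C governs development; they are not the study's phenological data.

```python
import numpy as np

def thermal_time(daily_tmean, tb):
    """Accumulated growing degree-days above base temperature tb."""
    return float(np.sum(np.maximum(np.asarray(daily_tmean) - tb, 0.0)))

def best_tb_least_sd(seasons, candidates):
    """Least-standard-deviation method: choose the Tb minimizing the SD
    of thermal time to veraison across seasons. Each entry of `seasons`
    is the daily mean temperature series up to veraison for one year.
    """
    sds = [np.std([thermal_time(s, tb) for s in seasons]) for tb in candidates]
    return candidates[int(np.argmin(sds))]

# Synthetic seasons: development is driven by a true Tb of 10 degC and a
# fixed thermal-time requirement, so the method should recover Tb near 10.
rng = np.random.default_rng(1)
true_tb, requirement = 10.0, 800.0
seasons = []
for _ in range(12):
    temps = 18 + rng.uniform(-3, 3) + rng.normal(0, 1.5, 200)
    gdd = np.cumsum(np.maximum(temps - true_tb, 0.0))
    days = int(np.searchsorted(gdd, requirement)) + 1
    seasons.append(temps[:days])

tb = best_tb_least_sd(seasons, candidates=np.arange(0.0, 15.5, 0.5))
```

Intuitively, at the true Tb the accumulated heat units to veraison are nearly constant from year to year, so their spread is smallest there; too-low or too-high candidates reintroduce year-to-year variation.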

  20. Development and Application of Watershed Regressions for Pesticides (WARP) for Estimating Atrazine Concentration Distributions in Streams

    Science.gov (United States)

    Larson, Steven J.; Crawford, Charles G.; Gilliom, Robert J.

    2004-01-01

Regression models were developed for predicting atrazine concentration distributions in rivers and streams, using the Watershed Regressions for Pesticides (WARP) methodology. Separate regression equations were derived for each of nine percentiles of the annual distribution of atrazine concentrations and for the annual time-weighted mean atrazine concentration. In addition, seasonal models were developed for two specific periods of the year--the high season, when the highest atrazine concentrations are expected in streams, and the low season, when concentrations are expected to be low or undetectable. Various nationally available watershed parameters were used as explanatory variables, including atrazine use intensity, soil characteristics, hydrologic parameters, climate and weather variables, land use, and agricultural management practices. Concentration data from 112 river and stream stations sampled as part of the U.S. Geological Survey's National Water-Quality Assessment and National Stream Quality Accounting Network Programs were used for computing the concentration percentiles and mean concentrations used as the response variables in regression models. Tobit regression methods, using maximum likelihood estimation, were used for developing the models because some of the concentration values used for the response variables were censored (reported as less than a detection threshold). Data from 26 stations not used for model development were used for model validation. The annual models accounted for 62 to 77 percent of the variability in concentrations among the 112 model development stations. Atrazine use intensity (the amount of atrazine used in the watershed divided by watershed area) was the most important explanatory variable in all models, but additional watershed parameters significantly increased the amount of variability explained by the models. Predicted concentrations from all 10 models were within a factor of 10 of the observed concentrations at most stations.
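Tobit regression handles the censored response values by mixing two kinds of likelihood terms: observed concentrations contribute the normal density, while values reported as below the detection threshold contribute the probability mass under that threshold (the normal CDF). A minimal single-predictor sketch of such a log-likelihood (simplified relative to the multi-variable WARP models; names and the `None` censoring convention are illustrative):

```python
import math

def tobit_loglik(params, x, y, detect_limit):
    # Log-likelihood for a left-censored linear model y ~ b0 + b1*x.
    # Observed values contribute the normal log-density; censored values
    # (marked None, i.e. "< detection limit") contribute log Phi(z).
    b0, b1, sigma = params
    ll = 0.0
    for xi, yi in zip(x, y):
        mu = b0 + b1 * xi
        if yi is None:  # censored observation
            z = (detect_limit - mu) / sigma
            ll += math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
        else:  # fully observed concentration
            z = (yi - mu) / sigma
            ll += -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))
    return ll
```

Maximum likelihood estimation then maximizes this function over `(b0, b1, sigma)` with a numerical optimizer.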

  1. Development of an Estimating Procedure for the Annual PLAN Process - with Special Emphasis on the Estimating Group

    International Nuclear Information System (INIS)

    Lichtenberg, Steen

    2003-01-01

This research study deals with the PLAN 2000 procedure. This complex annual estimating procedure is based on the Swedish law on financing, 1992:1537. It requires the Swedish Nuclear Power Inspectorate, SKI, to submit to the Government a fully supported annual proposal for the following year's unit fee for nuclear-generated electricity to be paid by the owners of the Swedish nuclear power plants. The function of this Fund, KAF, is to finance the future Swedish decommissioning programme. The underlying reason for the study is current criticism of the existing procedure, not least of the composition and working conditions of the analysis group. The purpose of the study is to improve the procedure. The aim is (1) to maximise the realism and neutrality of the necessary estimates in order to allow the KAF Fund to grow steadily at the current rate to the desired target size, allowing it to pay all relevant costs associated with this large decommissioning programme; (2) to do this with a controlled degree of safety; (3) to improve the transparency of the whole procedure in order to avoid any distrust of the procedure and its results. The scope covers all technical and statistical issues, and to some degree also the directly related organisational aspects, notably in respect of the present law and its administration. However, some details are dealt with which seem contrary to the aim of the law. Since 1996, SKI has delegated to the Swedish Nuclear Fuel and Waste Management Co., SKB, the task of performing the basic part of the necessary annual estimating procedure. SKI has then evaluated and supplemented the base estimate before the drafting of the final proposals for the Government and the Board of the Fund, KAFS. Some basic requirements are crucial to the quality of the result of the study: (1) full identification of all potential sources of major uncertainty and the subsequent correct handling of these, (2) balanced and unbiased quantitative evaluation of uncertain

  2. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    Science.gov (United States)

    Wessel, P.

    2013-12-01

Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain. In addition, there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering (DiM) techniques. We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells, and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
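A one-dimensional analogue of the regional/residual separation idea behind median filtering is easy to sketch: a running median of a bathymetry profile approximates the broad swell (regional), and subtracting it leaves the superimposed volcanic load (residual). This is a simplified illustration, not the authors' DiM implementation:

```python
import statistics

def median_residual(profile, window):
    # Moving-median regional/residual separation of a 1-D depth profile.
    # The running median tracks the broad swell; narrow edifices that fit
    # inside the window survive in the residual.
    half = window // 2
    regional = []
    for i in range(len(profile)):
        lo, hi = max(0, i - half), min(len(profile), i + half + 1)
        regional.append(statistics.median(profile[lo:hi]))
    return [p - r for p, r in zip(profile, regional)]
```

The real directional version applies such filters along and across the chain azimuth on 2-D grids and propagates the spread of the filtered values into error estimates.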

  3. Development of simple kinetic models and parameter estimation for ...

    African Journals Online (AJOL)

    PANCHIGA

    2016-09-28

    Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ..... SDS-PAGE and showed the same molecular size as.

  4. Estimation of local and external contributions of biomass burning to PM2.5 in an industrial zone included in a large urban settlement.

    Science.gov (United States)

    Benetello, Francesca; Squizzato, Stefania; Hofer, Angelika; Masiol, Mauro; Khan, Md Badiuzzaman; Piazzalunga, Andrea; Fermo, Paola; Formenton, Gian Maria; Rampazzo, Giancarlo; Pavoni, Bruno

    2017-01-01

A total of 85 PM2.5 samples were collected at a site located in a large industrial zone (Porto Marghera, Venice, Italy) during a 1-year-long sampling campaign. Samples were analyzed to determine water-soluble inorganic ions, elemental and organic carbon, and levoglucosan, and results were processed to investigate the seasonal patterns, the relationships between the analyzed species, and the most probable sources by using a set of tools, including (i) conditional probability function (CPF), (ii) conditional bivariate probability function (CBPF), (iii) concentration weighted trajectory (CWT), and (iv) potential source contribution function (PSCF) analyses. Furthermore, the contribution of biomass combustion to PM2.5 was also estimated. Average PM2.5 concentrations were 54 and 16 μg m-3 in the cold and warm periods, respectively. The mean value of total ions was 11 μg m-3 (range 1-46 μg m-3): the most abundant ion was nitrate with a share of 44%, followed by sulfate (29%), ammonium (14%), potassium (4%), and chloride (4%). Levoglucosan accounted for 1.2% of the PM2.5 mass, and its concentration ranged from a few ng m-3 in warm periods to 2.66 μg m-3 during winter. Average concentrations of levoglucosan during the cold period were higher than those found in other European urban sites. This result may indicate a strong influence of biomass combustion on particulate matter pollution. Elemental and organic carbon (EC, OC) showed similar behavior, with the highest contributions during cold periods and lower contributions during summer. The ratios between biomass burning indicators (K+, Cl-, NO3-, SO42-, levoglucosan, EC, and OC) were used as proxies for the biomass burning estimation, and the contributions to OC and PM2.5 were also calculated by using the levoglucosan (LG)/OC and LG/PM2.5 ratios; they were estimated to be 29% and 18%, respectively.
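The tracer-scaling step behind such estimates divides the measured levoglucosan by an assumed source emission ratio to reconstruct the biomass-burning mass, then expresses it as a fraction of the measured total. A minimal sketch (the emission ratio passed in is entirely hypothetical and in practice is source-profile dependent):

```python
def biomass_burning_oc_share(levoglucosan, oc, lg_oc_emission_ratio):
    # Scale the levoglucosan tracer (µg/m3) by an assumed LG/OC emission
    # ratio for wood smoke to reconstruct biomass-burning OC, then express
    # it as a fraction of the measured organic carbon.
    return (levoglucosan / lg_oc_emission_ratio) / oc
```

The same construction with an LG/PM2.5 emission ratio yields the biomass-burning share of total PM2.5 mass.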

  5. Development, Validation, and Verification of a Self-Assessment Tool to Estimate Agnibala (Digestive Strength).

    Science.gov (United States)

    Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2017-01-01

According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala. The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to the 18- to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni. The tool, therefore, may be used to screen the normal population for possible susceptibility to certain health conditions. © The Author(s) 2016.

  6. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.; Bruns, D.D.

    1982-01-01

A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels, including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balances for thorium (a less valuable material than uranium) during steady-state process operation.
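The simple material balance models described above can be reduced to a toy sketch: holdup is prior inventory plus transfers in minus transfers out, and the composition of the collected material is the mass-weighted average of the incoming streams. Function names and the two-component streams are assumptions for illustration, not PREMATH code:

```python
def vessel_holdup(initial_mass, transfers_in, transfers_out):
    # Steady-state material balance for one vessel:
    # holdup = prior inventory + inputs - outputs
    return initial_mass + sum(transfers_in) - sum(transfers_out)

def mixed_fraction(streams):
    # Mass-weighted average precious-metal fraction of the material
    # collected in a vessel; streams are (mass, fraction) pairs.
    total = sum(m for m, _ in streams)
    return sum(m * f for m, f in streams) / total
```

A container-oriented inventory file would then record the balance and mixed composition per vessel until analytical samples replace the estimates.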

  7. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    Krichinsky, A.M.

    1981-01-01

A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. NUMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels, including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, NUMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated measured material balance for thorium during steady-state process operation.

  8. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007–08

    OpenAIRE

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-01-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was es...

  9. A Fresh Start for Flood Estimation in Ungauged Basins

    Science.gov (United States)

    Woods, R. A.

    2017-12-01

The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers are active in developing or applying the derived distribution approach, and as a result slower progress is being made. A change is needed: surely we have learned enough about hydrology in the last 40 years that we can make a practical hydrological advance on our methods for flood estimation in ungauged basins.

  10. 24 CFR 3500.7 - Good faith estimate.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Good faith estimate. 3500.7 Section 3500.7 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued... DEVELOPMENT REAL ESTATE SETTLEMENT PROCEDURES ACT § 3500.7 Good faith estimate. (a) Lender to provide. (1...

  11. estimating formwork striking time for concrete mixes estimating

    African Journals Online (AJOL)

    eobe

    In this study, we estimated the time for strength development in concrete cured up to 56 days. Water. In this .... regression analysis using MS Excel 2016 Software performed on the ..... [1] Abolfazl, K. R, Peroti S. and Rahemi L 'The Effect of.

  12. Comparison of density estimators. [Estimation of probability density functions

    Energy Technology Data Exchange (ETDEWEB)

    Kao, S.; Monahan, J.F.

    1977-09-01

    Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
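As a concrete example of one family of estimators compared in such work, a Gaussian kernel density estimator can be written in a few lines (a textbook sketch, not code from the report):

```python
import math

def kde(sample, x, bandwidth):
    # Gaussian kernel density estimate at point x:
    # average of kernels centered on each observation, scaled by bandwidth.
    n = len(sample)
    def kernel(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(kernel((x - xi) / bandwidth) for xi in sample) / (n * bandwidth)
```

Small-sample performance hinges on the bandwidth parameter, which is exactly the kind of sensitivity the comparison above set out to measure.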

  13. Estimating patterns in Spartina alterniflora belowground biomass within salt marshes

    Science.gov (United States)

    O'Connell, J. L.; Mishra, D. R.; Alber, M.; Byrd, K. B.

    2017-12-01

Belowground biomass of marsh plants such as Spartina alterniflora helps prevent marsh loss because it promotes soil accretion, stabilizes soils, and adds organic matter. However, site-wide estimates of belowground biomass are difficult to obtain because root:shoot ratios vary considerably both within species and across sites. We are working to develop a data fusion tool that can predict key characteristics of S. alterniflora, including belowground biomass and plant canopy N, based on satellite imagery. We used field observations from four salt marsh locations along the Georgia Coast, including one that is studied as part of the Georgia Coastal Ecosystems LTER project. From field and remote-sensing data, we developed a hybrid modeling approach to estimate % foliar N (a surrogate for plant-assimilated nutrients). Partial least squares (PLS) regression analysis of Landsat-8 spectral bands could predict variation in foliar N and belowground biomass, suggesting this public data source might be utilized for site-wide assessment of plant biophysical variables in salt marshes. Spectrally estimated foliar N and aboveground biomass were associated with belowground biomass and root:shoot ratio in S. alterniflora. This mirrors results from a previous study from the Sacramento-San Joaquin Delta, CA, on Schoenoplectus acutus, a marsh plant found in some tidal freshwater marshes. Therefore, remote sensing may be a useful tool for measuring whole-plant productivity among multiple coastal marsh species.
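The study itself uses PLS regression on several Landsat-8 bands; as a much-simplified stand-in, a single spectral index regressed against foliar N by ordinary least squares illustrates the idea (the index choice and function names are assumptions, not the authors' method):

```python
def ndvi(nir, red):
    # Normalized difference vegetation index from NIR and red reflectance
    return (nir - red) / (nir + red)

def fit_line(xs, ys):
    # Ordinary least squares for y = intercept + slope * x (closed form)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope
```

PLS generalizes this by projecting many correlated bands onto a few latent components before regressing, which is why it suits full multispectral inputs.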

  14. Development and test validation of a computational scheme for high-fidelity fluence estimations of the Swiss BWRs

    International Nuclear Information System (INIS)

    Vasiliev, A.; Wieselquist, W.; Ferroukhi, H.; Canepa, S.; Heldt, J.; Ledergerber, G.

    2011-01-01

One of the current objectives within reactor analysis related projects at the Paul Scherrer Institut is the establishment of a comprehensive computational methodology for fast neutron fluence (FNF) estimations of reactor pressure vessels (RPV) and internals for both PWRs and BWRs. In the recent past, such an integral calculational methodology based on the CASMO-4/SIMULATE-3/MCNPX system of codes was developed for PWRs and validated against RPV scraping tests. Based on the very satisfactory validation results, the methodology was recently applied for predictive FNF evaluations of a Swiss PWR to support the national nuclear safety inspectorate in the framework of life-time estimations. Today, focus at PSI is on developing a corresponding advanced methodology for high-fidelity FNF estimations of BWR reactors. In this paper, the preliminary steps undertaken in that direction are presented. To start, the concepts of the PWR computational scheme and its transfer/adaptation to BWRs are outlined. Then, the modelling of a Swiss BWR characterized by very heterogeneous core designs is presented along with preliminary sensitivity studies carried out to assess the level of detail required for the complex core region. Finally, a first validation test case is presented on the basis of two dosimeter monitors irradiated during two recent cycles of the given BWR reactor. The achieved computational results show a satisfactory agreement with measured dosimeter data and thereby illustrate the feasibility of applying the PSI FNF computational scheme also to BWRs. Further sensitivity/optimization studies are nevertheless necessary in order to consolidate the scheme and to continuously increase the fidelity and reliability of the BWR FNF estimations. (author)

  15. Development of iodimetric redox method for routine estimation of ascorbic acid from fresh fruit and vegetables

    International Nuclear Information System (INIS)

    Munir, M.; Baloch, A. K.; Khan, W. A.; Ahmad, F.; Jamil, M.

    2013-01-01

The iodimetric method (Im) is developed for rapid estimation of ascorbic acid from fresh fruit and vegetables. The efficiency of Im was compared with the standard dye method (Dm) utilizing a variety of model solutions and aqueous extracts from fresh fruit and vegetables of different colors. The Im presented consistently accurate and precise results for colorless to colored model solutions and for fruit/vegetable extracts, with standard deviation (Stdev) in the ranges of ±0.013-±0.405 and ±0.019-±0.428, respectively, with no significant difference between the replicates. The Dm also worked satisfactorily for colorless model solutions and extracts (Stdev range ±0.235-±0.309) while producing unsatisfactory results (±0.464-±3.281) for colored counterparts. Severe discrepancies/overestimates continued to pile up (52% to 197%) when estimating the nutrient from high (3.0 mg/10 mL) to low (0.5 mg/10 mL) concentration levels, respectively. On the basis of precision and reliability, the Im technique is suggested for adoption in general laboratories for routine estimation of ascorbic acid from fruit and vegetables possessing any shade. (author)

  16. Estimation of iodine nutrition and thyroid function status in late-gestation pregnant women in the United States: Development and application of a population-based pregnancy model

    Energy Technology Data Exchange (ETDEWEB)

    Lumen, A., E-mail: Annie.Lumen@fda.hhs.gov [Division of Biochemical Toxicology, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR 72079 (United States); George, N.I., E-mail: Nysia.George@fda.hhs.gov [Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR 72079 (United States)

    2017-01-01

Previously, a deterministic biologically-based dose-response (BBDR) pregnancy model was developed to evaluate moderate thyroid axis disturbances with and without thyroid-active chemical exposure in a near-term pregnant woman and fetus. In the current study, the existing BBDR model was adapted to include a wider functional range of iodine nutrition, including more severe iodine deficiency conditions, and to incorporate empirically the effects of homeostatic mechanisms. The extended model was further developed into a population-based model and was constructed using a Monte Carlo-based probabilistic framework. In order to characterize total (T4) and free (fT4) thyroxine levels for a given iodine status at the population level, the distribution of iodine intake for late-gestation pregnant women in the U.S. was reconstructed using various reverse dosimetry methods and available biomonitoring data. The range of median (mean) iodine intake values resulting from the three different methods of reverse dosimetry tested was 196.5–219.9 μg of iodine/day (228.2–392.9 μg of iodine/day). There was minimal variation in model-predicted maternal serum T4 and fT4 levels from use of the three reconstructed distributions of iodine intake; the ranges of geometric means for T4 and fT4 were 138–151.7 nmol/L and 7.9–8.7 pmol/L, respectively. The average value of the ratio of the 97.5th percentile to the 2.5th percentile equaled 3.1 and agreed well with similar estimates from recent observations in third-trimester pregnant women in the U.S. In addition, the reconstructed distributions of iodine intake allowed us to estimate nutrient inadequacy for late-gestation pregnant women in the U.S. via the probability approach. The prevalence of iodine inadequacy for third-trimester pregnant women in the U.S. was estimated to be between 21% and 44%. Taken together, the current work provides an improved tool for evaluating iodine nutritional status and the corresponding thyroid function
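The probability approach to estimating inadequacy prevalence can be illustrated with a toy Monte Carlo draw: sample a population of intakes from an assumed distribution and count the fraction falling below a requirement cutoff. The lognormal parameters and the cutoff below are hypothetical placeholders, not the paper's reconstructed values:

```python
import random

random.seed(42)

# Hypothetical requirement cutoff (µg iodine/day) for illustration only
REQUIREMENT = 160.0

# Draw a synthetic population of daily iodine intakes from an assumed
# lognormal distribution (placeholder parameters)
intakes = [random.lognormvariate(5.4, 0.5) for _ in range(100_000)]

# Probability approach: prevalence of inadequacy is the fraction of the
# population whose usual intake falls below the requirement
prevalence = sum(intake < REQUIREMENT for intake in intakes) / len(intakes)
```

With these placeholder parameters the prevalence comes out near a quarter of the population; the paper's 21-44% range instead reflects the intake distributions reconstructed by reverse dosimetry.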

  17. Simplifying cardiovascular risk estimation using resting heart rate.

    LENUS (Irish Health Repository)

    Cooney, Marie Therese

    2010-09-01

    Elevated resting heart rate (RHR) is a known, independent cardiovascular (CV) risk factor, but is not included in risk estimation systems, including Systematic COronary Risk Evaluation (SCORE). We aimed to derive risk estimation systems including RHR as an extra variable and assess the value of this addition.

  18. An Accurate FFPA-PSR Estimator Algorithm and Tool for Software Effort Estimation

    Directory of Open Access Journals (Sweden)

    Senthil Kumar Murugesan

    2015-01-01

Full Text Available Software companies are now keen to provide secure software with respect to the accuracy and reliability of their products, especially in relation to software effort estimation. Therefore, there is a need to develop a hybrid tool which provides all the necessary features. This paper proposes a hybrid estimator algorithm and model which incorporates quality metrics, a reliability factor, and a security factor with fuzzy-based function point analysis. Initially, this method utilizes a fuzzy-based estimate to control the uncertainty in the software size with the help of a triangular fuzzy set at the early development stage. Secondly, the function point analysis is extended by the security and reliability factors in the calculation. Finally, the performance metrics are added to the effort estimation for accuracy. The experimentation is done with different project data sets on the hybrid tool, and the results are compared with those of existing models. It shows that the proposed method not only improves the accuracy but also increases the reliability, as well as the security, of the product.
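The triangular-fuzzy size step and the multiplicative factor adjustments can be sketched as follows; this is a hypothetical rendering of the approach described above (the centroid defuzzification and the multiplicative factor form are assumptions, not the paper's exact formulas):

```python
def fuzzy_size_estimate(low, mode, high):
    # Defuzzify a triangular fuzzy estimate of software size
    # (function points) by taking the centroid of the triangle
    return (low + mode + high) / 3.0

def adjusted_effort(function_points, productivity, reliability=1.0, security=1.0):
    # Scale nominal effort (size x productivity, e.g. person-hours per FP)
    # by multiplicative reliability and security adjustment factors
    return function_points * productivity * reliability * security
```

Early in development, the wide (low, mode, high) spread encodes size uncertainty; as requirements firm up the triangle narrows and the estimate converges.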

  19. Nitrogen concentration estimation with hyperspectral LiDAR

    Directory of Open Access Journals (Sweden)

    O. Nevalainen

    2013-10-01

Full Text Available Agricultural lands have a strong impact on global carbon dynamics and nitrogen availability. Monitoring changes in agricultural lands requires more efficient and accurate methods. The first prototype of a full waveform hyperspectral Light Detection and Ranging (LiDAR) instrument has been developed at the Finnish Geodetic Institute (FGI). The instrument efficiently combines the benefits of passive and active remote sensing sensors. It is able to produce 3D point clouds with spectral information included for every point, which offers great potential in the field of remote sensing of the environment. This study investigates the performance of the hyperspectral LiDAR instrument in nitrogen estimation. The investigation was conducted by finding vegetation indices sensitive to nitrogen concentration using hyperspectral LiDAR data and validating their performance in nitrogen estimation. The nitrogen estimation was performed by applying 28 published vegetation indices to ten oat samples grown in different fertilization conditions. Reference data were acquired by laboratory nitrogen concentration analysis. The performance of the indices in nitrogen estimation was determined by linear regression and leave-one-out cross-validation. The results indicate that the hyperspectral LiDAR instrument holds a good capability to estimate plant biochemical parameters such as nitrogen concentration. The instrument holds much potential in various environmental applications and provides a significant improvement to the remote sensing of the environment.
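Leave-one-out cross-validation of a single-index linear model, as used to evaluate the indices, can be sketched as follows (an illustrative stdlib implementation, not the study's code):

```python
import math

def loocv_rmse(xs, ys):
    # Leave-one-out cross-validation RMSE for simple linear regression of
    # a response (e.g. nitrogen concentration) on one vegetation index:
    # refit on all-but-one sample, predict the held-out sample, repeat.
    sq_errs = []
    for i in range(len(xs)):
        tx = [x for j, x in enumerate(xs) if j != i]
        ty = [y for j, y in enumerate(ys) if j != i]
        n = len(tx)
        mx, my = sum(tx) / n, sum(ty) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(tx, ty))
                 / sum((x - mx) ** 2 for x in tx))
        intercept = my - slope * mx
        sq_errs.append((ys[i] - (intercept + slope * xs[i])) ** 2)
    return math.sqrt(sum(sq_errs) / len(sq_errs))
```

Ranking the 28 indices by this held-out RMSE guards against an index that merely overfits the ten calibration samples.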

  20. Consequence estimation for decontaminated sites and facilities

    International Nuclear Information System (INIS)

    Niemczyk, S.J.

    1988-01-01

    To aid the US EPA's selection of decommissioning criteria for unrestricted release of cleaned up sites and facilities, a new approach has been developed for estimating the potential hazard from residual radioactivity. That approach, intended to provide conservatively realistic estimates of radiation doses to individual residents from such radioactivity in the environment and in buildings, uses a comprehensive yet relatively simple set of physically-based risk-level environmental transport and exposure pathway models. Doses are estimated for up to 10,000 years. Radioactive decay and ingrowth are explicitly accounted for. Compared to some other approaches, the new approach has several outstanding features. First, some of its models are less conservative than the comparable models in other approaches. Second, the new approach includes models for estimating certain doses in multi-room buildings. Third, the approach's integrated set of transport and behavior models permits straightforward consideration of situations with significant movement of radioactivity within the environment and/or significant radioactive ingrowth. Fourth, the approach's efficient solution techniques, combined with its comprehensive set of transport and behavior models, make consideration of many situations practical. And fifth, the associated computer code runs on a personal computer. The new approach constitutes a significant first step toward a set of comprehensive relationships for providing dose and health risk estimates for residual radioactivity at a variety of sites and facilities

  1. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    Science.gov (United States)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  2. Estimating end of life liabilities for plant licensing and financial planning for similarly configured stations

    International Nuclear Information System (INIS)

    Griffiths, G.; Kennard, J.

    2008-01-01

    Ontario Power Generation (OPG) is required to update estimated waste management and decommissioning costs on a 5-year cycle within the Canadian Nuclear Safety Commission's (CNSC) regulatory framework and provide a decommissioning cost update and provisions funds status to the Province of Ontario on a similar five-year cycle under the terms of the Ontario Nuclear Funds Agreement (ONFA). The following is an overview of the important steps used to develop OPG's nuclear power station decommissioning cost estimates, including a discussion of the responsibilities of both the owner (OPG) and estimating services vendor (TLG Services). This presentation is related to decommissioning estimating for multi-unit stations; therefore the discussion will be focused on identifying those activities that may be particularly impacted by multi-unit configuration or multiple stations. It should be noted that simultaneously developing decommissioning estimates for multiple multi-unit stations creates the opportunity to achieve economies of scale to more efficiently produce the estimates, and enables the owner and vendor to compare results between stations to identify inconsistencies. However, without careful attention to detail at the planning and execution stage, it also creates the potential liability of backtracking and developing the estimate multiple times, should significant assumptions be revised in mid-project. (authors)

  3. Estimating end of life liabilities for plant licensing and financial planning for similarly configured stations

    Energy Technology Data Exchange (ETDEWEB)

    Griffiths, G. [TLG Services, Inc. an Entergy Nuclear Co. (United States); Kennard, J. [Ontario Power Generation Inc. (Canada)

    2008-07-01

    Ontario Power Generation (OPG) is required to update estimated waste management and decommissioning costs on a 5-year cycle within the Canadian Nuclear Safety Commission's (CNSC) regulatory framework and provide a decommissioning cost update and provisions funds status to the Province of Ontario on a similar five-year cycle under the terms of the Ontario Nuclear Funds Agreement (ONFA). The following is an overview of the important steps used to develop OPG's nuclear power station decommissioning cost estimates, including a discussion of the responsibilities of both the owner (OPG) and estimating services vendor (TLG Services). This presentation is related to decommissioning estimating for multi-unit stations; therefore the discussion will be focused on identifying those activities that may be particularly impacted by multi-unit configuration or multiple stations. It should be noted that simultaneously developing decommissioning estimates for multiple multi-unit stations creates the opportunity to achieve economies of scale to more efficiently produce the estimates, and enables the owner and vendor to compare results between stations to identify inconsistencies. However, without careful attention to detail at the planning and execution stage, it also creates the potential liability of backtracking and developing the estimate multiple times, should significant assumptions be revised in mid-project. (authors)

  4. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

    We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indices are the strongest. The validation of all functions means all can be applied to Quaternary pollen records from these three areas with confidence.
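Leave-one-out cross-validation of a pollen-climate calibration, the evaluation used above, can be sketched with plain weighted averaging (a simpler relative of the weighted-averaging partial least squares regression actually used). The sites, taxa, and temperatures below are toy values, not data from the Australian databases:

```python
def wa_calibrate(abundances, climate):
    """Taxon optima as abundance-weighted averages of the climate variable."""
    n_taxa = len(abundances[0])
    optima = []
    for j in range(n_taxa):
        num = sum(site[j] * c for site, c in zip(abundances, climate))
        den = sum(site[j] for site in abundances)
        optima.append(num / den)
    return optima

def wa_predict(site, optima):
    """Predicted climate: abundance-weighted average of the taxon optima."""
    return sum(a * o for a, o in zip(site, optima)) / sum(site)

def loo_rmse(abundances, climate):
    """Leave-one-out cross-validated root-mean-square error of prediction."""
    errs = []
    for i in range(len(abundances)):
        train_ab = abundances[:i] + abundances[i + 1:]
        train_cl = climate[:i] + climate[i + 1:]
        optima = wa_calibrate(train_ab, train_cl)
        errs.append(wa_predict(abundances[i], optima) - climate[i])
    return (sum(e * e for e in errs) / len(errs)) ** 0.5

# Toy data: 4 sites x 3 taxa along a mean-annual-temperature gradient.
sites = [[10, 1, 0], [6, 4, 1], [2, 6, 4], [0, 2, 9]]
temps = [8.0, 11.0, 14.0, 17.0]
rmse = loo_rmse(sites, temps)
```

As in the paper, the held-out prediction error, not the within-sample fit, is what indicates how robust a transfer function will be on fossil samples.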

  5. Behaviour of radionuclides in meadows including countermeasures application

    International Nuclear Information System (INIS)

    Prister, B.S.; Ivanov, Yu.A.; Perepelyatnikov, G.P.; Il'yn, M.I.; Belli, M.; Sanzharova, N.I.; Fesenko, S.V.; Alexakhin, R.M.; Bunzl, K.; Petriaev, E.P.; Sokolik, G.A.

    1996-01-01

    The main regularities of the behaviour of ChNPP-released radionuclides in components of meadow ecosystems are considered. A mathematical model developed for radionuclide migration in components of meadow ecosystems is discussed, a radioecological classification of meadow ecosystems is proposed, and the effectiveness of countermeasure application in meadow ecosystems is estimated

  6. Methods for developing useful estimates of the costs associated with birth defects.

    Science.gov (United States)

    Case, Amy P; Canfield, Mark A

    2009-11-01

    Cost estimates for birth defects are useful to policy makers in deciding the best use of resources to prevent these conditions. Much of the effort in this area has focused on spina bifida, in part because cost savings can be estimated from folic acid-preventable cases. However, comprehensive cost-of-illness estimates for this condition may be too outdated, too general, or not applicable to individual states' environments. Using the live birth prevalence for spina bifida in Texas, we applied recent spina bifida cost estimates to approximate total lifetime medical and other costs for an average live birth cohort of spina bifida cases in Texas. In addition, we queried various government programs that provide services for persons with spina bifida to provide program-specific annual costs for this condition. Applying a recently published average lifetime medical cost of $635,000 per case of spina bifida to the average annual birth cohort of 120 Texas cases, an estimated $76 million in direct and indirect medical and other costs will be incurred in Texas over the life span of that cohort. Examples of estimated medical costs for one year are $5 million for infants using actual employer-paid insurance claims data and $6 million combined for children in two public sector programs. Stakeholders and state policy makers may look to state birth defects registries for useful cost data. Although comprehensive state-specific figures are not available, applying prevalence data to existing estimates and obtaining actual claims and program expenditures may help close this information gap.
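The headline figure can be reproduced directly from the numbers quoted in the abstract:

```python
# Average published lifetime cost per spina bifida case, applied to the
# average annual Texas birth cohort of 120 cases (figures from the abstract).
cost_per_case = 635_000        # USD, lifetime medical and other costs
annual_cohort = 120            # Texas live births with spina bifida per year

cohort_lifetime_cost = cost_per_case * annual_cohort
print(f"${cohort_lifetime_cost / 1e6:.1f} million")  # → $76.2 million
```

Rounded, this is the "estimated $76 million" over the life span of one annual cohort.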

  7. Estimation of CO2 emission from water treatment plant--model development and application.

    Science.gov (United States)

    Kyung, Daeseung; Kim, Dongwook; Park, Nosuk; Lee, Woojin

    2013-12-15

    A comprehensive mathematical model developed for this study was used to compare estimates of on-site and off-site CO2 emissions, from conventional and advanced water treatment plants (WTPs). When 200,000 m(3) of raw water at 10 NTU (Nephelometric Turbidity Unit) was treated by a conventional WTP to 0.1 NTU using aluminum sulfate as a coagulant, the total CO2 emissions were estimated to be 790 ± 228 (on-site) and 69,596 ± 3950 (off-site) kg CO2e/d. The emissions from an advanced WTP containing micro-filtration (MF) membrane and ozone disinfection processes, treating the same raw water to 0.005 NTU, were estimated to be 395 ± 115 (on-site) and 38,197 ± 2922 (off-site) kg CO2e/d. The on-site CO2 emissions from the advanced WTP were half those from the conventional WTP due to much lower use of coagulant. On the other hand, off-site CO2 emissions due to consumption of electricity were 2.14 times higher for the advanced WTP, due to the demands for operation of the MF membrane and ozone disinfection processes. However, the lower use of chemicals in the advanced WTP decreased off-site CO2 emissions related to chemical production and transportation. Overall, total CO2 emissions from the conventional WTP were 1.82 times higher than those from the advanced WTP. A sensitivity analysis was performed for the advanced WTP to suggest tactics for simultaneously reducing CO2 emissions further and enhancing water quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
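The comparative figures quoted above can be cross-checked directly:

```python
# Emission figures from the abstract, in kg CO2e/day (central estimates only).
conventional = {"on_site": 790, "off_site": 69_596}
advanced = {"on_site": 395, "off_site": 38_197}

total_conv = sum(conventional.values())   # 70,386 kg CO2e/d
total_adv = sum(advanced.values())        # 38,592 kg CO2e/d

ratio = total_conv / total_adv
print(f"conventional/advanced total emissions ratio: {ratio:.2f}")  # → 1.82
```

The totals confirm the paper's summary: the conventional plant emits 1.82 times the advanced plant's total, even though its off-site electricity-related emissions are lower.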

  8. Radiation doses to patients in computed tomography including a ready reckoner for dose estimation

    International Nuclear Information System (INIS)

    Szendroe, G.; Axelsson, B.; Leitz, W.

    1995-11-01

    The radiation burden from CT examinations is still growing in most countries and has reached a considerable part of the total from medical diagnostic x-ray procedures. Efforts to avoid excess radiation doses are therefore especially well motivated within this field. A survey of CT examination techniques practised in Sweden showed that standard settings for the exposure variables are used for the vast majority of examinations. Virtually no adjustments to the patient's differences in anatomy have been performed - even for infants and children, on average, the same settings have been used. The adjustment of the exposure variables to the individual anatomy offers a large potential for dose savings. Amongst the imaging parameters, a change of the radiation dose will primarily influence the noise. As a starting point it is assumed that, irrespective of the patient's anatomy, the same level of noise can be accepted for a certain diagnostic task. To a large extent the noise level is determined by the number of photons that are registered in the detector. Hence, for different patient sizes and anatomies, the exposure should be adjusted so that the same transmitted photon fluence is achieved. An appendix with a ready reckoner for dose estimation for CT scanners used in Sweden is attached. 7 refs, 5 figs, 8 tabs
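The equal-noise rule above, adjusting exposure so that the transmitted photon fluence stays constant, can be sketched with a simple exponential-attenuation model. The attenuation coefficient below is an illustrative soft-tissue value, not one taken from the report:

```python
import math

def mas_scale(depth_cm, ref_depth_cm, mu_per_cm=0.19):
    """Relative tube-current-time product (mAs) needed to keep the
    transmitted photon fluence, and hence the image noise, constant when
    patient diameter changes.  With transmission N = N0*exp(-mu*d), the
    incident fluence must scale by exp(mu*(d - d_ref)).

    mu_per_cm is an illustrative effective attenuation coefficient for
    soft tissue at CT energies, not a value from the report."""
    return math.exp(mu_per_cm * (depth_cm - ref_depth_cm))

# A patient 4 cm slimmer than the reference adult needs far less exposure:
print(f"{mas_scale(28.0, 32.0):.2f}")  # → 0.47 of the standard mAs
```

This is the kind of relationship a ready reckoner tabulates: a few centimetres of patient diameter translate into a factor of two in required exposure.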

  9. Development of a Bristow-Campbell Model for Estimating Global Solar Radiation in the Junín Region, Peru

    Directory of Open Access Journals (Sweden)

    Dr. Becquer Frauberth Camayo-Lapa

    2015-11-01

    Full Text Available In order to provide a tool for estimating monthly and annual global solar radiation on a horizontal surface in the Junín region, where measured data are not available, the Bristow-Campbell (1984) model for estimating monthly average global solar radiation was adapted. To develop the model, the modelling technique proposed by Espinoza (2010) was followed: daily maximum and minimum temperatures were recorded at 19 weather stations, and the equations proposed by Solar High Peru (2003) were adapted to the model. The Bristow-Campbell model was developed with data recorded at the Santa Ana, Tarma and Satipo stations, belonging to the Sierra and Selva regions respectively. The profitability of solar applications was assessed using the OLADE (1992) criterion that solar radiation above 4.0 kWh/m2/day is profitable and above 5.0 kWh/m2/day very profitable. The results indicate that the monthly average global solar radiation in the Junín region is 5.3 kWh/m2/day, corresponding to 4.2 kWh/m2/day in the Selva and 5.6 kWh/m2/day in the Sierra; solar applications are therefore less profitable in the Selva and very profitable in the Sierra. In addition, the model is simple to operate and available to all users. We conclude that the adapted Bristow-Campbell model is a highly useful instrument for generating a comprehensive database of available solar radiation in the Junín region.
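The underlying Bristow-Campbell (1984) relation estimates daily global radiation from the diurnal temperature range. A minimal sketch, with illustrative coefficients rather than the values fitted for the Junín stations:

```python
import math

def bristow_campbell(h0, delta_t, a=0.7, b=0.004, c=2.4):
    """Bristow-Campbell (1984) estimate of daily global solar radiation on
    a horizontal surface from the diurnal temperature range delta_t (deg C):

        H = H0 * A * (1 - exp(-B * delta_t**C))

    h0 is the extraterrestrial radiation (here in kWh/m2/day); A, B, C are
    empirically fitted coefficients.  The values used here are illustrative
    defaults, not the coefficients fitted for the Junin stations."""
    return h0 * a * (1.0 - math.exp(-b * delta_t ** c))

# Example: a high-altitude Sierra-like day with a 14 deg C temperature swing.
h = bristow_campbell(h0=9.5, delta_t=14.0)
```

Larger day-night temperature swings indicate clearer skies, so the estimate saturates toward the clear-sky fraction A of the extraterrestrial radiation.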

  10. Cost estimation for decommissioning: a review of current practice

    International Nuclear Information System (INIS)

    O'Sullivan, P.; Pescatore, C.

    2009-01-01

    It is now common practice for decommissioning plans and associated cost estimates to be prepared for all nuclear installations. Specific requirements are generally set out in regulations that have their basis in national legislation. These estimates are important for ensuring that the necessary funds are being collected to cover the actual costs of decommissioning the facility. The long time horizon for both amassing and disbursing these funds is a particular concern for national authorities. It is thus important to maintain a realistic estimate of the liabilities involved and to confirm the adequacy of the provisions to discharge them over time. Estimates of decommissioning costs have been performed and published by many organisations for many different purposes and applications. The results often vary because of differences in basic assumptions such as the choice of the decommissioning strategy (immediate vs. deferred), the availability of waste management pathways, the assumed end states of installations, the detailed definition of cost items, technical uncertainties, unforeseen events, the evolution of regulation and requirements. Many of these differences may be unavoidable since a reasonable degree of reliability and accuracy can only be achieved by developing decommissioning cost estimates on a case-by-case, site-specific basis. Moreover, even if considerable efforts are made to obtain reliable estimates, unforeseen events may cause estimates to go wrong. The issue of how to deal with uncertainties is therefore an important one, leading in turn to the need for risk management in terms of making adequate funding provisions. In March 2008, a questionnaire was circulated among the organisations participating in the NEA Decommissioning and Cost Estimation Group (DCEG). Information was collected on legal requirements and the responsibilities of the main parties concerned with the preparation and oversight of cost estimates, the main cost elements and associated

  11. Two Approaches to Estimating the Effect of Parenting on the Development of Executive Function in Early Childhood

    Science.gov (United States)

    Blair, Clancy; Raver, C. Cybele; Berry, Daniel J.

    2015-01-01

    In the current article, we contrast 2 analytical approaches to estimate the relation of parenting to executive function development in a sample of 1,292 children assessed longitudinally between the ages of 36 and 60 months. Children were administered a newly developed and validated battery of 6 executive function tasks tapping inhibitory control, working memory, and attention shifting. Residualized change analysis indicated that higher quality parenting, as indicated by higher scores on widely used measures of parenting at both earlier and later time points, predicted more positive gain in executive function at 60 months. Latent change score models in which parenting and executive function over time were held to standards of longitudinal measurement invariance provided additional evidence of the association between change in parenting quality and change in executive function. In these models, cross-lagged paths indicated that in addition to parenting predicting change in executive function, executive function bidirectionally predicted change in parenting quality. Results were robust with the addition of covariates, including child sex, race, maternal education, and household income-to-need. Strengths and drawbacks of the 2 analytic approaches are discussed, and the findings are considered in light of emerging methodological innovations for testing the extent to which executive function is malleable and open to the influence of experience. PMID:23834294

  12. Informing climate policy given incommensurable benefits estimates

    International Nuclear Information System (INIS)

    Jacoby, H.D.

    2003-01-01

    framework for assessing climate benefits, and not a particular estimation method. The objective might have been the development of a single estimation procedure, perhaps one that came as close as possible to a measure directly comparable to cost estimates, with all benefits converted to a common monetary unit. Unfortunately, the complexities of the climate issue combine to rule against the formulation of a single, widely accepted measure of this type. Inevitably, governments will be confronted with sets of benefits estimates that are incommensurable, i.e. they will share no common basis for comparison. To deal with this difficulty, it is recommended here that the OECD support the development of a portfolio of benefits measures, structured to provide transparency when viewing alternative estimates. The development of such a portfolio is a research task, and an effort is made below to outline the work needed. To limit the scope of this discussion several important issues are laid aside. Most important, the benefits considered here are limited to the damage caused by climate change (net of any positive effects) that could be prevented by emissions mitigation. The accounting for adaptation costs, which arises mainly in the context of monetary estimates, are not treated in detail, as they are dealt with in other parts of this OECD project. It is simply assumed that estimates of climate damage (or the benefits of avoiding it) include the effects and costs of economic adaptation. Secondary or ancillary benefits of mitigation actions also are not considered. This last omission is an important one, for many of the issues raised about (net) climate damage apply as well to ancillary benefits and costs. And, although distributional issues will emerge, the discussion does not pretend to cover the range of concerns of developing countries or of sustainable development more broadly. Again, these issues are important, but they as well only add more dimensions to the problem of

  13. Development of a Probabilistic Technique for On-line Parameter and State Estimation in Non-linear Dynamic Systems

    International Nuclear Information System (INIS)

    Tunc Aldemir; Miller, Don W.; Hajek, Brian K.; Peng Wang

    2002-01-01

    The DSD (Dynamic System Doctor) is a system-independent, interactive software under development for on-line state/parameter estimation in dynamic systems (1), partially supported through a Nuclear Engineering Education (NEER) grant during 1998-2001. This paper summarizes the recent accomplishments in improving the user-friendliness and computational capability of DSD

  14. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground-simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software includes the spacecraft plant, controller, and actuator models, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance, and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.
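A signal generation model of this kind can be sketched as colored noise shaped to a target mean, variance, and low-pass power spectral density, here with a first-order autoregressive filter. All numerical values are illustrative, not identified from CAST data:

```python
import math
import random

def ar1_noise(n, mean, variance, phi=0.9, seed=1):
    """Generate an AR(1) sequence with a prescribed mean and variance.

    x[k] = phi*x[k-1] + w[k] has a low-pass power spectral density whose
    bandwidth is set by phi; the driving white noise is scaled so that the
    stationary variance matches the target.  phi = 0.9 is an illustrative
    correlation, not a value identified from the CAST estimation error."""
    rng = random.Random(seed)
    w_std = math.sqrt(variance * (1.0 - phi * phi))
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, w_std)
        out.append(mean + x)
    return out

# Hypothetical attitude-estimation-error signal: 0.01 mean, 4e-6 variance.
sig = ar1_noise(20000, mean=0.01, variance=4e-6)
```

Injecting such a sequence into the perfect-estimator simulation reproduces the statistical character of the higher-fidelity model's estimation error without rerunning it.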

  15. Detector development and background estimation for the observation of Coherent Neutrino Nucleus Scattering (CNNS)

    Energy Technology Data Exchange (ETDEWEB)

    Guetlein, Achim; Ciemniak, Christian; Feilitzsch, Franz von; Lanfranchi, Jean-Come; Oberauer, Lothar; Potzel, Walter; Roth, Sabine; Schoenert, Stefan; Sivers, Moritz von; Strauss, Raimund; Wawoczny, Stefan; Willers, Michael; Zoeller, Andreas [Technische Universitaet Muenchen, Physik-Department, E15 (Germany)

    2012-07-01

    The Coherent Neutrino Nucleus Scattering (CNNS) is a neutral current process of the weak interaction and is thus flavor independent. A low-energetic neutrino scatters off a target nucleus. For low transferred momenta the wavelength of the exchanged Z{sup 0} boson is comparable to the diameter of the target nucleus. Thus, the neutrino interacts with all nucleons coherently and the cross section for the CNNS is enhanced. To observe CNNS for the first time, we are developing cryogenic detectors with a target mass of about 10 g each and an energy threshold of less than 0.5 keV. The current status of this development is presented as well as the estimated background for an experiment in the vicinity of a nuclear power reactor as a strong neutrino source.
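The coherent enhancement described above can be illustrated numerically: for full coherence the cross section grows roughly with the square of the neutron number N of the target nucleus (the proton contribution is suppressed by the small weak charge of the proton). This is a rough sketch only, ignoring form factors and kinematics, and the nuclide choices are illustrative rather than taken from the abstract:

```python
# Relative coherent scattering rate per nucleus, in the rough N**2 scaling
# limit (neutron-number squared; protons contribute little to the weak charge).
def relative_coherent_rate(n_neutrons):
    return n_neutrons ** 2

# Tungsten (N = 110) vs. silicon-28 (N = 14) as hypothetical target nuclei:
enhancement = relative_coherent_rate(110) / relative_coherent_rate(14)
```

The roughly sixty-fold gain per nucleus is why heavy-element cryogenic targets are attractive despite their gram-scale masses.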

  16. Estimating the non-monetary burden of neurocysticercosis in Mexico.

    Directory of Open Access Journals (Sweden)

    Rachana Bhattarai

    Full Text Available BACKGROUND: Neurocysticercosis (NCC is a major public health problem in many developing countries where health education, sanitation, and meat inspection infrastructure are insufficient. The condition occurs when humans ingest eggs of the pork tapeworm Taenia solium, which then develop into larvae in the central nervous system. Although NCC is endemic in many areas of the world and is associated with considerable socio-economic losses, the burden of NCC remains largely unknown. This study provides the first estimate of disability adjusted life years (DALYs associated with NCC in Mexico. METHODS: DALYs lost for symptomatic cases of NCC in Mexico were estimated by incorporating morbidity and mortality due to NCC-associated epilepsy, and morbidity due to NCC-associated severe chronic headaches. Latin hypercube sampling methods were employed to sample the distributions of uncertain parameters and to estimate 95% credible regions (95% CRs). FINDINGS: In Mexico, 144,433 and 98,520 individuals are estimated to suffer from NCC-associated epilepsy and NCC-associated severe chronic headaches, respectively. A total of 25,341 (95% CR: 12,569-46,640 DALYs were estimated to be lost due to these clinical manifestations, with 0.25 (95% CR: 0.12-0.46 DALYs lost per 1,000 person-years, of which 90% was due to NCC-associated epilepsy. CONCLUSION: This is the first estimate of DALYs associated with NCC in Mexico. However, this value is likely to be underestimated since only the clinical manifestations of epilepsy and severe chronic headaches were included. In addition, due to limited country specific data, some parameters used in the analysis were based on systematic reviews of the literature or primary research from other geographic locations. Even with these limitations, our estimates suggest that healthy years of life are being lost due to NCC in Mexico.
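The uncertainty propagation used in the study, Latin hypercube sampling of uncertain parameters followed by reading off a median and 95% credible region, can be sketched in pure Python. The parameter ranges and the simplified burden formula below are invented for illustration, not the study's inputs:

```python
import random

def latin_hypercube(n_samples, n_params, seed=7):
    """Plain Latin hypercube sample on the unit hypercube: each parameter
    range is split into n_samples equal strata and each stratum is hit
    exactly once, in shuffled order."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_params):
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append(strata)
    return list(zip(*cols))

def daly_draw(u_cases, u_weight, u_duration):
    """Illustrative years-lived-with-disability style calculation:
    cases x disability weight x duration, with each uncertain parameter
    mapped from a unit-interval draw.  Ranges are invented for the sketch,
    not taken from the study."""
    cases = 100_000 + 50_000 * u_cases       # 100k-150k symptomatic cases
    weight = 0.05 + 0.10 * u_weight          # disability weight 0.05-0.15
    years = 0.5 + 1.0 * u_duration           # 0.5-1.5 years per case
    return cases * weight * years

draws = sorted(daly_draw(*u) for u in latin_hypercube(1000, 3))
median, lo, hi = draws[500], draws[25], draws[974]  # median and ~95% CR
```

Stratifying each parameter is what lets a modest number of samples cover the joint uncertainty space evenly, which is the appeal of Latin hypercube sampling over plain Monte Carlo here.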

  17. Development of estimates of dietary nitrates, nitrites, and nitrosamines for use with the Short Willet Food Frequency Questionnaire.

    Science.gov (United States)

    Griesenbeck, John S; Steck, Michelle D; Huber, John C; Sharkey, Joseph R; Rene, Antonio A; Brender, Jean D

    2009-04-06

    Studies have suggested that nitrates, nitrites, and nitrosamines have an etiologic role in adverse pregnancy outcomes and chronic diseases such as cancer. Although an extensive body of literature exists on estimates of these compounds in foods, the extant data varies in quality, quantified estimates, and relevance. We developed estimates of nitrates, nitrites, and nitrosamines for food items listed in the Short Willet Food Frequency Questionnaire (WFFQ) as adapted for use in the National Birth Defects Prevention Study. Multiple reference databases were searched for published literature reflecting nitrate, nitrite, and nitrosamine values in foods. Relevant published literature was reviewed; only publications reporting results for items listed on the WFFQ were selected for inclusion. The references selected were prioritized according to relevance to the U.S. population. Based on our estimates, vegetable products contain the highest levels of nitrate, contributing as much as 189 mg/serving. Meat and bean products contain the highest levels of nitrites with values up to 1.84 mg/serving. Alcohol, meat and dairy products contain the highest values of nitrosamines with a maximum value of 0.531 microg/serving. The estimates of dietary nitrates, nitrites, and nitrosamines generated in this study are based on the published values currently available. To our knowledge, these are the only estimates specifically designed for use with the adapted WFFQ and generated to represent food items available to the U.S. population. The estimates provided may be useful in other research studies, specifically in those exploring the relation between exposure to these compounds in foods and adverse health outcomes.

  18. International collaboration including patients is essential to develop new therapies for patients with myositis.

    Science.gov (United States)

    Lundberg, Ingrid E; Vencovsky, Jiri

    2017-05-01

    To discuss the need for international collaborations between investigators in different disciplines working on myositis and with patients with myositis. Recent advances in the detection of several myositis-specific autoantibodies that are associated with distinct clinical phenotypes will enable studies in new, well defined, clinically homogeneous subgroups of myositis. This is likely to lead to new information on molecular pathogenesis that might differ between myositis subgroups. Subgrouping patients according to autoantibody profile may also be important for assessing outcome, identifying prognostic biomarkers, and in clinical trials. As these are rare disorders, international collaboration is essential to enrol large enough cohorts of the subgroups. To facilitate such collaboration we have developed a web-based international myositis register, www.euromyositis.eu, which includes validated outcome measures and patient-reported outcome measures. This register is intended to support research but also decision-making in the clinic. We welcome investigators to join the Euromyositis register. Myositis is a heterogeneous disorder with varying treatment response and outcome. There is a high unmet need for new therapies, which can only be achieved by increased knowledge of molecular disease mechanisms. Subgrouping patients according to autoantibody profile may be a new way forward to a better understanding of disease mechanisms and to the development of novel therapies.

  19. Dose estimation in nuclear medicine patients: implementation of a calculation program and methodology

    International Nuclear Information System (INIS)

    Prieto, C.; Espana, M.L.; Tomasi, L.; Lopez Franco, P.

    1998-01-01

    Our hospital is developing a nuclear medicine quality assurance program in order to comply with the medical exposure Directive 97/43 EURATOM and the legal requirements established in our legislation. This program includes the quality control of equipment and, in addition, the estimation of doses to patients undergoing nuclear medicine examinations. This paper is focused on the second aspect, and presents a new computer program, developed in our Department, to estimate the absorbed dose in different organs and the effective dose to patients, based upon data from ICRP Publication 53 and its addendum. (Author) 16 refs

  20. Estimation of future outflows of e-waste in India

    International Nuclear Information System (INIS)

    Dwivedy, Maheshwar; Mittal, R.K.

    2010-01-01

    The purpose of this study is to construct an approach and a methodology to estimate the future outflows of electronic waste (e-waste) in India. The study utilizes a time-series multiple-lifespan end-of-life model proposed by Peralta and Fontanos for estimating the current and future quantities of e-waste in India. The model estimates future e-waste generation quantities by modeling usage and disposal. The present work considers two scenarios for the approximation of e-waste generation, based on user preferences to store or to recycle the e-waste. This model will help formal recyclers in India to make strategic decisions in planning for appropriate recycling infrastructure and institutional capacity building. An extension of the model proposed by Peralta and Fontanos is also developed, with the objective of helping decision makers conduct WEEE estimates under a variety of assumptions to suit their region of study. During 2007-2011, the total WEEE estimate will be around 2.5 million metric tons, which includes waste from personal computers (PCs), televisions, refrigerators and washing machines. During the said period, waste from PCs will account for 30% of the total units of WEEE generated.
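The time-series lifespan model can be sketched as a convolution of historical sales with a discrete lifespan distribution: units sold in year t are discarded in year t+k with probability given by the lifespan probability mass function. The sales figures and lifespan pmf below are invented for illustration, not the Indian data used in the study:

```python
def ewaste_outflow(sales_by_year, lifespan_pmf):
    """End-of-life outflow as the discrete convolution of sales with a
    lifespan distribution.  Multiple lifespans (first use, reuse, storage)
    can be modelled by widening or shifting the pmf."""
    horizon = len(sales_by_year) + len(lifespan_pmf) - 1
    outflow = [0.0] * horizon
    for t, sold in enumerate(sales_by_year):
        for k, p in enumerate(lifespan_pmf):
            outflow[t + k] += sold * p
    return outflow

# Hypothetical PC sales (million units/yr) and a 3-7 year lifespan pmf.
sales = [2.0, 2.5, 3.1, 3.8, 4.6]
pmf = [0.0, 0.0, 0.0, 0.1, 0.3, 0.35, 0.2, 0.05]
flows = ewaste_outflow(sales, pmf)
```

Because the pmf sums to one, every unit sold eventually appears in the outflow series, which is a useful mass-balance check on any such model.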

  1. Monte Carlo next-event point flux estimation for RCP01

    International Nuclear Information System (INIS)

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-01-01

    Two next-event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with a detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall-off is employed where averaging is not appropriate. The second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging is not needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty
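The flux-at-a-point tally underlying a next-event estimator can be sketched, for the simplest case of isotropic scattering in a homogeneous medium, in textbook form (this is not RCP01's detailed-energy implementation):

```python
import math

def next_event_contribution(weight, r, sigma_t):
    """Flux-at-a-point tally for one collision: an isotropically scattering
    collision at distance r from the detector, in a homogeneous medium with
    total macroscopic cross-section sigma_t, contributes

        w * exp(-sigma_t * r) / (4 * pi * r**2)

    The 1/r**2 factor is what makes the raw estimator's variance infinite
    for collisions near the detector, motivating the averaging scheme
    described in the abstract."""
    return weight * math.exp(-sigma_t * r) / (4.0 * math.pi * r * r)

# Contributions from a near and a far collision (unit particle weight):
c_near = next_event_contribution(1.0, 0.1, 0.5)
c_far = next_event_contribution(1.0, 2.0, 0.5)
```

Anisotropic and inelastic scattering generalize this by replacing the isotropic 1/(4π) with the scattering probability density toward the detector at the corresponding exit energy.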

  2. Evaluation of Rock Stress Estimation by the Kaiser effect

    International Nuclear Information System (INIS)

    Lehtonen, A.

    2005-11-01

    The knowledge of in situ stress is a key input parameter in many rock mechanics analyses. Information on stress allows the definition of boundary conditions for various modelling and engineering tasks. At present, the estimation of stresses in bedrock is one of the most difficult, time-consuming and expensive rock mechanical investigations, and the methods used today have not evolved significantly in many years. This creates a demand for novel, more economical and practical methods for stress estimation. In this study, one such method, the Kaiser effect based on acoustic emission of core samples, has been evaluated. It can be described as a 'memory' in rock that is indicated by a change in the acoustic emission emitted during a uniaxial loading test. The most tempting feature of this method is the ability to estimate the in situ stress state from core specimens in laboratory conditions, which yields considerable cost savings compared to laborious borehole measurements. The Kaiser effect has been studied as a means of determining in situ stresses for decades without any major success. However, recent studies in Australia and China have been promising and have made estimation of the stress tensor possible from differently oriented core samples. The aim of this work has been to develop a similar estimation method in Finland (including both equipment and data reduction), and to test it on samples obtained from Olkiluoto, Eurajoki. The developed measuring system proved to work well. The quality of the obtained data varied, but they were still interpretable. The results obtained from these tests were compared with results of previous overcoring measurements and showed quite good correlation. Thus, the results were promising, but the method still needs further development and more testing before a final decision on its feasibility can be made. (orig.)
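The Kaiser-effect reading itself, locating the stress at which acoustic emission resumes, can be sketched as a two-segment breakpoint fit to the cumulative AE curve: emission stays quiet until the previously experienced maximum stress is exceeded, then rises sharply. The data below are synthetic and purely illustrative:

```python
def kaiser_stress(stress, cum_ae):
    """Locate the Kaiser-effect onset as the breakpoint that best splits
    the cumulative acoustic-emission curve into two straight segments,
    using a brute-force two-segment least-squares fit."""
    def sse(xs, ys):
        # Sum of squared residuals of a straight-line fit to (xs, ys).
        n = len(xs)
        if n < 2:
            return 0.0
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        if sxx == 0:
            return 0.0
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        return sum((y - (my + b * (x - mx))) ** 2 for x, y in zip(xs, ys))

    best = min(range(2, len(stress) - 2),
               key=lambda i: sse(stress[:i], cum_ae[:i]) +
                             sse(stress[i:], cum_ae[i:]))
    return stress[best]

# Synthetic record: quiet below ~40 MPa, emitting strongly above it.
stress = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
cum_ae = [0, 1, 1, 2, 2, 3, 3, 4, 30, 60, 95, 130]
onset = kaiser_stress(stress, cum_ae)
```

Repeating such a reading on differently oriented cores is what allows the full stress tensor to be reconstructed, as in the Australian and Chinese studies cited above.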

  3. Study on the Leak Rate Estimation of SG Tubes and Residual Stress Estimation based on Plastic Deformation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Jin; Chang, Yoon Suk; Lee, Dock Jin; Lee, Tae Rin; Choi, Shin Beom; Jeong, Jae Uk; Yeum, Seung Won [Sungkyunkwan University, Seoul (Korea, Republic of)

    2009-02-15

    In this research project, a leak rate estimation model was developed for steam generator tubes with through-wall cracks. The modelling was based on leak data from 23 tube specimens. In addition, a finite element analysis procedure was developed for residual stress calculation of a dissimilar metal weld in a bottom-mounted instrumentation. The effect of geometric variables related to the residual stress in the penetration weld part was investigated using the developed analysis procedure. The key subjects dealt with in this research are: (1) development of a leak rate estimation model for steam generator tubes with through-wall cracks; (2) development of a program that can perform structural and leakage integrity evaluation for steam generator tubes; (3) development of an analysis procedure for bottom-mounted instrumentation weld residual stress; and (4) analysis of the effects of geometric variables on weld residual stress. It is anticipated that the technologies developed in this study are applicable to integrity estimation of steam generator tubes and weld parts in NPPs.

  4. Parametric cost estimation for space science missions

    Science.gov (United States)

    Lillie, Charles F.; Thompson, Bruce E.

    2008-07-01

    Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind", with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs of future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
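    A mass-based CER of the kind described above is commonly fitted as a power law in log-log space. The sketch below uses invented mass/cost pairs purely for illustration; the actual CERs, data, and functional forms used by the authors are not public in this abstract.

```python
import numpy as np

def fit_cer(masses, costs):
    """Fit a power-law cost estimating relationship, cost = a * mass**b,
    by ordinary least squares in log-log space."""
    b, log_a = np.polyfit(np.log(masses), np.log(costs), 1)
    return np.exp(log_a), b

# Hypothetical cost history: spacecraft dry mass (kg) vs. mission cost ($M)
mass = np.array([250.0, 500.0, 1000.0, 2000.0])
cost = np.array([80.0, 130.0, 210.0, 340.0])

a, b = fit_cer(mass, cost)
most_probable = a * 750.0**b   # most probable cost for a 750 kg concept
```

    An exponent below 1 reflects the economy of scale typically seen in such relationships; a cost range could be obtained by propagating the regression residuals around the most probable value.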

  5. Estimation of noise-free variance to measure heterogeneity.

    Directory of Open Access Journals (Sweden)

    Tilo Winkler

    Variance is a statistical parameter used to characterize heterogeneity or variability in data sets. However, measurements commonly include noise, as random errors superimposed on the actual value, which may substantially increase the variance compared to a noise-free data set. Our aim was to develop and validate a method to estimate noise-free spatial heterogeneity of pulmonary perfusion using dynamic positron emission tomography (PET) scans. On theoretical grounds, we demonstrate a linear relationship between the total variance of a data set derived from averages of n multiple measurements and the reciprocal of n. Using multiple measurements with varying n yields estimates of the linear relationship, including the noise-free variance as the constant parameter. In PET images, n is proportional to the number of registered decay events, and the variance of the image is typically normalized by the square of its mean value, yielding a squared coefficient of variation (CV²). The method was evaluated with a Jaszczak phantom as reference spatial heterogeneity (CVr²) for comparison with our estimate of noise-free or 'true' heterogeneity (CVt²). We found that CVt² was only 5.4% higher than CVr². Additional evaluations were conducted on 38 PET scans of pulmonary perfusion using ¹³NN-saline injection. The mean CVt² was 0.10 (range: 0.03-0.30), while the mean CV² including noise was 0.24 (range: 0.10-0.59). CVt² was on average 41.5% of the CV² measured including noise (range: 17.8-71.2%). The reproducibility of CVt² was evaluated using three repeated PET scans from five subjects. Individual CVt² values were within 16% of each subject's mean, and paired t-tests revealed no difference among the results from the three consecutive PET scans. In conclusion, our method provides reliable noise-free estimates of CVt² in PET scans, and may be useful for similar statistical problems in experimental data.
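    The core of the method is the linear relationship CV²(n) = CVt² + k/n: fitting measured CV² against 1/n and reading off the intercept gives the noise-free heterogeneity. A minimal numerical sketch with synthetic values (not the study's data):

```python
import numpy as np

def noise_free_cv2(n_events, cv2_measured):
    """Estimate noise-free heterogeneity CVt^2 as the intercept of a
    linear fit of measured CV^2 against 1/n."""
    slope, intercept = np.polyfit(1.0 / np.asarray(n_events, float),
                                  np.asarray(cv2_measured, float), 1)
    return intercept

# Synthetic example: true CVt^2 = 0.10, noise contribution k/n with k = 5000
n = np.array([2e4, 5e4, 1e5, 2e5])   # registered decay events per image
cv2 = 0.10 + 5000.0 / n              # measured CV^2 including noise
cv_t2 = noise_free_cv2(n, cv2)       # recovers 0.10
```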

  6. Estimation of interface resistivity in bonded Si for the development of high performance radiation detectors

    International Nuclear Information System (INIS)

    Kanno, Ikuo; Yamashita, Makoto; Nomiya, Seiichiro; Onabe, Hideaki

    2007-01-01

    For the development of high performance radiation detectors, direct bonding of Si wafers would be a useful method. Previously, p-n bonded Si wafers were fabricated and showed diode characteristics. The interface resistivity was, however, not investigated in detail. To study the interface resistivity, n-type Si wafers with different resistivities were bonded. The resistivity of the bonded Si wafers was measured, and the interface resistivity was estimated by comparison with the results of model calculations. (author)

  7. Development of a NSSS T/H Module for the YGN 1/2 NPP Simulator Using a Best-Estimate Code, RETRAN

    International Nuclear Information System (INIS)

    Seo, I. Y.; Lee, Y. K.; Jeun, G. D.; Suh, J. S.

    2005-01-01

    KEPRI (Korea Electric Power Research Institute) developed a realistic nuclear steam supply system thermal-hydraulic module, named the ARTS code, based on the best-estimate code RETRAN, for the improvement of the KNPEC (Korea Nuclear Plant Education Center) unit 2 full-scope simulator. In this work, we develop a nuclear steam supply system thermal-hydraulic module for the YGN 1/2 nuclear power plant simulator, drawing on practical experience from the ARTS code development. The ARTS code was developed based on RETRAN, a best-estimate code developed by EPRI (Electric Power Research Institute) for various transient analyses of NPPs (nuclear power plants). Robustness and real-time calculation capability have been improved by simplifications, removal of discontinuities in the physical correlations of the RETRAN code, and some other modifications. Its simulation scope has been extended by the addition of new calculation modules, such as a dedicated pressurizer relief tank model and a backup model. The supplement is developed so that users cannot recognize the model change from the main ARTS module.

  8. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  9. Development of the town data base: Estimates of exposure rates and times of fallout arrival near the Nevada Test Site

    International Nuclear Information System (INIS)

    Thompson, C.B.; McArthur, R.D.; Hutchinson, S.W.

    1994-09-01

    As part of the U.S. Department of Energy's Off-Site Radiation Exposure Review Project, the time of fallout arrival and the H+12 exposure rate were estimated for populated locations in Arizona, California, Nevada, and Utah that were affected by fallout from one or more nuclear tests at the Nevada Test Site. Estimates of exposure rate were derived from measured values recorded before and after each test by fallout monitors in the field. The estimate for a given location was obtained by retrieving from a data base all measurements made in the vicinity, decay-correcting them to H+12, and calculating an average. Estimates were also derived from maps produced after most events that show isopleths of exposure rate and time of fallout arrival. Both sets of isopleths on these maps were digitized, and kriging was used to interpolate values at the nodes of a 10-km grid covering the pattern. The values at any location within the grid were then estimated from the values at the surrounding grid nodes. Estimates of dispersion (standard deviation) were also calculated. The Town Data Base contains the estimates for all combinations of location and nuclear event for which the estimated mean H+12 exposure rate was greater than three times background. A listing of the data base is included as an appendix. The information was used by other project task groups to estimate the radiation dose that off-site populations and individuals may have received as a result of exposure to fallout from Nevada nuclear tests
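    The decay correction step described above can be sketched with the t^-1.2 approximation commonly applied to fresh fallout (the Way-Wigner rule); the exact correction procedure and the readings below are illustrative assumptions, not the project's data.

```python
def to_h12(rate, hours_after_shot, decay_exponent=1.2):
    """Decay-correct a measured exposure rate to H+12 hours using the
    t**-1.2 fallout decay approximation: R12 = R(t) * (t/12)**1.2."""
    return rate * (hours_after_shot / 12.0) ** decay_exponent

# Hypothetical monitor readings (mR/h) at one location: (hours, rate)
readings = [(24.0, 5.0), (48.0, 2.0), (36.0, 3.0)]
estimates = [to_h12(r, t) for t, r in readings]
h12_mean = sum(estimates) / len(estimates)   # averaged H+12 exposure rate
```

    A reading taken exactly at H+12 is returned unchanged, and later readings are scaled up to compensate for decay, so the averaged estimates from different times become comparable.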

  10. Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  11. Lower limb muscle volume estimation from maximum cross-sectional area and muscle length in cerebral palsy and typically developing individuals.

    Science.gov (United States)

    Vanmechelen, Inti M; Shortland, Adam P; Noble, Jonathan J

    2018-01-01

    Deficits in muscle volume may be a significant contributor to physical disability in young people with cerebral palsy. However, 3D measurements of muscle volume using MRI or 3D ultrasound may be difficult to make routinely in the clinic. We wished to establish whether accurate estimates of muscle volume could be made from a combination of anatomical cross-sectional area and length measurements in samples of typically developing young people and young people with bilateral cerebral palsy. Lower limb MRI scans were obtained from the lower limbs of 21 individuals with cerebral palsy (14.7±3 years, 17 male) and 23 typically developing individuals (16.8±3.3 years, 16 male). The volume, length and anatomical cross-sectional area were estimated for six muscles of the left lower limb. Analysis of covariance demonstrated that the relationship between length × cross-sectional area and volume did not differ significantly between the subject groups. Linear regression analysis demonstrated that the product of anatomical cross-sectional area and length bore a strong and significant relationship to the measured muscle volume (R² values between 0.955 and 0.988), with low standard errors of the estimates of 4.8 to 8.9%. This study demonstrates that muscle volume may be estimated accurately in typically developing individuals and individuals with cerebral palsy from a combination of anatomical cross-sectional area and muscle length. 2D ultrasound may be a convenient method of making these measurements routinely in the clinic. Copyright © 2017 Elsevier Ltd. All rights reserved.
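    The regression idea is simple: volume is regressed on the single predictor ACSA × length. The sketch below uses invented measurements (the study's own data and coefficients are not given in the abstract):

```python
import numpy as np

def fit_volume_model(acsa_cm2, length_cm, volume_cm3):
    """Linear regression of measured muscle volume on ACSA * length."""
    x = np.asarray(acsa_cm2) * np.asarray(length_cm)
    slope, intercept = np.polyfit(x, np.asarray(volume_cm3, float), 1)
    return slope, intercept

# Hypothetical measurements from a few muscles
acsa = [12.0, 20.0, 30.0, 45.0]       # anatomical cross-sectional area, cm^2
length = [25.0, 30.0, 32.0, 35.0]     # muscle length, cm
vol = [190.0, 380.0, 600.0, 980.0]    # MRI-measured volume, cm^3

slope, intercept = fit_volume_model(acsa, length, vol)
pred = slope * (25.0 * 31.0) + intercept   # estimate for ACSA=25, length=31
```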

  12. A method for estimation of fatigue properties from hardness of materials through construction of expert system

    International Nuclear Information System (INIS)

    Jeon, Woo Soo; Song, Ji Ho

    2001-01-01

    An expert system for estimating fatigue properties from simple tensile data of materials has been developed, considering nearly all important estimation methods proposed so far, i.e., seven estimation methods. The expert system is designed to be usable when only hardness data are available. The knowledge base is constructed with production rules and frames using an expert system shell, UNIK. Forward chaining is employed as the reasoning method. The expert system has three functions, including a function to update the knowledge base. The performance of the expert system was tested using 54 ε-N curves consisting of 381 ε-N data points obtained for 22 materials. It is found that the developed expert system performs excellently for steel materials and reasonably well for aluminum alloys.

  13. Regional estimation of extreme suspended sediment concentrations using watershed characteristics

    Science.gov (United States)

    Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy

    2010-01-01

    The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, while suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soil attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or the seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region, and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations have been identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe extreme SSC events. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
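    The first step of such an analysis, fitting a probability distribution to annual maxima and reading off return-period quantiles, can be sketched with a method-of-moments Gumbel fit. The distribution choice and the SSC values below are illustrative assumptions; the study compares several candidate distributions.

```python
import math

def gumbel_quantile(annual_maxima, return_period):
    """Return-period quantile from a Gumbel distribution fitted to annual
    maxima by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi      # scale parameter
    mu = mean - 0.5772 * beta                  # location (Euler-Mascheroni)
    p = 1.0 - 1.0 / return_period              # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual maximum SSC values (mg/L) at one gauged station
ssc_max = [310.0, 420.0, 280.0, 510.0, 390.0, 450.0, 350.0, 600.0]
q20 = gumbel_quantile(ssc_max, 20.0)   # 20-year return level
```

    In the regional step, quantiles like q20 from many gauged stations would become the response variable of a regression on watershed characteristics (clay fraction, precipitation intensity, forest cover), allowing prediction at ungauged basins.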

  14. Empirical estimation of default and asset correlation of large corporates and banks in India

    OpenAIRE

    Bandyopadhyay, Arindam; Ganguly, Sonali

    2011-01-01

    Estimation of default and asset correlation is crucial for banks to manage and measure portfolio credit risk. This would require studying the risk profile of the banks’ entire credit portfolio and developing the appropriate methodology for the estimation of default dependence. Measurement and management of correlation risk in the credit portfolio of banks has also become an important area of concern for bank regulators worldwide. The BCBS (2006) has specifically included an asset correlation ...

  15. Estimates of the Damage Costs of Climate Change. Part 1. Benchmark Estimates

    International Nuclear Information System (INIS)

    Tol, R.S.J.

    2002-01-01

    A selection of the potential impacts of climate change - on agriculture, forestry, unmanaged ecosystems, sea level rise, human mortality, energy consumption, and water resources - are estimated and valued in monetary terms. Estimates are derived from globally comprehensive, internally consistent studies using GCM-based scenarios. An underestimate of the uncertainty is given. New impact studies can be included following the meta-analytical methods described here. A 1°C increase in the global mean surface air temperature would have, on balance, a positive effect on the OECD, China, and the Middle East, and a negative effect on other countries. Confidence intervals of regionally aggregated impacts, however, include both positive and negative impacts for all regions. Global estimates depend on the aggregation rule. Using a simple sum, the world impact of a 1°C warming would be a positive 2% of GDP, with a standard deviation of 1%. Using globally averaged values, the world impact would be a negative 3% (standard deviation: 1%). Using equity weighting, the world impact would amount to 0% (standard deviation: 1%).
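    Why the aggregation rule can even flip the sign of the global estimate is easy to see numerically: equity weighting scales each region's impact by the ratio of world-average per-capita income to the region's income, so losses in poor regions count for more. The two-region world below is entirely hypothetical, chosen only to show the sign flip.

```python
def aggregate_impacts(regions):
    """Aggregate regional climate damage estimates (impact_pct is the
    regional impact as % of regional GDP) under two rules:
    a simple sum, and equity weighting by relative per-capita income."""
    world_gdp = sum(r["gdp"] for r in regions)
    world_pop = sum(r["pop"] for r in regions)
    avg_income = world_gdp / world_pop

    # Simple sum: add absolute impacts, express as % of world GDP
    simple = sum(r["impact_pct"] / 100 * r["gdp"]
                 for r in regions) / world_gdp * 100

    # Equity weighting: weight each region by avg income / regional income
    equity = sum((avg_income / (r["gdp"] / r["pop"]))
                 * r["impact_pct"] / 100 * r["gdp"]
                 for r in regions) / world_gdp * 100
    return simple, equity

regions = [
    {"gdp": 40.0, "pop": 1.0, "impact_pct": 2.0},   # rich region, net gain
    {"gdp": 10.0, "pop": 5.0, "impact_pct": -3.0},  # poor region, net loss
]
simple, equity = aggregate_impacts(regions)   # positive vs. negative world impact
```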

  16. Urban forest biomass estimates: is it important to use allometric relationships developed specifically for urban trees? 

    Science.gov (United States)

    M.R. McHale; I.C. Burke; M.A. Lefsky; P.J. Peper; E.G. McPherson

    2009-01-01

    Many studies have analyzed the benefits, costs, and carbon storage capacity associated with urban trees. These studies have been limited by a lack of research on urban tree biomass, such that estimates of carbon storage in urban systems have relied upon allometric relationships developed in traditional forests. As urbanization increases globally, it is becoming...

  17. Developing a framework for estimating the potential impact of obesity interventions in a European city.

    Science.gov (United States)

    Whitfield, Malcolm; Bhanbhro, Sadiq; Green, Geoff; Lewis, Kevin; Hindle, Linda; Levy, Cathy

    2016-09-01

    Obesity is a global challenge for healthy populations. It has given rise to a wide range of public health interventions, focusing on supportive environments and lifestyle change, including diet, physical activity and behavioural change initiatives. Impact is variable. However, more evidence is slowly becoming available and is being used to develop new interventions. In a period of austerity, momentum is building to review these initiatives and understand what they do, how they do it and how they fit together. Our project seeks to develop a relatively straightforward systematic framework using readily accessible data to map the complex web of initiatives at the policy, population, group and individual levels aiming to promote healthy lifestyles, diet and physical activity levels or to reduce obesity through medical treatments in a city or municipality population. It produces a system for classifying different types of interventions into groupings which will enable commissioners to assess the scope and distribution of interventions and make a judgement about gaps in provision and the likely impact on mean body mass index (BMI) as a proxy measure for health. Estimated impact at each level or type of intervention is based upon a summary of the scientific evidence of clinical and/or cost effectiveness. Finally it seeks, where possible, to quantify the potential effects of different types of interventions on BMI and produce a cost per unit of BMI reduced. This approach is less sophisticated but identifies the areas where more sophisticated evaluation would add value. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Background: Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results: The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. Conclusions: The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  19. Model-based estimation with boundary side information or boundary regularization

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Fessler, J.A.; Clinthorne, N.H.; Hero, A.O.

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (Emission Computed Tomography). The authors have also reported difficulties with boundary estimation in low contrast and low count rate situations. In this paper, the authors propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, the authors introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. The authors implement boundary regularization through formulating a penalized log-likelihood function. The authors also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information

  20. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
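    The "conventional parameter estimation" step that the method ultimately reduces to can be sketched in its simplest form: simulate the ODE for candidate parameter values and minimize the squared misfit to the time-resolved data. The toy network below (first-order decay, grid search instead of a gradient-based optimizer) is an illustrative stand-in, not the authors' variational machinery.

```python
import numpy as np

def simulate(k, x0, times):
    """Forward-Euler integration of the toy network dx/dt = -k*x,
    returning the state at the requested observation times."""
    dt = 1e-3
    x, t, out = x0, 0.0, []
    for t_obs in times:
        while t < t_obs - 1e-9:
            x += dt * (-k * x)
            t += dt
        out.append(x)
    return np.array(out)

def fit_rate(times, data, x0, k_grid):
    """Least-squares estimate of the rate constant over a candidate grid
    (a minimal stand-in for gradient-based parameter estimation)."""
    costs = [np.sum((simulate(k, x0, times) - data) ** 2) for k in k_grid]
    return k_grid[int(np.argmin(costs))]

times = np.array([0.5, 1.0, 2.0, 3.0])
data = 10.0 * np.exp(-0.7 * times)          # noise-free observations, k = 0.7
k_hat = fit_rate(times, data, 10.0, np.linspace(0.1, 1.5, 141))
```

    In the paper's setting, the unconstrained input courses would be appended to the state vector of the augmented system, and the same least-squares machinery would then estimate courses and parameters jointly.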

  1. New development is targeted at the future nuclear generating market

    Energy Technology Data Exchange (ETDEWEB)

    1974-01-01

    The results are given of a survey of the reserves, exploitation and processing of uranium ore in the USA, Australia, Canada, France (including Gabon and Niger), and South Africa, indicating estimated development up to 1990 and the estimated impact of the development of mining on the world markets for natural uranium and enriched fuel.

  2. Allometric Models Based on Bayesian Frameworks Give Better Estimates of Aboveground Biomass in the Miombo Woodlands

    Directory of Open Access Journals (Sweden)

    Shem Kuyah

    2016-02-01

    The miombo woodland is the most extensive dry forest in the world, with the potential to store substantial amounts of biomass carbon. Efforts to obtain accurate estimates of carbon stocks in the miombo woodlands are limited by a general lack of biomass estimation models (BEMs). This study aimed to evaluate the accuracy of the most commonly employed allometric models for estimating aboveground biomass (AGB) in miombo woodlands, and to develop new models that enable more accurate estimation of biomass in the miombo woodlands. A generalizable mixed-species allometric model was developed from 88 trees belonging to 33 species, ranging in diameter at breast height (DBH) from 5 to 105 cm, using Bayesian estimation. A power law model with DBH alone performed better than a polynomial model with DBH and the square of DBH, and better than models including height and crown area as additional variables along with DBH. The accuracy of estimates from published models varied across different sites and trees of different diameter classes, and was lower than that of estimates from our model. The model developed in this study can be used to establish conservative carbon stocks required to determine avoided emissions in performance-based payment schemes, for example in afforestation and reforestation activities.
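    The winning model form, a power law in DBH alone (AGB = a·DBH^b), is classically fitted by log-log regression with a back-transformation bias correction; the Bayesian fitting used in the study is not reproduced here, and the destructive-sampling data below are invented for illustration.

```python
import math

def fit_power_law(dbh, agb):
    """Log-log OLS fit of AGB = a * DBH**b, with Baskerville's
    correction factor for back-transformation bias."""
    lx = [math.log(d) for d in dbh]
    ly = [math.log(m) for m in agb]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    sse = sum((y - (math.log(a) + b * x)) ** 2 for x, y in zip(lx, ly))
    cf = math.exp(sse / (n - 2) / 2.0)     # bias-correction factor
    return a * cf, b

# Hypothetical destructive-sampling data: DBH (cm) vs. AGB (kg)
dbh = [5.0, 10.0, 20.0, 40.0, 80.0]
agb = [8.0, 45.0, 250.0, 1400.0, 7800.0]
a, b = fit_power_law(dbh, agb)   # exponent typically between 2 and 3 for trees
```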

  3. Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation

    Science.gov (United States)

    Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.

    2012-01-01

    Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings, and are used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
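    The non-optimum factor itself is just the ratio of as-built flight weight to the idealized reference weight, and its average is then applied as a multiplier to new designs. The gore weights below are hypothetical placeholders, not values from the Super-Lightweight Tank database.

```python
def non_optimum_factor(actual_weight, reference_weight):
    """Non-optimum factor: ratio of as-built flight hardware weight to the
    idealized reference weight from the sizing model."""
    return actual_weight / reference_weight

# Hypothetical gore weights (kg): (flight hardware, coarse-model reference)
gores = [(14.2, 7.5), (9.8, 5.1), (11.3, 8.8)]
factors = [non_optimum_factor(a, r) for a, r in gores]
avg_nof = sum(factors) / len(factors)

# Apply the average factor to predict a realistic weight for a new design
predicted = avg_nof * 6.0   # for a gore with a 6.0 kg reference weight
```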

  4. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Science.gov (United States)

    Haakenstad, Annie; Johnson, Elizabeth; Graves, Casey; Olivier, Jill; Duff, Jean; Dieleman, Joseph L

    2015-01-01

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  5. Estimating the development assistance for health provided to faith-based organizations, 1990-2013.

    Directory of Open Access Journals (Sweden)

    Annie Haakenstad

    Faith-based organizations (FBOs) have been active in the health sector for decades. Recently, the role of FBOs in global health has been of increased interest. However, little is known about the magnitude and trends in development assistance for health (DAH) channeled through these organizations. Data were collected from the 21 most recent editions of the Report of Voluntary Agencies. These reports provide information on the revenue and expenditure of organizations. Project-level data were also collected and reviewed from the Bill & Melinda Gates Foundation and the Global Fund to Fight AIDS, Tuberculosis and Malaria. More than 1,900 non-governmental organizations received funds from at least one of these three organizations. Background information on these organizations was examined by two independent reviewers to identify the amount of funding channeled through FBOs. In 2013, total spending by the FBOs identified in the VolAg amounted to US$1.53 billion. In 1990, FBOs spent 34.1% of total DAH provided by private voluntary organizations reported in the VolAg. In 2013, FBOs expended 31.0%. Funds provided by the Global Fund to FBOs have grown since 2002, amounting to $80.9 million in 2011, or 16.7% of the Global Fund's contributions to NGOs. In 2011, the Gates Foundation's contributions to FBOs amounted to $7.1 million, or 1.1% of the total provided to NGOs. Development assistance partners exhibit a range of preferences with respect to the amount of funds provided to FBOs. Overall, estimates show that FBOs have maintained a substantial and consistent share over time, in line with overall spending in global health on NGOs. These estimates provide the foundation for further research on the spending trends and effectiveness of FBOs in global health.

  6. Application of third molar development and eruption models in estimating dental age in Malay sub-adults.

    Science.gov (United States)

    Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc

    2015-08-01

    Third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation in the individual method models and the combined model (TMD and TME) based on classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model as compared to the individual models. An overall decrease in RMSE was detected in the combined model as compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
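    The in-sample gain from folding TME stages into a TMD multiple linear regression can be sketched as follows; the staged scores and the age-stage relationship are synthetic illustrations (not the study's Malay sample), and all coefficients are assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical staged scores: TMD stages 1-10, TME stages 1-4, with ages
    # generated from an assumed linear relationship plus noise
    tmd = rng.integers(1, 11, n).astype(float)
    tme = rng.integers(1, 5, n).astype(float)
    age = 12.0 + 0.8 * tmd + 0.6 * tme + rng.normal(0.0, 1.0, n)

    def ols_predictions(X, y):
        """Ordinary least squares with an intercept; returns fitted values."""
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A @ beta

    def rmse(y, yhat):
        return float(np.sqrt(np.mean((y - yhat) ** 2)))

    rmse_tmd = rmse(age, ols_predictions(tmd[:, None], age))
    rmse_both = rmse(age, ols_predictions(np.column_stack([tmd, tme]), age))
    print(rmse_tmd, rmse_both)  # the combined model cannot fit worse in-sample
    ```

    On training data, an OLS model with a superset of regressors never has higher RMSE; the study's point is that the improvement also holds on a held-out test sample.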

  7. Food Insecurity in U.S. Households That Include Children with Disabilities

    Science.gov (United States)

    Sonik, Rajan; Parish, Susan L.; Ghosh, Subharati; Igdalsky, Leah

    2016-01-01

    The authors examined food insecurity in households including children with disabilities, analyzing data from the 2004 and 2008 panels of the Survey of Income and Program Participation, which included 24,729 households with children, 3,948 of which had children with disabilities. Logistic regression models were used to estimate the likelihood of…

  8. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    Science.gov (United States)

    Ustinov, A.; Khayrullina, A.; Borzenko, V.; Khmelik, M.; Sveshnikova, A.

    2016-09-01

    Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency and higher safety, as hydrogen is stored at low pressures of about 2 bar in a bound state. Combining a FC/MH system with an electrolyser powered by a renewable source allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electro-chemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper presents the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH-accumulator and an electro-chemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of power supply for the town of Batamai in Russia.

  9. Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery

    International Nuclear Information System (INIS)

    Ustinov, A; Khayrullina, A; Khmelik, M; Sveshnikova, A; Borzenko, V

    2016-01-01

    Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency and higher safety, as hydrogen is stored at low pressures of about 2 bar in a bound state. Combining a FC/MH system with an electrolyser powered by a renewable source allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electro-chemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper presents the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH-accumulator and an electro-chemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of power supply for the town of Batamai in Russia. (paper)

  10. Development of estimates of dietary nitrates, nitrites, and nitrosamines for use with the short willet food frequency questionnaire

    Directory of Open Access Journals (Sweden)

    Rene Antonio A

    2009-04-01

    Full Text Available Abstract Background Studies have suggested that nitrates, nitrites, and nitrosamines have an etiologic role in adverse pregnancy outcomes and chronic diseases such as cancer. Although an extensive body of literature exists on estimates of these compounds in foods, the extant data varies in quality, quantified estimates, and relevance. Methods We developed estimates of nitrates, nitrites, and nitrosamines for food items listed in the Short Willet Food Frequency Questionnaire (WFFQ) as adapted for use in the National Birth Defects Prevention Study. Multiple reference databases were searched for published literature reflecting nitrate, nitrite, and nitrosamine values in foods. Relevant published literature was reviewed; only publications reporting results for items listed on the WFFQ were selected for inclusion. The references selected were prioritized according to relevance to the U.S. population. Results Based on our estimates, vegetable products contain the highest levels of nitrate, contributing as much as 189 mg/serving. Meat and bean products contain the highest levels of nitrites with values up to 1.84 mg/serving. Alcohol, meat and dairy products contain the highest values of nitrosamines with a maximum value of 0.531 μg/serving. The estimates of dietary nitrates, nitrites, and nitrosamines generated in this study are based on the published values currently available. Conclusion To our knowledge, these are the only estimates specifically designed for use with the adapted WFFQ and generated to represent food items available to the U.S. population. The estimates provided may be useful in other research studies, specifically in those exploring the relation between exposure to these compounds in foods and adverse health outcomes.

  11. Development of estimates of dietary nitrates, nitrites, and nitrosamines for use with the short willet food frequency questionnaire

    Science.gov (United States)

    Griesenbeck, John S; Steck, Michelle D; Huber, John C; Sharkey, Joseph R; Rene, Antonio A; Brender, Jean D

    2009-01-01

    Background Studies have suggested that nitrates, nitrites, and nitrosamines have an etiologic role in adverse pregnancy outcomes and chronic diseases such as cancer. Although an extensive body of literature exists on estimates of these compounds in foods, the extant data varies in quality, quantified estimates, and relevance. Methods We developed estimates of nitrates, nitrites, and nitrosamines for food items listed in the Short Willet Food Frequency Questionnaire (WFFQ) as adapted for use in the National Birth Defects Prevention Study. Multiple reference databases were searched for published literature reflecting nitrate, nitrite, and nitrosamine values in foods. Relevant published literature was reviewed; only publications reporting results for items listed on the WFFQ were selected for inclusion. The references selected were prioritized according to relevance to the U.S. population. Results Based on our estimates, vegetable products contain the highest levels of nitrate, contributing as much as 189 mg/serving. Meat and bean products contain the highest levels of nitrites with values up to 1.84 mg/serving. Alcohol, meat and dairy products contain the highest values of nitrosamines with a maximum value of 0.531 μg/serving. The estimates of dietary nitrates, nitrites, and nitrosamines generated in this study are based on the published values currently available. Conclusion To our knowledge, these are the only estimates specifically designed for use with the adapted WFFQ and generated to represent food items available to the U.S. population. The estimates provided may be useful in other research studies, specifically in those exploring the relation between exposure to these compounds in foods and adverse health outcomes. PMID:19348679

  12. OECD/CSNI Workshop on Best Estimate Methods and Uncertainty Evaluations - Workshop Proceedings

    International Nuclear Information System (INIS)

    2013-01-01

    Best-Estimate Methods plus Uncertainty Evaluation are gaining increased interest in the licensing process. On the other hand, lessons learnt from the BEMUSE (NEA/CSNI/R(2011)3) and SM2A (NEA/CSNI/R(2011)3) benchmarks, progress of the UAM benchmark, and answers to the WGAMA questionnaire on the Use of Best-Estimate Methodologies show that improvements of the present methods are necessary and that new applications are appearing. The objective of this workshop was to provide a forum for a wide range of experts to exchange information in the area of best-estimate analysis and uncertainty evaluation methods, and to address issues drawn from the BEMUSE, UAM and SM2A activities. Both improvement of existing methods and recent new developments were included. As a result of the workshop, a set of recommendations, including lines for future activities, was proposed. The Workshop was organised in three parts: an opening session including keynotes from OECD and IAEA representatives, technical sessions, and a wrap-up session. All sessions included a debate with participation from an audience of 71 attendees. The workshop consisted of four technical sessions: a) Development achievements of BEPU methods and State of the Art: The objective of this session was to present the different approaches to deal with best-estimate codes and uncertainty evaluations. A total of six papers were presented. One initial paper summarized the existing methods; the following papers focused on specific methods, stressing their bases, peculiarities and advantages. As a result of the session, a picture of the current State of the Art was obtained. b) International comparative activities: This session reviewed the set of international activities around the subject of BEPU methods benchmarking and development. For each of the activities, a description of the objectives, development, main results, conclusions and recommendations (in case it is finalized) was presented.

  13. CosmoSIS: A System for MC Parameter Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Zuntz, Joe [Manchester U.; Paterno, Marc [Fermilab; Jennings, Elise [Chicago U., EFI; Rudd, Douglas [U. Chicago; Manzotti, Alessandro [Chicago U., Astron. Astrophys. Ctr.; Dodelson, Scott [Chicago U., Astron. Astrophys. Ctr.; Bridle, Sarah [Manchester U.; Sehrish, Saba [Fermilab; Kowalkowski, James [Fermilab

    2015-01-01

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including camb, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.

  14. Development of GP and GEP models to estimate an environmental issue induced by blasting operation.

    Science.gov (United States)

    Faradonbeh, Roohollah Shirani; Hasanipanah, Mahdi; Amnieh, Hassan Bakhshandeh; Armaghani, Danial Jahed; Monjezi, Masoud

    2018-05-21

    Air overpressure (AOp) is one of the most adverse effects induced by blasting in surface mines and civil projects. Proper evaluation and estimation of AOp is therefore important for minimizing the environmental problems resulting from blasting. The main aim of this study is to estimate AOp produced by blasting operations in the Miduk copper mine, Iran, by developing two artificial intelligence models, i.e., genetic programming (GP) and gene expression programming (GEP). The accuracy of the GP and GEP models has then been compared to multiple linear regression (MLR) and three empirical models. For this purpose, 92 blasting events were investigated, and the AOp values were carefully measured. Moreover, in each operation, the values of maximum charge per delay and distance from the blast point, as two parameters with a strong effect on AOp, were measured. After prediction, the performance of the predictive models was assessed in terms of variance account for (VAF), coefficient of determination (CoD), and root mean square error (RMSE). Finally, it was found that the GEP model, with a VAF of 94.12%, CoD of 0.941, and RMSE of 0.06, is more precise than the other predictive models for AOp prediction in the Miduk copper mine, and it can be introduced as a new powerful tool for estimating the AOp resulting from blasting.
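    The three reported fit statistics (VAF, CoD, RMSE) are straightforward to compute; a minimal sketch with illustrative values (`y` and `yhat` are invented, not the Miduk measurements):

    ```python
    import numpy as np

    def vaf(y, yhat):
        """Variance accounted for, in percent: 100 * (1 - Var(y - yhat) / Var(y))."""
        return 100.0 * (1.0 - np.var(y - yhat) / np.var(y))

    def cod(y, yhat):
        """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
        ss_res = np.sum((y - yhat) ** 2)
        ss_tot = np.sum((y - np.mean(y)) ** 2)
        return 1.0 - ss_res / ss_tot

    def rmse(y, yhat):
        """Root mean square error."""
        return float(np.sqrt(np.mean((y - yhat) ** 2)))

    y = np.array([0.40, 0.55, 0.62, 0.71, 0.80])     # illustrative observations
    yhat = np.array([0.42, 0.53, 0.60, 0.74, 0.78])  # illustrative predictions
    print(vaf(y, yhat), cod(y, yhat), rmse(y, yhat))
    ```

    A perfect model gives VAF = 100%, CoD = 1 and RMSE = 0, which is why the paper reports all three together.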

  15. Thermal Protection System Mass Estimating Relationships For Blunt-Body, Earth Entry Spacecraft

    Science.gov (United States)

    Sepka, Steven A.; Samareh, Jamshid A.

    2015-01-01

    Mass estimating relationships (MERs) are developed to predict the amount of thermal protection system (TPS) necessary for safe Earth entry for blunt-body spacecraft using simple correlations that are non-ITAR and closely match estimates from NASA's high-fidelity ablation modeling tool, the Fully Implicit Ablation and Thermal Analysis Program (FIAT). These MERs provide a first-order estimate for rapid feasibility studies. There are 840 different trajectories considered in this study, and each TPS MER has a peak heating limit. MERs for the vehicle forebody include the ablators Phenolic Impregnated Carbon Ablator (PICA) and Carbon Phenolic atop Advanced Carbon-Carbon. For the aftbody, the materials are Silicone Impregnated Reusable Ceramic Ablator (SIRCA), Acusil II, SLA-561V, and LI-900. The MERs are accurate to within 14% (at one standard deviation) of the FIAT prediction, and the most any MER can underpredict FIAT TPS thickness is 18.7%. This work focuses on the development of these MERs, the resulting equations, model limitations, and model accuracy.
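    MERs of this kind are simple correlations of TPS thickness against an aerothermal load variable; a hedged sketch of fitting one as a power law on assumed data (the heat loads, thicknesses, and the power-law form are illustrations, not FIAT outputs or the paper's equations):

    ```python
    import numpy as np

    # Assumed heat-load / thickness pairs for a hypothetical ablator;
    # a common MER form is a power law t = a * Q^b, fit in log-log space
    q = np.array([1e3, 3e3, 1e4, 3e4])  # integrated heat load, J/cm^2 (assumed)
    t = np.array([1.0, 1.6, 2.7, 4.3])  # required TPS thickness, cm (assumed)

    b, log_a = np.polyfit(np.log(q), np.log(t), 1)  # linear fit of log t on log q
    a = np.exp(log_a)

    def mer_thickness(q_new):
        """First-order TPS thickness estimate from the fitted power law."""
        return a * q_new ** b

    print(a, b, mer_thickness(2e4))
    ```

    Reporting the scatter of the fit residuals (as the paper does with its 14% one-sigma figure) is what turns such a correlation into a usable first-order sizing tool.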

  16. 25 CFR 170.807 - What must BIA include when it develops an IRR Transportation Facilities Maintenance Management...

    Science.gov (United States)

    2010-04-01

    ... Transportation Facilities Maintenance Management System? 170.807 Section 170.807 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER INDIAN RESERVATION ROADS PROGRAM BIA Road Maintenance § 170.807 What must BIA include when it develops an IRR Transportation Facilities Maintenance Management System...

  17. Estimating abundance of mountain lions from unstructured spatial sampling

    Science.gov (United States)

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods for estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and
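    Spatially explicit capture–recapture models typically let detection probability decay with the distance between an individual's activity centre and a trap, often with a half-normal form; a minimal sketch (the `g0` and `sigma` values are assumed illustrations, not estimates from the Blackfoot data):

    ```python
    import numpy as np

    def halfnormal_detection(d, g0, sigma):
        """Detection probability at distance d from an activity centre:
        p(d) = g0 * exp(-d^2 / (2 * sigma^2)), the common half-normal SCR form."""
        return g0 * np.exp(-(d ** 2) / (2.0 * sigma ** 2))

    # Assumed values: baseline detection 0.2 at distance zero, spatial
    # scale 5 km, and traps located 0, 5 and 15 km from the activity centre
    d = np.array([0.0, 5.0, 15.0])
    p = halfnormal_detection(d, g0=0.2, sigma=5.0)
    print(p)  # detection probability falls off sharply with distance
    ```

    Because each individual's capture probability depends on where its activity centre sits relative to the traps, fitting this curve jointly with the spatial point process is what lets SCR convert counts into a density estimate.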

  18. Estimating population effects of vaccination using large, routinely collected data.

    Science.gov (United States)

    Halloran, M Elizabeth; Hudgens, Michael G

    2018-01-30

    Vaccination in populations can have several kinds of effects. Establishing that vaccination produces population-level effects beyond the direct effects in the vaccinated individuals can have important consequences for public health policy. Formal methods have been developed for study designs and analysis that can estimate the different effects of vaccination. However, implementing field studies to evaluate the different effects of vaccination can be expensive, of limited generalizability, or unethical. It would be advantageous to use routinely collected data to estimate the different effects of vaccination. We consider how different types of data are needed to estimate different effects of vaccination. The examples include rotavirus vaccination of young children, influenza vaccination of elderly adults, and a targeted influenza vaccination campaign in schools. Directions for future research are discussed. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Spring Small Grains Area Estimation

    Science.gov (United States)

    Palmer, W. F.; Mohler, R. J.

    1986-01-01

    SSG3 automatically estimates acreage of spring small grains from Landsat data. The report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of the technique to four years of data from the United States and Canada yielded acreage estimates with accuracy comparable to those obtained through procedures that rely on trained analysts.

  20. Development of a customised design flood estimation tool to ...

    African Journals Online (AJOL)

    The estimation of design flood events, i.e., floods characterised by a specific magnitude-frequency relationship, at a particular site in a specific region is necessary for the planning, design and operation of hydraulic structures. Both the occurrence and frequency of flood events, along with the uncertainty involved in the ...

  1. Development and Clinical Evaluation of a Three-Dimensional Cone-Beam Computed Tomography Estimation Method Using a Deformation Field Map

    International Nuclear Information System (INIS)

    Ren, Lei; Chetty, Indrin J.; Zhang Junan; Jin Jianyue; Wu, Q. Jackie; Yan Hui; Brizel, David M.; Lee, W. Robert; Movsas, Benjamin; Yin Fangfang

    2012-01-01

    Purpose: To develop a three-dimensional (3D) cone-beam computed tomography (CBCT) estimation method using a deformation field map, and to evaluate and optimize the efficiency and accuracy of the method for use in the clinical setting. Methods and Materials: We propose a method to estimate patient CBCT images using prior information and a deformation model. Patients' previous CBCT data are used as the prior information, and the new CBCT volume to be estimated is considered a deformation of the prior image volume. The deformation field map is solved for by minimizing deformation energy and maintaining fidelity to the new projection data using a nonlinear conjugate gradient method. This method was implemented in 3D using hardware acceleration and a multi-resolution scheme, and it was evaluated for different scan angles, projection numbers, and scan directions using liver, lung, and prostate cancer patient data. The accuracy of the estimation was evaluated by comparing the organ volume difference and the similarity between the estimated CBCT and the CBCT reconstructed from fully sampled projections. Results: Results showed that scan direction and number of projections do not have significant effects on the CBCT estimation accuracy. The total scan angle is the dominant factor affecting the accuracy of the CBCT estimation algorithm. Larger scan angles yield better estimation accuracy than smaller scan angles. Lung cancer patient data showed that the estimation error of the 3D lung tumor volume was reduced from 13.3% to 4.3% when the scan angle was increased from 60° to 360° using 57 projections. Conclusions: The proposed estimation method is applicable for 3D DTS, 3D CBCT, four-dimensional CBCT, and four-dimensional DTS image estimation. This method has the potential for significantly reducing the imaging dose and improving the image quality by removing the organ distortion artifacts and streak artifacts shown in images reconstructed by the conventional Feldkamp

  2. Estimation of base temperatures for nine weed species.

    Science.gov (United States)

    Steinmaus, S J; Prather, T S; Holt, J S

    2000-02-01

    Experiments were conducted to test several methods for estimating low temperature thresholds for seed germination. Temperature responses of nine weeds common in annual agroecosystems were assessed in temperature gradient experiments. Species included summer annuals (Amaranthus albus, A. palmeri, Digitaria sanguinalis, Echinochloa crus-galli, Portulaca oleracea, and Setaria glauca), winter annuals (Hirschfeldia incana and Sonchus oleraceus), and Conyza canadensis, which is classified as a summer or winter annual. The temperature below which development ceases (Tbase) was estimated as the x-intercept of four conventional germination rate indices regressed on temperature, by repeated probit analysis, and by a mathematical approach. An overall Tbase estimate for each species was the average across indices weighted by the reciprocal of the variance associated with the estimate. Germination rates increased linearly with temperature between 15 degrees C and 30 degrees C for all species. Consistent estimates of Tbase were obtained for most species using several indices. The most statistically robust and biologically relevant method was the reciprocal time to median germination, which can also be used to estimate other biologically meaningful parameters. The mean Tbase for summer annuals (13.8 degrees C) was higher than that for winter annuals (8.3 degrees C). The two germination response characteristics, Tbase and slope (rate), influence a species' germination behaviour in the field since the germination inhibiting effects of a high Tbase may be offset by the germination promoting effects of a rapid germination response to temperature. Estimates of Tbase may be incorporated into predictive thermal time models to assist weed control practitioners in making management decisions.
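    The x-intercept approach can be sketched directly: regress germination rate on temperature and solve for the temperature at which the fitted rate reaches zero. The rates below are invented illustrations (chosen to land near the summer-annual mean), not data from the experiments:

    ```python
    import numpy as np

    # Illustrative data: germination rate (reciprocal of median days to
    # germination) at four constant temperatures for a hypothetical summer annual
    temps = np.array([15.0, 20.0, 25.0, 30.0])  # degrees C
    rate = np.array([0.02, 0.10, 0.18, 0.26])   # 1/day

    # Linear regression of rate on temperature; Tbase is the x-intercept,
    # i.e. the temperature where the fitted germination rate falls to zero
    slope, intercept = np.polyfit(temps, rate, 1)
    t_base = -intercept / slope
    print(t_base)
    ```

    With several rate indices per species, the per-index Tbase estimates can then be combined as a weighted average, weighting each by the reciprocal of its estimation variance as the paper describes.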

  3. Single snapshot DOA estimation

    Science.gov (United States)

    Häcker, P.; Yang, B.

    2010-10-01

    In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
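    For a single source and a single snapshot, the deterministic maximum likelihood estimator reduces to maximizing the energy of the snapshot projected onto the candidate steering vector over an angle grid. A sketch assuming a uniform linear array with half-wavelength spacing (array size, grid, and the noise-free snapshot are illustrative assumptions):

    ```python
    import numpy as np

    def steering(theta_deg, m, d=0.5):
        """Steering vector of an m-element uniform linear array,
        element spacing d in wavelengths."""
        k = np.arange(m)
        return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

    def dml_single_snapshot(x, grid_deg):
        """Deterministic ML for one source and one snapshot: choose the angle
        whose steering vector captures the most snapshot energy |a(theta)^H x|^2."""
        m = len(x)
        scores = [np.abs(np.vdot(steering(t, m), x)) ** 2 for t in grid_deg]
        return float(grid_deg[int(np.argmax(scores))])

    m = 8
    x = steering(12.0, m)               # noise-free single snapshot from 12 degrees
    grid = np.arange(-90.0, 90.5, 0.5)  # 0.5-degree search grid
    est = dml_single_snapshot(x, grid)
    print(est)
    ```

    In the noise-free single-source case the projection is maximal exactly at the true direction; the multi-target scenarios studied in the paper require a joint search over several angles, which is where the estimators begin to differ.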

  4. Global, regional and national levels and trends of preterm birth rates for 1990 to 2014: protocol for development of World Health Organization estimates.

    Science.gov (United States)

    Vogel, Joshua P; Chawanpaiboon, Saifon; Watananirun, Kanokwaroon; Lumbiganon, Pisake; Petzold, Max; Moller, Ann-Beth; Thinkhamrop, Jadsada; Laopaiboon, Malinee; Seuc, Armando H; Hogan, Daniel; Tunçalp, Ozge; Allanson, Emma; Betrán, Ana Pilar; Bonet, Mercedes; Oladapo, Olufemi T; Gülmezoglu, A Metin

    2016-06-17

    The official WHO estimates of preterm birth are an essential global resource for assessing the burden of preterm birth and developing public health programmes and policies. This protocol describes the methods that will be used to identify, critically appraise and analyse all eligible preterm birth data, in order to develop global, regional and national level estimates of levels and trends in preterm birth rates for the period 1990-2014. We will conduct a systematic review of civil registration and vital statistics (CRVS) data on preterm birth for all WHO Member States, via national Ministries of Health and Statistics Offices. For Member States with absent, limited or lower-quality CRVS data, a systematic review of surveys and/or research studies will be conducted. Modelling will be used to develop country, regional and global rates for 2014, with time trends for Member States where sufficient data are available. Member States will be invited to review the methodology and provide additional eligible data via a country consultation before final estimates are developed and disseminated. This research will be used to generate estimates of the burden of preterm birth globally for 1990 to 2014. We invite feedback on the methodology described, and call on the public health community to submit pertinent data for consideration. Registered at PROSPERO: CRD42015027439. Contact: pretermbirth@who.int.

  5. Effect of water resource development and management on lymphatic filariasis, and estimates of populations at risk.

    Science.gov (United States)

    Erlanger, Tobias E; Keiser, Jennifer; Caldas De Castro, Marcia; Bos, Robert; Singer, Burton H; Tanner, Marcel; Utzinger, Jürg

    2005-09-01

    Lymphatic filariasis (LF) is a debilitating disease overwhelmingly caused by Wuchereria bancrofti, which is transmitted by various mosquito species. Here, we present a systematic literature review with the following objectives: (i) to establish global and regional estimates of populations at risk of LF with particular consideration of water resource development projects, and (ii) to assess the effects of water resource development and management on the frequency and transmission dynamics of the disease. We estimate that globally, 2 billion people are at risk of LF. Among them, there are 394.5 million urban dwellers without access to improved sanitation and 213 million rural dwellers living in close proximity to irrigation. Environmental changes due to water resource development and management consistently led to a shift in vector species composition and generally to a strong proliferation of vector populations. For example, in World Health Organization (WHO) subregions 1 and 2, mosquito densities of the Anopheles gambiae complex and Anopheles funestus were up to 25-fold higher in irrigated areas when compared with irrigation-free sites. Although the infection prevalence of LF often increased after the implementation of a water project, there was no clear association with clinical symptoms. Concluding, there is a need to assess and quantify changes of LF transmission parameters and clinical manifestations over the entire course of water resource developments. Where resources allow, integrated vector management should complement mass drug administration, and broad-based monitoring and surveillance of the disease should become an integral part of large-scale waste management and sanitation programs, whose basic rationale lies in a systemic approach to city, district, and regional level health services and disease prevention.

  6. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
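    The Poisson-Gamma conjugate model makes the posterior of the rate λ available in closed form; a sketch with an assumed prior and an invented landslide record (not the Santa Barbara or Port Valdez sequences, whose data and prior choices are in the paper):

    ```python
    import numpy as np

    def posterior_rate(n_events, t_years, alpha=1.0, beta=100.0):
        """Poisson-Gamma conjugate update for an event rate lambda (per year).
        Prior Gamma(alpha, beta), beta being the rate parameter; after observing
        n_events in t_years the posterior is Gamma(alpha + n, beta + t).
        Returns the posterior mean and a Monte Carlo 95% credible interval."""
        a_post = alpha + n_events
        b_post = beta + t_years
        mean = a_post / b_post
        rng = np.random.default_rng(42)
        draws = rng.gamma(shape=a_post, scale=1.0 / b_post, size=100_000)
        lo, hi = np.percentile(draws, [2.5, 97.5])
        return mean, (lo, hi)

    # Invented example: 4 dated landslides over a 10,000-year imaged record
    mean, (lo, hi) = posterior_rate(4, 10_000.0)
    print(mean, lo, hi)
    ```

    The width of the credible interval is the point of the method: with only a handful of dated events, the uncertainty in λ can span a factor of several, which propagates directly into any derived hazard probability.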

  7. Developing first time-series of land surface temperature from AATSR with uncertainty estimates

    Science.gov (United States)

    Ghent, Darren; Remedios, John

    2013-04-01

    Land surface temperature (LST) is the radiative skin temperature of the land, and is one of the key parameters in the physics of land-surface processes on regional and global scales. Earth Observation satellites provide the opportunity to obtain global coverage of LST approximately every 3 days or less. One such source of satellite retrieved LST has been the Advanced Along-Track Scanning Radiometer (AATSR); with LST retrieval being implemented in the AATSR Instrument Processing Facility in March 2004. Here we present first regional and global time-series of LST data from AATSR with estimates of uncertainty. Mean changes in temperature over the last decade will be discussed along with regional patterns. Although time-series across all three ATSR missions have previously been constructed (Kogler et al., 2012), the use of low resolution auxiliary data in the retrieval algorithm and non-optimal cloud masking resulted in time-series artefacts. As such, considerable ESA supported development has been carried out on the AATSR data to address these concerns. This includes the integration of high resolution auxiliary data into the retrieval algorithm and subsequent generation of coefficients and tuning parameters, plus the development of an improved cloud mask based on the simulation of clear sky conditions from radiance transfer modelling (Ghent et al., in prep.). Any inference from this LST record is, however, of limited value without an accompanying uncertainty estimate; the Joint Committee for Guides in Metrology defines uncertainty as "a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand", the measurand being the value of the particular quantity to be measured. Furthermore, pixel-level uncertainty fields are a mandatory requirement in the on-going preparation of the LST product for the upcoming Sea and Land Surface Temperature Radiometer (SLSTR) instrument on-board Sentinel-3

  8. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study, we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of the generalized estimator using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of the proposed estimators. A numerical illustration using three populations is included to support the results.
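
As background to the generalized estimator, the classical ratio estimator of a population mean on a systematic sample can be sketched as follows. The population and auxiliary variable are invented for illustration; the paper's generalized form additionally folds in coefficients of correlation, kurtosis and variation, which are not reproduced here:

```python
def systematic_sample(values, k, start=0):
    """Linear systematic sampling: take every k-th unit beginning at `start`."""
    return values[start::k]

def ratio_estimate_mean(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    ybar_R = (ybar / xbar) * Xbar, using a known auxiliary mean Xbar."""
    ybar = sum(y_sample) / len(y_sample)
    xbar = sum(x_sample) / len(x_sample)
    return (ybar / xbar) * x_pop_mean

# Hypothetical population in which y is roughly proportional to auxiliary x.
x_pop = [float(i) for i in range(1, 101)]
y_pop = [2.0 * x + 1.0 for x in x_pop]
xbar_pop = sum(x_pop) / len(x_pop)

xs = systematic_sample(x_pop, k=10, start=3)
ys = systematic_sample(y_pop, k=10, start=3)
est = ratio_estimate_mean(ys, xs, xbar_pop)
true_mean = sum(y_pop) / len(y_pop)
print(est, true_mean)
```

The estimator exploits the correlation between y and x: when the sample ratio ybar/xbar is stable, multiplying it by the known auxiliary mean reduces the sampling error relative to the plain sample mean.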

  9. Estimating the Natural Flow Regime of Rivers With Long-Standing Development: The Northern Branch of the Rio Grande

    Science.gov (United States)

    Blythe, Todd L.; Schmidt, John C.

    2018-02-01

    An estimate of a river's natural flow regime is useful for water resource planning and ecosystem rehabilitation by providing insight into the predisturbance form and function of a river. The natural flow regime of most rivers has been perturbed by development during the 20th century and, in some cases, before stream gaging began. The temporal resolution of natural flows estimated using traditional methods is typically not sufficient to evaluate cues that drive native ecosystem function. Additionally, these traditional methods are watershed specific and require large amounts of data to produce accurate results. We present a mass balance method that estimates natural flows at daily time step resolution for the northern branch of the Rio Grande, upstream from the Rio Conchos, and that relies only on easily obtained streamflow data. Using an analytical change point method, we identified periods of the measured flow regime during the 20th century for comparison with the estimated natural flows. Our results highlight the significant deviation from natural conditions that occurred during the 20th century. The total annual flow of the northern branch is 95% lower than it would be in the absence of human use. The current 2-year flood has decreased by more than 60%, is shorter in duration, and peaks later in the year. When compared to unregulated flows estimated using traditional mass balance accounting methods, our approach provides similar results.

  10. Shear and Turbulence Estimates for Calculation of Wind Turbine Loads and Responses Under Hurricane Strength Winds

    Science.gov (United States)

    Kosovic, B.; Bryan, G. H.; Haupt, S. E.

    2012-12-01

    Schwartz et al. (2010) recently reported that the total gross energy-generating offshore wind resource in the United States in waters less than 30 m deep is approximately 1000 GW. The estimated offshore generating capacity is thus equivalent to the current generating capacity in the United States. Offshore wind power can therefore play an important role in electricity production in the United States. However, most of this resource is located along the East Coast of the United States and in the Gulf of Mexico, areas frequently affected by tropical cyclones including hurricanes. Hurricane-strength winds and the associated shear and turbulence can affect the performance and structural integrity of wind turbines. In a recent study, Rose et al. (2012) attempted to estimate the risk to offshore wind turbines from hurricane-strength winds over the lifetime of a wind farm (i.e., 20 years). According to Rose et al., turbine tower buckling has been observed in typhoons. They concluded that there is "substantial risk that Category 3 and higher hurricanes can destroy half or more of the turbines at some locations." More robust designs, including appropriate controls, can mitigate the risk of wind turbine damage. To develop such designs, good estimates of turbine loads under hurricane-strength winds are essential. We use output from a large-eddy simulation of a hurricane to estimate shear and turbulence intensity over the first couple of hundred meters above the sea surface. We compute power spectra of three velocity components at several distances from the eye of the hurricane. Based on these spectra, analytical spectral forms are developed and included in TurbSim, a stochastic inflow turbulence code developed by the National Renewable Energy Laboratory (NREL, http://wind.nrel.gov/designcodes/preprocessors/turbsim/). TurbSim provides a numerical simulation including bursts of coherent turbulence associated with organized turbulent structures. It can generate realistic flow conditions that an operating turbine
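
The spectral workflow described above (computing power spectra of velocity components and turbulence statistics from a simulated wind record) can be illustrated on synthetic data. The values below are invented stand-ins for LES output, not results from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10-minute streamwise velocity record: a 26 m/s mean wind with
# random fluctuations standing in for hurricane LES output (hypothetical).
fs = 10.0                      # sampling frequency, Hz
t = np.arange(0, 600, 1 / fs)  # 600 s record
u = 26.0 + 3.0 * rng.standard_normal(t.size)

# Turbulence intensity: std of the fluctuations over the mean wind speed.
ti = u.std() / u.mean()

# One-sided power spectrum of the fluctuating part via the FFT.
fluct = u - u.mean()
spec = np.abs(np.fft.rfft(fluct)) ** 2 / fluct.size
freqs = np.fft.rfftfreq(fluct.size, d=1 / fs)
print(f"TI = {ti:.3f}, spectral bins = {freqs.size}")
```

Fitting analytical spectral forms to such spectra (as done for TurbSim in the study) would be a separate curve-fitting step over `freqs` and `spec`.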

  11. FINANCIAL STABILITY OF SMALL BUSINESS: THE ESTIMATION AND DYNAMICS OF REGIONAL DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    A.U. Makarova

    2008-03-01

    Full Text Available The article considers the problem of estimating the financial stability of small businesses. It proposes using traditional indicators of financial stability, which describe it over the short term, together with indicators such as the balance of cash flows, the quality of risk management, the level of business diversification and the company's competitive position to assess stability over the long term. Based on statistical observations of small enterprises, the dynamics of the financial performance and financial stability of small enterprises in the Sverdlovsk region over 2001-2005 are estimated. The conclusion is drawn that the solvency and financial stability indicators of the region's small enterprises are low in comparison with normative values and with data for large and medium-sized enterprises. Measures directed at increasing the financial stability of small businesses are proposed.

  12. Generalized shrunken type-GM estimator and its application

    International Nuclear Information System (INIS)

    Ma, C Z; Du, Y L

    2014-01-01

    The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM estimators, together with methods for their calculation, is established by combining the GM estimator with biased estimators, including the ridge, principal components, and Liu estimates. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome the multicollinearity of the coefficient matrix and the outliers but also control the influence of leverage points
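
A minimal sketch of the general idea (a robust GM-type fit combined with ridge shrinkage) is given below. This is an illustration of the approach, not the authors' exact Generalized Shrunken Type-GM estimator, and all data are synthetic:

```python
import numpy as np

def huber_weights(resid, c=1.345):
    """Huber-type weights: 1 for small residuals, downweighted for large ones."""
    scale = np.median(np.abs(resid)) / 0.6745 + 1e-12
    r = np.abs(resid) / scale
    return np.minimum(1.0, c / np.maximum(r, 1e-12))

def robust_ridge(X, y, k=1.0, n_iter=20):
    """Iteratively reweighted least squares with Huber weights (robustness to
    outliers) combined with a ridge penalty k (shrinkage against
    multicollinearity): beta = (X'WX + kI)^-1 X'Wy at each iteration."""
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    for _ in range(n_iter):
        W = np.diag(huber_weights(y - X @ beta))
        beta = np.linalg.solve(X.T @ W @ X + k * np.eye(p), X.T @ W @ y)
    return beta

rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)        # near-collinear column
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 2.0 * x2 + 0.1 * rng.standard_normal(n)
y[:5] += 50.0                                  # gross outliers
beta = robust_ridge(X, y, k=1.0)
print(beta)
```

The ridge term keeps the near-singular normal equations well conditioned, while the Huber reweighting suppresses the outlying observations.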


  14. Cost Engineering Techniques and Their Applicability for Cost Estimation of Organic Rankine Cycle Systems

    Directory of Open Access Journals (Sweden)

    Sanne Lemmens

    2016-06-01

    Full Text Available The potential of organic Rankine cycle (ORC) systems is acknowledged by both considerable research and development efforts and an increasing number of applications. Most research aims at improving ORC systems through technical performance optimization of various cycle architectures and working fluids. The assessment and optimization of technical feasibility is at the core of ORC development. Nonetheless, economic feasibility is often decisive when it comes down to considering practical installations, and therefore an increasing number of publications include an estimate of the costs of the designed ORC system. Various methods are used to estimate ORC costs, but the resulting values are rarely discussed with respect to accuracy and validity. The aim of this paper is to provide insight into the methods used to estimate these costs and to open the discussion about the interpretation of these results. A review of cost engineering practices shows there has been a long tradition of industrial cost estimation. Several techniques have been developed, but the expected accuracy range of the best techniques used in research varies between 10% and 30%. The quality of the estimates could be improved by establishing up-to-date correlations for the ORC industry in particular. Secondly, the rapidly growing ORC cost literature is briefly reviewed. A graph summarizing the estimated ORC investment costs displays a pattern of decreasing costs for increasing power output. Knowledge on the actual costs of real ORC modules and projects remains scarce. Finally, the investment costs of a known heat recovery ORC system are discussed and the methodologies and accuracies of several approaches are demonstrated using this case as a benchmark. The best results are obtained with factorial estimation techniques such as the module costing technique, but the accuracies may diverge by up to 30%. Development of correlations and multiplication factors for ORC technology in particular is

  15. Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)

    Science.gov (United States)

    Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.

  16. Developing versus developed companies in Business Excellence initiatives

    DEFF Research Database (Denmark)

    Haffer, Rafal; Kristensen, Kai

    2008-01-01

    The paper reports the advance of Polish companies in Business Excellence initiatives. It indicates how these activities influence their performance. EFQM Excellence Model indicators are used as the evaluation criteria for the study. The performance variable is introduced to ensure the calculation...... of correlations between EFQM model indicators and performance results. The data are next estimated as a structural equation model by partial least squares using SmartPLS software (Ringle et al., 2005). That estimation is conducted on the model of the Danish Business Excellence Index methodology (Kristensen et al...... results from the Business Excellence Model as a proxy for actual financial results in Poland. Data and results from a similar study done in Denmark are also described; thus, a comparison between developing Polish companies and developed Danish ones is included. Poland and Denmark are used as cases of...

  17. Estimation of lung cancer risk from environmental exposure to airborne plutonium from the Rocky Flats Plant

    International Nuclear Information System (INIS)

    Sutherland, J.V.

    1983-01-01

    A three-phase study was undertaken to (1) determine the nature of disagreement among scientists concerning risk of environmental release of plutonium, (2) develop an analytic procedure for determining risk based on clearly stated principles defensible by reference to the literature, and (3) develop estimates of radiation dose to the lung from exposure to plutonium in ambient air for the purpose of evaluating risk to an individual with a specified age and smoking history. Eleven epidemiologists, biostatisticians and radiation scientists participated in Phase I of the study. It was shown that no clearly stated analytical principles for risk estimation were in common use, resulting in widely divergent risk estimates. Five of these disagreeing scientists in Phase I (including all cancer epidemiologists in the Denver metropolitan area) were chosen for Phase II of the study. A single analytic procedure was developed which was unanimously agreed upon. This procedure was dependent on the estimate of dose to the lung from ambient air levels of Rocky Flats plutonium. In Phase III of the study, a panel of four radiation scientists developed a procedure for estimation of dose to the lung from chronic exposure to plutonium ambient air levels. Results from all phases of the study were used to develop a method for estimation of relative risk of lung cancer for an individual, given plutonium dose to the lung, age, smoking history and other radiation exposure

  18. A robust bayesian estimate of the concordance correlation coefficient.

    Science.gov (United States)

    Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir

    2015-01-01

    A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
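
For reference, the classical (non-robust) sample version of Lin's CCC that the proposed Bayesian method generalizes can be computed directly. The two "methods" below are made-up measurements, not data from the EEG application:

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient (classical sample version):
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    It penalizes both lack of correlation and systematic location/scale shifts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Two hypothetical assays measuring the same samples.
method_a = [10.1, 11.8, 13.2, 14.9, 16.1]
method_b = [10.4, 11.6, 13.5, 14.7, 16.3]
print(round(concordance_cc(method_a, method_b), 3))
```

Unlike the Pearson correlation, the CCC drops below 1 when one method is biased relative to the other, even if the two are perfectly correlated.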

  19. Introduction to applied Bayesian statistics and estimation for social scientists

    CERN Document Server

    Lynch, Scott M

    2007-01-01

    "Introduction to Applied Bayesian Statistics and Estimation for Social Scientists" covers the complete process of Bayesian statistical analysis in great detail, from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail. The first part of the book provides a detailed

  20. Cost estimate guidelines for advanced nuclear power technologies

    International Nuclear Information System (INIS)

    Delene, J.G.; Hudson, C.R. II.

    1993-05-01

    Several advanced power plant concepts are currently under development. These include the Modular High Temperature Gas Cooled Reactors, the Advanced Liquid Metal Reactor and the Advanced Light Water Reactors. One measure of the attractiveness of a new concept is its cost. Invariably, the cost of a new type of power plant will be compared with other alternative forms of electrical generation. This report provides a common starting point, whereby the cost estimates for the various power plants to be considered are developed with common assumptions and ground rules. Comparisons can then be made on a consistent basis. This is the second update of these cost estimate guidelines. Changes have been made to make the guidelines more current (January 1, 1992) and in response to suggestions made as a result of the use of the previous report. The principal changes are that the reference site has been changed from a generic Northeast (Middletown) site to a more central site (EPRI's East/West Central site) and that reference bulk commodity prices and labor productivity rates have been added. This report is designed to provide a framework for the preparation and reporting of costs. The cost estimates will consist of the overnight construction cost, the total plant capital cost, the operation and maintenance costs, the fuel costs, decommissioning costs and the power production or busbar generation cost

  1. Estimated Perennial Streams of Idaho and Related Geospatial Datasets

    Science.gov (United States)

    Rea, Alan; Skinner, Kenneth D.

    2009-01-01

    The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus and minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point. Site-specific flow data, assuming a sufficient period of
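
The classification logic described above (accumulate 7Q2 estimates downstream, then apply the 0.1 ft3/s criterion with a transitional band of plus or minus one standard error) can be sketched as follows. The flow increments and the standard-error value are hypothetical, not taken from the Idaho regression models:

```python
def classify_perennial(q7q2_cfs, std_error_cfs=0.02, threshold_cfs=0.1):
    """Classify a stream cell per the Idaho Administrative Code criterion:
    intermittent if the 7-day, 2-year low flow (7Q2) is < 0.1 ft3/s.
    A 'transitional' band of threshold +/- one standard error flags
    uncertain cells (the 0.02 ft3/s error used here is illustrative)."""
    if q7q2_cfs < threshold_cfs - std_error_cfs:
        return "intermittent"
    if q7q2_cfs > threshold_cfs + std_error_cfs:
        return "perennial"
    return "transitional"

def accumulate_downstream(incremental_q7q2):
    """Weighted flow accumulation along a single downstream flow path:
    each cell's 7Q2 estimate is the running sum of upstream increments."""
    total, accumulated = 0.0, []
    for q in incremental_q7q2:
        total += q
        accumulated.append(total)
    return accumulated

# Hypothetical headwater-to-outlet increments (ft3/s per cell).
path = accumulate_downstream([0.01, 0.02, 0.04, 0.06, 0.10])
labels = [classify_perennial(q) for q in path]
print(list(zip([round(q, 2) for q in path], labels)))
```

In the actual GIS implementation the accumulation runs over a flow-direction grid rather than a single path, but the per-cell threshold test is the same.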

  2. Three-dimensional analysis of third molar development to estimate age of majority.

    Science.gov (United States)

    Márquez-Ruiz, Ana Belén; Treviño-Tijerina, María Concepción; González-Herrera, Lucas; Sánchez, Belén; González-Ramírez, Amanda Rocío; Valenzuela, Aurora

    2017-09-01

    Third molars are one of the few biological markers available for age estimation in undocumented juveniles close to the legal age of majority, assuming an age of 18 years as the most frequent legal demarcation between child and adult status. To obtain more accurate visualization and evaluation of third molar mineralization patterns from computed tomography images, a new software application, DentaVol©, was developed. Third molar mineralization according to qualitative (Demirjian's maturational stage) and quantitative (third molar volume) parameters of dental development was assessed in multi-slice helical computed tomography images of both maxillary arches displayed by DentaVol© from 135 individuals (62 females and 73 males) aged between 14 and 23 years. Intra- and inter-observer agreement values were remarkably high for both evaluation procedures and for all third molars. A linear correlation between third molar mineralization and chronological age was found, with third molar maturity occurring earlier in males than in females. Assessment of dental development with both procedures, by using DentaVol© software, can be considered a good indicator of the age of majority (18 years or older) for all third molars. Our results indicated that virtual computed tomography imaging can be considered a valid alternative to orthopantomography for evaluations of third molar mineralization, and therefore a complementary tool for determining the age of majority. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  3. Distributed Dynamic State Estimator, Generator Parameter Estimation and Stability Monitoring Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Meliopoulos, Sakis [Georgia Inst. of Technology, Atlanta, GA (United States); Cokkinides, George [Georgia Inst. of Technology, Atlanta, GA (United States); Fardanesh, Bruce [New York Power Authority, NY (United States); Hedrington, Clinton [U.S. Virgin Islands Water and Power Authority (WAPA), St. Croix (U.S. Virgin Islands)

    2013-12-31

    This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, including two very important ones: (a) high-fidelity estimation of generating unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. The dynamic distributed state estimation results are also stored (the storage scheme includes data and the coincidental model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry. The “energy function” based

  4. Development of a New BRDF-Resistant Vegetation Index for Improving the Estimation of Leaf Area Index

    Directory of Open Access Journals (Sweden)

    Su Zhang

    2016-11-01

    Full Text Available The leaf area index (LAI is one of the most important Earth surface parameters used in the modeling of ecosystems and their interaction with climate. Numerous vegetation indices have been developed to estimate the LAI. However, because of the effects of the bi-directional reflectance distribution function (BRDF, most of these vegetation indices are also sensitive to the effect of BRDF. In this study, we aim to present a new BRDF-resistant vegetation index (BRVI, which is sensitive to the LAI but insensitive to the effect of BRDF. Firstly, the BRDF effects of different bands were investigated using both simulated data and in-situ measurements of winter wheat made at different growth stages. We found bi-directional shape similarity in the solar principal plane between the green and the near-infrared (NIR bands and between the blue and red bands for farmland soil conditions and with medium chlorophyll content level. Secondly, the consistency of the shape of the BRDF across different bands was employed to develop a new BRDF-resistant vegetation index for estimating the LAI. The reflectance ratios of the NIR band to the green band and the blue band to the red band were reasonably assumed to be resistant to the BRDF effects. Nevertheless, the variation amplitude of the bi-directional reflectance in the solar principal plane was different for different bands. The divisors in the two reflectance ratios were improved by combining the reflectances at the red and green bands. The new BRVI was defined as a normalized combination of the two improved reflectance ratios. Finally, the potential of the proposed BRVI for estimation of the LAI was evaluated using both simulated data and in-situ measurements and also compared to other popular vegetation indices. 
The results showed that the influence of the BRDF on the BRVI was the weakest and that the BRVI retrieved LAI values well, with a coefficient of determination (R2 of 0.84 and an RMSE of 0.83 for the field

  5. 36 CFR 1254.94 - What must my request include?

    Science.gov (United States)

    2010-07-01

    ... includes the following elements: (1) Record group number or agency of origin or, for donated historical... volume in number of pages or cubic feet. (b) The estimated amount of time (work-days) that the microfilm... who would require training (see § 1254.108(b)). (c) The number and a description of the equipment that...

  6. Development of a J-estimation scheme for internal circumferential and axial surface cracks in elbows

    International Nuclear Information System (INIS)

    Mohan, R.; Brust, F.W.; Ghadiali, N.; Wilkowski, G.

    1996-06-01

    This report summarizes efforts to develop elastic and elastic-plastic fracture mechanics analyses for internal surface cracks in elbows. The analyses involved development of a GE/EPRI type J-estimation scheme which requires an elastic and fully plastic contribution to crack-driving force in terms of the J-integral parameter. The elastic analyses require the development of F-function values to relate the J_e term to applied loads. Similarly, the fully plastic analyses require the development of h-functions to relate the J_p term to the applied loads. The F- and h-functions were determined from a matrix of finite element analyses. To minimize the cost of the analyses, three-dimensional ABAQUS finite element analyses were compared to a simpler finite element technique called the line-spring method. The line-spring method provides a significant computational savings over the full three-dimensional analysis. The comparison showed excellent agreement between the line-spring and three-dimensional analysis. This experience was consistent with comparisons with circumferential surface-crack analyses in straight pipes during the NRC's Short Cracks in Piping and Piping Welds program
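
As background, the split of the crack-driving force into elastic and fully plastic parts follows the general GE/EPRI form, shown here in the commonly published notation for a crack of depth a in a section of thickness t under load P (the elbow-specific F- and h-functions are the contribution of the report itself; the generic form below is only a sketch):

```latex
J = J_e + J_p, \qquad
J_e = \frac{K_I^2}{E'}, \qquad
J_p = \alpha\,\sigma_0\,\varepsilon_0\, c\, h\!\left(\frac{a}{t}, n\right)
      \left(\frac{P}{P_0}\right)^{n+1}
```

Here K_I is the elastic stress intensity factor obtained from the F-functions, E' is the effective elastic modulus, (alpha, sigma_0, epsilon_0, n) are Ramberg-Osgood material parameters, c is a characteristic crack dimension, and P_0 is a reference limit load.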

  7. Generalized estimating equations

    CERN Document Server

    Hardin, James W

    2002-01-01

    Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th

  8. The Problem With Estimating Public Health Spending.

    Science.gov (United States)

    Leider, Jonathon P

    2016-01-01

    Accurate information on how much the United States spends on public health is critical. These estimates affect planning efforts; reflect the value society places on the public health enterprise; and allow for the demonstration of cost-effectiveness of programs, policies, and services aimed at increasing population health. Yet, at present, there are a limited number of sources of systematic public health finance data. Each of these sources is collected in different ways, for different reasons, and so yields strikingly different results. This article aims to compare and contrast all 4 current national public health finance data sets, including data compiled by Trust for America's Health, the Association of State and Territorial Health Officials (ASTHO), the National Association of County and City Health Officials (NACCHO), and the Census, which underlie the oft-cited National Health Expenditure Account estimates of public health activity. In FY2008, ASTHO estimates that state health agencies spent $24 billion ($94 per capita on average, median $79), while the Census estimated all state governmental agencies including state health agencies spent $60 billion on public health ($200 per capita on average, median $166). Census public health data suggest that local governments spent an average of $87 per capita (median $57), whereas NACCHO estimates that reporting LHDs spent $64 per capita on average (median $36) in FY2008. We conclude that these estimates differ because the various organizations collect data using different means, data definitions, and inclusion/exclusion criteria--most notably around whether to include spending by all agencies versus a state/local health department, and whether behavioral health, disability, and some clinical care spending are included in estimates. Alongside deeper analysis of presently underutilized Census administrative data, we see harmonization efforts and the creation of a standardized expenditure reporting system as a way to

  9. Estimating the welfare loss to households from natural disasters in developing countries: a contingent valuation study of flooding in Vietnam

    Science.gov (United States)

    Navrud, Ståle; Tuan, Tran Huu; Tinh, Bui Duc

    2012-01-01

    Background Natural disasters have severe impacts on the health and well-being of affected households. However, we find evidence that official damage cost assessments for floods and other natural disasters in Vietnam, where households have little or no insurance, clearly underestimate the total economic damage costs of these events, as they do not include the loss of welfare from mortality, morbidity and reduced well-being experienced by the households affected by the floods. This should send a message to the local communities and national authorities that higher investments in flood alleviation, reduction and adaptive measures can be justified, since the social benefits of these measures in terms of avoided damage costs are higher than previously thought. Methods We pioneer the use of the contingent valuation (CV) approach of willingness-to-contribute (WTC) labour to a flood prevention program as a measure of the welfare loss experienced by households due to a flooding event. In a face-to-face household survey of 706 households in the Quang Nam province in Central Vietnam, we applied this approach together with reported direct physical damage in order to shed light on the welfare loss experienced by the households. We asked about households’ WTC labour and multiplied their WTC person-days of labour by an estimate of their opportunity cost of time in order to estimate the welfare loss to households from the 2007 floods. Results The results showed that this contingent valuation (CV) approach of asking about willingness-to-pay in-kind avoided the main problems associated with applying CV in developing countries. Conclusion Thus, the CV approach of WTC labour instead of money is promising in terms of capturing the total welfare loss of natural disasters, and promising in terms of further application in other developing countries and for other types of natural disasters. PMID:22761603
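
The welfare-loss calculation described in the Methods (WTC person-days multiplied by an estimate of the opportunity cost of time, summed over households) amounts to the following sketch. The incomes, WTC values, and the 250-workday assumption are invented for illustration, not survey data:

```python
def welfare_loss_wtc(households, workdays_per_year=250):
    """Convert stated willingness-to-contribute (WTC) labour into a monetary
    welfare-loss estimate: WTC person-days times the opportunity cost of a
    day of labour, summed over all surveyed households."""
    total = 0.0
    for h in households:
        daily_opportunity_cost = h["annual_income"] / workdays_per_year
        total += h["wtc_days"] * daily_opportunity_cost
    return total

# Hypothetical survey responses (incomes in USD, purely illustrative).
survey = [
    {"annual_income": 1500.0, "wtc_days": 12},
    {"annual_income": 2000.0, "wtc_days": 8},
    {"annual_income": 1000.0, "wtc_days": 15},
]
loss = welfare_loss_wtc(survey)
print(f"Estimated welfare loss: ${loss:.2f}")
```
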

  10. Metric Indices for Performance Evaluation of a Mixed Measurement based State Estimator

    Directory of Open Access Journals (Sweden)

    Paula Sofia Vide

    2013-01-01

Full Text Available With the development of synchronized phasor measurement technology in recent years, the use of PMU measurements to improve state estimation performance has gained great interest due to their synchronized characteristics and high data-transmission speed. The ability of Phasor Measurement Units (PMUs) to directly measure the system state is a key advantage over the SCADA measurement system. PMU measurements are superior to conventional SCADA measurements in terms of resolution and accuracy. Since the majority of measurements in existing estimators come from the conventional SCADA measurement system, which is unlikely to be fully replaced by PMUs in the near future, state estimators including both phasor and conventional SCADA measurements are being considered. In this paper, a mixed-measurement (SCADA and PMU) state estimator is proposed. Several useful measures for evaluating various aspects of the performance of the mixed-measurement state estimator are proposed and explained. The validity, performance and characteristics of the state estimator results on the IEEE 14-bus and IEEE 30-bus test systems are presented.
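
The weighting idea behind such a mixed estimator can be illustrated with a linear weighted-least-squares sketch in which PMU rows carry a much smaller measurement variance than SCADA rows; the 3-state system and all numbers below are assumptions for illustration, not the paper's model:

```python
import numpy as np

# Minimal linear WLS sketch of a mixed-measurement state estimator:
# z = H x + e, with PMU rows weighted more heavily (smaller variance)
# than SCADA rows. The 3-state system and numbers are illustrative.

x_true = np.array([1.0, 0.5, -0.2])          # "true" state (e.g. bus angles)
H = np.array([[1., 0., 0.],                  # PMU: direct state measurement
              [0., 1., 0.],                  # PMU
              [1., -1., 0.],                 # SCADA: flow-like difference
              [0., 1., -1.],                 # SCADA
              [1., 0., -1.]])                # SCADA
sigma = np.array([0.001, 0.001, 0.02, 0.02, 0.02])  # PMU far more accurate

rng = np.random.default_rng(0)
z = H @ x_true + rng.normal(0.0, sigma)      # noisy measurement vector

W = np.diag(1.0 / sigma**2)                  # weight = inverse variance
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print(x_hat)
```

The inverse-variance weights make the estimate track the PMU channels almost exactly while the SCADA rows fill in the unobserved state.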

  11. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    Science.gov (United States)

    Over, Thomas M.; Saito, Riki J.; Veilleux, Andrea G.; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey L.

    2016-06-28

This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, regional skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction.
The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at ungaged
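
The flavour of such regional regression equations can be sketched by fitting a peak-discharge quantile to basin characteristics in log space. This toy uses ordinary least squares on synthetic data (the study used generalized least squares), and every coefficient and data value below is an assumption:

```python
import numpy as np

# Illustrative regional regression of a peak-discharge quantile on basin
# characteristics, fit in log space. OLS stands in for the study's GLS;
# all coefficients and data are synthetic assumptions.

rng = np.random.default_rng(1)
n = 40
area = rng.uniform(5, 500, n)            # drainage area (assumed units)
dev_frac = rng.uniform(0.0, 0.9, n)      # fraction of developed land

# Synthetic "true" relation: log10(Q) = 1.2 + 0.75*log10(A) + 0.6*DEV
log_q = 1.2 + 0.75 * np.log10(area) + 0.6 * dev_frac + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), np.log10(area), dev_frac])
beta, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print(beta)   # approximately [1.2, 0.75, 0.6]
```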

  12. The post-millennium development goals agenda: include 'end to all wars' as a public health goal!

    Science.gov (United States)

    Jayasinghe, Saroj

    2014-09-01

    The process of identifying global post-millennium development goals (post-MDGs) has begun in earnest. Consensus is emerging in certain areas (e.g. eliminating poverty) and conflicts and violence are recognized as key factors that retard human development. However, current discussions focus on tackling intra-state conflicts and individual-based violence and hardly mention eliminating wars as a goal. Wars create public health catastrophes. They kill, maim, displace and affect millions. Inter-state wars fuel intra-state conflicts and violence. The peace agenda should not be the monopoly of the UN Security Council, and the current consensus-building process setting the post-MDG agenda is a rallying point for the global community. The human rights approach will not suffice to eliminate wars, because few are fought to protect human rights. The development agenda should therefore commit to eliminating all wars by 2030. Targets to reduce tensions and discourage wars should be included. We should act now. © The Author(s) 2014.

  13. Developing a Ballistic Software Kit to Estimate Vehicle Characteristics at the Draft Design Stage

    Directory of Open Access Journals (Sweden)

    V. I. Maiorova

    2015-01-01

Full Text Available The article describes a ballistic software kit for calculating a moving vehicle's trajectory in the atmosphere and in space. Such software makes it possible to accelerate the acquisition of a flying vehicle's ballistic parameters at the draft design stage and contributes to more efficient collaboration between the adjacent departments involved in a project. The developed software kit includes three programs: Trajectory-LAND© (motion in the atmosphere with possible trajectory correction), Trajectory-SPACE© (motion in a non-central gravity field with possible simulation of maneuvers), and Trajectory-LAUNCH© (insertion of a launch vehicle into orbit with possible determination of the impact points of separated stages). Each program allows the addition of computational modules that use the results of the basic task. The implemented mathematical models take into account the main perturbations acting on the flying vehicle during flight. For illustration purposes, the article gives examples of using each of the programs and their block diagrams. The developed software implements algorithms that ensure convergence of the numerical integration of the differential equations of motion. This problem arises, for example, while determining the attitude of stages that have already separated from the launch vehicle. The conversion from Rodrigues-Hamilton parameters (quaternions) to Euler angles can fail to yield reliable attitude angles because of the limited domains of the inverse trigonometric functions involved. Incorrect pitch values lead to divergences in the yaw and roll channels. Moreover, errors in attitude determination propagate to the angle of attack, which enters the expressions for the aerodynamic forces and torques. As a result, the solution of the system of differential equations fails when the flying vehicle reaches an altitude of 30-35 km. The
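
The attitude-conversion issue described above can be illustrated with a standard quaternion (Rodrigues-Hamilton parameters) to yaw-pitch-roll conversion; clamping the asin argument is the usual guard against the inverse-trigonometric domain problem the abstract mentions. The rotation sequence and test values are illustrative, not the kit's own convention:

```python
import math

# Sketch of quaternion -> Euler conversion for an aerospace-style
# yaw-pitch-roll (ZYX) sequence. Clamping guards against asin domain
# errors that arise near pitch = +/-90 deg due to rounding.

def quat_to_euler_zyx(w, x, y, z):
    """Return (roll, pitch, yaw) in radians from a unit quaternion."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    s = 2 * (w * y - z * x)
    s = max(-1.0, min(1.0, s))      # clamp: |s| can exceed 1 by rounding
    pitch = math.asin(s)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

# 90-degree yaw rotation: q = (cos 45deg, 0, 0, sin 45deg)
r, p, yw = quat_to_euler_zyx(math.cos(math.pi / 4), 0.0, 0.0,
                             math.sin(math.pi / 4))
print(r, p, yw)   # ~ (0, 0, pi/2)
```

Without the clamp, accumulated rounding in the quaternion components can push the asin argument marginally outside [-1, 1] and raise a domain error mid-integration.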

  14. A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes

    Science.gov (United States)

    2013-01-01

    Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680

  15. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  16. Development and Application of a Sensitive, Second Antibody Format Enzymeimmunoassay (EIA) for Estimation of Plasma FSH in Mithun (Bos frontalis).

    Science.gov (United States)

    Mondal, Mohan; Baruah, Kishore Kumar; Prakash, B S

    2016-01-01

Mithun (Bos frontalis) is a semi-wild rare ruminant species. A simple, sensitive enzymeimmunoassay suitable for assaying FSH in the blood plasma of mithun is not available, which limits our ability to understand this species' reproductive processes. Therefore, the aim of this article was to develop a simple and sensitive enzymeimmunoassay (EIA) for estimation of FSH in mithun plasma and to apply the assay to understand the estrous cycle and superovulatory process in this species. To accomplish this goal, biotinylated FSH was bridged between streptavidin-peroxidase and immobilized antiserum in a competitive assay. Forty microlitres of mithun plasma were used directly in the EIA. The FSH standards were prepared in hormone-free plasma and ranged from 5 to 1280 pg/well/40 μL. The sensitivity of the EIA was 5 pg/well FSH, which corresponds to 0.125 ng/mL plasma, and the 50% relative binding sensitivity was 90 pg/well/40 μL. Although the shape of the standard curve was not influenced by different plasma volumes (viz. 40 and 80 μL), a slight drop in the OD450 was observed with the increasing volume of plasma. Parallelism tests conducted between the endogenous mithun FSH and bovine FSH standards showed good homology between them. Plasma FSH concentrations estimated in the same samples using the developed EIA and a commercially available FSH EIA kit were highly correlated (r = 0.98) and showed linearity. Both the intra- and inter-assay CVs were below 6%. Recovery of known concentrations of added FSH showed linearity (r = 0.99). The developed EIA was further validated biologically by estimating FSH in cyclic cows for the entire estrous cycle, in mithun heifers administered GnRH analogues, and in mithun cows during superovulatory treatment with FSH. In conclusion, the EIA developed for FSH determination in mithun blood plasma is simple and highly sensitive for estimation of mithun FSH in all physiological conditions.
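
As a schematic of how such a competitive-EIA standard curve is used, the sketch below fits a four-parameter logistic (a common choice; the abstract does not state the curve model) to synthetic OD values over the 5-1280 pg/well range and inverts it to read unknowns. The asymptotes are treated as known so the fit linearizes; every OD value and parameter is an assumption:

```python
import numpy as np

# Schematic 4PL standard-curve fit via log-logit linearization, then
# dose read-back. OD values are synthetic; the standard range
# (5-1280 pg/well) follows the abstract.

def four_pl(x, a, d, c, b):
    """4PL: a = zero-dose OD, d = infinite-dose OD, c = IC50, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

dose = np.array([5, 10, 20, 40, 80, 160, 320, 640, 1280], float)  # pg/well
a_true, d_true, c_true, b_true = 1.8, 0.1, 90.0, 1.2
od = four_pl(dose, a_true, d_true, c_true, b_true)   # noiseless curve

# With asymptotes (a, d) assumed known, the 4PL linearizes:
# log((a-d)/(y-d) - 1) = b*log(x) - b*log(c)
y = np.log((a_true - d_true) / (od - d_true) - 1.0)
slope, intercept = np.polyfit(np.log(dose), y, 1)
b_hat = slope
c_hat = np.exp(-intercept / slope)

def od_to_dose(obs):
    """Estimate dose (pg/well) from an observed OD on the fitted curve."""
    return c_hat * ((a_true - d_true) / (obs - d_true) - 1.0) ** (1.0 / b_hat)

print(b_hat, c_hat)  # ~1.2, ~90.0
```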

  17. Development of estimation method for crop yield using MODIS satellite imagery data and process-based model for corn and soybean in US Corn-Belt region

    Science.gov (United States)

    Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.

    2012-12-01

Crop productivity is associated with food security, and hence several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study area for estimating crop yield over a 5-year period (2006-2010), as they form the main Corn-Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and on yield mapping using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days, while on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The input meteorological variables estimated from MODIS and WRF showed good agreement with ground observations from 6 AmeriFlux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg; from 0.68 to 0.85 for daytime mean VPD; and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e. the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 g m-2 y-1 to 1393 g m-2 y-1 and from 213 g m-2 y-1 to 421 g m-2 y-1, respectively. The county-specific yield estimation mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280
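
The LUE-based productivity calculation the abstract refers to reduces, in its simplest form, to GPP = ε × fPAR × PAR with NPP = GPP minus respiration. The sketch below uses assumed parameter values, not the crop-specific ones developed in the study:

```python
import numpy as np

# Minimal light-use-efficiency (LUE) sketch: GPP = eps * fPAR * PAR with
# optional temperature/VPD down-regulation, and NPP = GPP - respiration.
# All parameter values are illustrative assumptions.

def daily_gpp(par, fpar, eps_max=2.5, t_scalar=1.0, vpd_scalar=1.0):
    """GPP in g C m-2 d-1; PAR in MJ m-2 d-1; eps_max in g C MJ-1."""
    return eps_max * t_scalar * vpd_scalar * fpar * par

def daily_npp(gpp, resp_fraction=0.45):
    """NPP as GPP minus autotrophic respiration (fixed fraction here)."""
    return gpp * (1.0 - resp_fraction)

par = np.array([8.0, 10.0, 12.0])     # MJ m-2 d-1 (assumed)
fpar = np.array([0.5, 0.7, 0.9])      # e.g. from MODIS (assumed)
gpp = daily_gpp(par, fpar)
npp = daily_npp(gpp)
print(gpp, npp)
```

Summing daily NPP over the season gives the biomass to which a yield-to-biomass conversion coefficient, like the study's DOY-260 coefficient, is applied.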

  18. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
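
The central least-squares idea — the covariance is linear in the power spectrum, so a subsampled covariance can be inverted for it without spectral priors — can be sketched on a toy random graph. The exact (noise-free) covariance stands in for sample estimates here, and the graph, filter, and vertex subset are all assumptions:

```python
import numpy as np

# Toy sketch: reconstruct a graph power spectrum by least squares from
# the covariance restricted to a subsampled vertex set, with no
# spectral priors. Graph, filter, and subset are illustrative.

rng = np.random.default_rng(7)
N = 10
A = rng.uniform(0.5, 1.5, (N, N))
A = np.triu(A, 1) * (rng.random((N, N)) < 0.5)  # random weighted edges
A = A + A.T
L = np.diag(A.sum(1)) - A                       # graph Laplacian

lam, U = np.linalg.eigh(L)                      # graph Fourier basis
p_true = 1.0 / (1.0 + lam)                      # filter's power spectrum
C = U @ np.diag(p_true) @ U.T                   # full covariance

S = [0, 2, 4, 6, 8]                             # sampled vertices (half)
rows, cols = np.meshgrid(S, S, indexing="ij")
# C[i, j] = sum_k p_k U[i, k] U[j, k]  ->  linear in p
Phi = U[rows.ravel()] * U[cols.ravel()]         # (|S|^2, N) design matrix
c_vec = C[rows, cols].ravel()

p_hat, *_ = np.linalg.lstsq(Phi, c_vec, rcond=None)
print(np.max(np.abs(p_hat - p_true)))           # ~0 if Phi has full rank
```

In practice the sample covariance of the subsampled observations replaces the exact covariance, and the subset is designed (e.g. via sparse rulers or the greedy algorithm) rather than chosen arbitrarily.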

  19. Rapid estimation of split renal function in kidney donors using software developed for computed tomographic renal volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Fumi, E-mail: fumikato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kamishima, Tamotsu, E-mail: ktamotamo2@yahoo.co.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Morita, Ken, E-mail: kenordic@carrot.ocn.ne.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Muto, Natalia S., E-mail: nataliamuto@gmail.com [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Okamoto, Syozou, E-mail: shozo@med.hokudai.ac.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Omatsu, Tokuhiko, E-mail: omatoku@nirs.go.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Oyama, Noriko, E-mail: ZAT04404@nifty.ne.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Terae, Satoshi, E-mail: saterae@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan); Kanegae, Kakuko, E-mail: IZW00143@nifty.ne.jp [Department of Nuclear Medicine, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Nonomura, Katsuya, E-mail: k-nonno@med.hokudai.ac.jp [Department of Urology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, 060-8638 (Japan); Shirato, Hiroki, E-mail: shirato@med.hokudai.ac.jp [Department of Radiology, Hokkaido University Graduate School of Medicine, N15, W7, Kita-ku, Sapporo, Hokkaido 060-8638 (Japan)

    2011-07-15

Purpose: To evaluate the speed and precision of split renal volume (SRV) measurement, which is the ratio of unilateral renal volume to bilateral renal volume, using newly developed software for computed tomographic (CT) volumetry, and to investigate the usefulness of SRV for the estimation of split renal function (SRF) in kidney donors. Method: Both dynamic CT and renal scintigraphy in 28 adult potential living renal donors were the subjects of this study. We calculated SRV using the newly developed volumetric software built into a PACS viewer (n-SRV), and compared it with SRV calculated using a conventional workstation, ZIOSOFT (z-SRV). The correlation with split renal function (SRF) using 99mTc-DMSA scintigraphy was also investigated. Results: The time required for volumetry of bilateral kidneys with the newly developed software (16.7 ± 3.9 s) was significantly shorter than that of the workstation (102.6 ± 38.9 s, p < 0.0001). The results of n-SRV (49.7 ± 4.0%) were highly consistent with those of z-SRV (49.9 ± 3.6%), with a mean discrepancy of 0.12 ± 0.84%. The SRF also agreed well with the n-SRV, with a mean discrepancy of 0.25 ± 1.65%. The dominant side determined by SRF and n-SRV showed agreement in 26 of 28 cases (92.9%). Conclusion: The newly developed software for CT volumetry was more rapid than the conventional workstation volumetry and just as accurate, and was suggested to be useful for the estimation of SRF and thus the dominant side in kidney donors.

  20. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Melius, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.
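
The GIS step of such methods — deriving slope and aspect from elevation data and flagging suitable roof cells — can be sketched on a tiny grid standing in for LiDAR; the thresholds and conventions below are illustrative, not NREL's:

```python
import numpy as np

# Toy GIS sketch: derive slope and aspect from a small elevation grid
# (stand-in for LiDAR) and flag cells suitable for PV. Grid spacing is
# 1 m; rows run north -> south. Thresholds are illustrative.

z = np.array([[11.0, 11.1, 11.2],
              [10.5, 10.6, 10.7],
              [10.0, 10.1, 10.2]])          # elevation (m), sloping south

dz_dy, dz_dx = np.gradient(z)               # gradients along rows, cols
slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Aspect = compass bearing of steepest descent (0 = north, 180 = south),
# given rows increasing southward and columns increasing eastward.
aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360

# Suitable: gentle slope and roughly south-facing (northern hemisphere).
suitable = (slope < 45) & (aspect > 90) & (aspect < 270)
print(int(suitable.sum()), "of", suitable.size, "cells suitable")
```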

  1. Human dental age estimation combining third molar(s) development and tooth morphological age predictors.

    Science.gov (United States)

    Thevissen, P W; Galiti, D; Willems, G

    2012-11-01

In the subadult age group, third molar development as well as age-related morphological tooth information can be observed on panoramic radiographs. The aim of the present study was to combine, in subadults, panoramic radiographic data based on developmental stages of the third molar(s) with morphological measurements from permanent teeth, in order to evaluate the added age-predicting performance. In the age range between 15 and 23 years, 25 gender-specific radiographs were collected within each age category of 1 year. Third molar development was classified and registered according to the 10-point staging and scoring technique proposed by Gleiser and Hunt (1955), modified by Köhler (1994). The Kvaal (1995) measuring technique was applied to the indicated teeth from the individuals' left side. Linear regression models with age as response and third molar stage scores as explanatory variables were developed, and morphological measurements from permanent teeth were added. From the models, coefficients of determination (R²) and root-mean-square errors (RMSE) were calculated. The maximal added age information was a 6% increase in R² and a 0.10-year decrease in RMSE. Forensic dental age estimations on panoramic radiographic data in the subadult group (15-23 years) should therefore only be based on third molar development.
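
The modelling step — regressing age on a third-molar stage score, then adding a morphological predictor and comparing R² and RMSE — can be sketched on synthetic data; the coefficients below are assumptions, not the study's fitted models:

```python
import numpy as np

# Illustrative sketch: age ~ third-molar stage, then + a morphological
# predictor, compared on R^2 and RMSE. Data and coefficients are
# synthetic assumptions.

rng = np.random.default_rng(3)
n = 200
stage = rng.integers(1, 11, n).astype(float)   # 10-point molar staging
morph = rng.normal(0.0, 1.0, n)                # Kvaal-type ratio (assumed)
age = 14.0 + 0.9 * stage + 0.5 * morph + rng.normal(0, 1.0, n)

def fit_stats(X, y):
    """OLS fit; return (R^2, RMSE)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1 - ss_res / ss_tot, np.sqrt(ss_res / len(y))

X1 = np.column_stack([np.ones(n), stage])
X2 = np.column_stack([np.ones(n), stage, morph])
r2_1, rmse_1 = fit_stats(X1, age)
r2_2, rmse_2 = fit_stats(X2, age)
print(r2_2 - r2_1, rmse_1 - rmse_2)   # morphology adds a little
```

The study's conclusion corresponds to the added predictor changing R² and RMSE only marginally, as in this toy comparison.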

  2. The Development of Symbolic and Non-Symbolic Number Line Estimations: Three Developmental Accounts Contrasted Within Cross-Sectional and Longitudinal Data

    Directory of Open Access Journals (Sweden)

    Delphine Sasanguie

    2016-12-01

Full Text Available Three theoretical accounts have been put forward for the development of children’s response patterns on number line estimation tasks: the log-to-linear representational shift, the two-linear-to-linear transformation, and the proportion judgment account. These three accounts have not, however, been contrasted within one study using a single criterion to determine which model provides the best fit. The present study contrasted these three accounts by examining first, second and sixth graders with a symbolic and a non-symbolic number line estimation task (Experiment 1). In addition, first and second graders were tested again one year later (Experiment 2). For symbolic estimations, the proportion judgment account described the data best. Most young children’s non-symbolic estimation patterns were best described by a logarithmic model (within the log-to-linear account), whereas those of most older children were best described by the simple power model (within the proportion judgment account).
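
The model-comparison logic — fitting competing response-pattern models and ranking them by one shared criterion — can be sketched as follows. The data are synthetic, generated from a logarithmic pattern so that the log model should win; none of it comes from the study:

```python
import numpy as np

# Sketch: fit linear, logarithmic, and simple power models to number-line
# estimates and pick the best by sum of squared errors (one criterion).
# Synthetic data generated from a logarithmic pattern.

x = np.array([2, 5, 10, 18, 25, 42, 67, 71, 86, 99], float)  # presented
y = 100 * np.log(x) / np.log(100)                            # log-like answers

def sse(pred):
    return float(((y - pred) ** 2).sum())

b1 = np.polyfit(x, y, 1)                     # linear: y = a + b x
lin = np.polyval(b1, x)
b2 = np.polyfit(np.log(x), y, 1)             # logarithmic: y = a + b ln x
logm = np.polyval(b2, np.log(x))
b3 = np.polyfit(np.log(x), np.log(y), 1)     # power: y = c x^p (log-log fit)
powm = np.exp(b3[1]) * x ** b3[0]

scores = {"linear": sse(lin), "log": sse(logm), "power": sse(powm)}
print(min(scores, key=scores.get))   # log
```

In practice an information criterion (e.g. AIC) rather than raw SSE would be used when the models differ in number of parameters.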

  3. Development of simplified decommissioning cost estimation code for nuclear facilities

    International Nuclear Information System (INIS)

    Tachibana, Mitsuo; Shiraishi, Kunio; Ishigami, Tsutomu

    2010-01-01

The simplified decommissioning cost estimation code for nuclear facilities (DECOST code) was developed in consideration of the features and structures of nuclear facilities and the similarity of dismantling methods. The DECOST code can calculate 8 evaluation items of decommissioning cost. Actual dismantling projects at the Japan Atomic Energy Agency (JAEA) were analyzed to evaluate the unit conversion factors used to calculate the manpower of dismantling activities; consequently, the unit conversion factors of general components could be classified into three kinds. The weights of the components and structures of a facility are necessary for the calculation of manpower, so methods for evaluating these weights were studied. Consequently, the weight of components in a facility was found to be proportional to the weight of its structures, and the weight of structures proportional to the total floor area of the facility. The decommissioning costs of 7 nuclear facilities in the JAEA were calculated using the DECOST code. To verify the calculated results, the calculated manpower was compared with the manpower recorded in actual dismantling; the two were almost equal. The outline of the DECOST code, the evaluation results of the unit conversion factors, and the evaluation method for the weights of components and structures of a facility are described in this report. (author)
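
The proportionality chain the abstract reports — manpower = component weight × unit conversion factor, with component weight proportional to structure weight and structure weight proportional to total floor area — can be sketched with assumed factors (not the values derived by JAEA):

```python
# Hypothetical DECOST-style scaling sketch. All factors are assumed
# values for illustration, not those derived from JAEA dismantling data.

FLOOR_AREA_TO_STRUCT_T = 1.5      # t of structure per m^2 of floor (assumed)
STRUCT_TO_COMPONENT = 0.20        # t of components per t of structure (assumed)
UNIT_FACTORS = {                  # person-days per tonne, by component class
    "general": 12.0, "large": 8.0, "contaminated": 30.0,
}

def dismantling_manpower(floor_area_m2, component_class="general"):
    """Estimated dismantling manpower (person-days) from floor area."""
    struct_t = FLOOR_AREA_TO_STRUCT_T * floor_area_m2
    comp_t = STRUCT_TO_COMPONENT * struct_t
    return comp_t * UNIT_FACTORS[component_class]

print(dismantling_manpower(2000.0))   # 2000 m^2 -> 7200 person-days
```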

  4. Development of LLW and VLLW disposal business cost estimation system

    International Nuclear Information System (INIS)

    Koibuchi, Hiroko; Ishiguro, Hideharu; Matsuda, Kenji

    2004-01-01

In order to undertake the LLW and VLLW disposal business, various examinations are being carried out in RANDEC. Since securing funds is essential to this business, a trial calculation of the disposal cost is needed. At present, however, there are many unknown factors, such as the amount of wastes, the disposal schedule and the location of a disposal site, so the cost cannot be determined; meanwhile, the cost depends on complicated relations among these factors. An 'LLW and VLLW disposal business cost estimation system' has therefore been developed to calculate the disposal cost easily. This system can calculate an annual balance of payments from the construction and operation costs of the disposal facilities, taking into account economic parameters such as tax, inflation rate and interest rate. The system can also calculate the internal reserves to be assigned to the next-stage upkeep of the disposal facilities after the disposal operation. A model disposal site was designed based on some assumed preconditions, and a study was carried out to make a trial calculation using the system. Moreover, it will be necessary to reduce the construction cost by rationalizing the facility and to flatten annual business spending by examining the business schedule. (author)
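
A minimal sketch of the annual-balance bookkeeping such a system performs — inflating costs, accruing interest on the fund balance, and accumulating internal reserves for post-operation upkeep — with every rate and amount an illustrative assumption:

```python
# Hypothetical annual-balance sketch: inflated costs, interest on the
# balance, and reserves for later upkeep. All figures are assumptions.

def annual_balances(revenue, op_cost, years, inflation=0.02, interest=0.03,
                    reserve_rate=0.10):
    balance, reserves, rows = 0.0, 0.0, []
    for y in range(years):
        cost = op_cost * (1 + inflation) ** y       # inflated operating cost
        net = revenue - cost
        balance = balance * (1 + interest) + net    # fund earns interest
        reserves += max(net, 0.0) * reserve_rate    # set aside for upkeep
        rows.append((y + 1, round(net, 2), round(balance, 2)))
    return rows, round(reserves, 2)

rows, reserves = annual_balances(revenue=120.0, op_cost=100.0, years=3)
print(rows, reserves)
```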

  5. Estimating least-developed countries’ vulnerability to climate-related extreme events over the next 50 years

    Science.gov (United States)

    Patt, Anthony G.; Tadross, Mark; Nussbaumer, Patrick; Asante, Kwabena; Metzger, Marc; Rafael, Jose; Goujon, Anne; Brundrit, Geoff

    2010-01-01

    When will least developed countries be most vulnerable to climate change, given the influence of projected socio-economic development? The question is important, not least because current levels of international assistance to support adaptation lag more than an order of magnitude below what analysts estimate to be needed, and scaling up support could take many years. In this paper, we examine this question using an empirically derived model of human losses to climate-related extreme events, as an indicator of vulnerability and the need for adaptation assistance. We develop a set of 50-year scenarios for these losses in one country, Mozambique, using high-resolution climate projections, and then extend the results to a sample of 23 least-developed countries. Our approach takes into account both potential changes in countries’ exposure to climatic extreme events, and socio-economic development trends that influence countries’ own adaptive capacities. Our results suggest that the effects of socio-economic development trends may begin to offset rising climate exposure in the second quarter of the century, and that it is in the period between now and then that vulnerability will rise most quickly. This implies an urgency to the need for international assistance to finance adaptation. PMID:20080585

  6. Estimating least-developed countries' vulnerability to climate-related extreme events over the next 50 years.

    Science.gov (United States)

    Patt, Anthony G; Tadross, Mark; Nussbaumer, Patrick; Asante, Kwabena; Metzger, Marc; Rafael, Jose; Goujon, Anne; Brundrit, Geoff

    2010-01-26

    When will least developed countries be most vulnerable to climate change, given the influence of projected socio-economic development? The question is important, not least because current levels of international assistance to support adaptation lag more than an order of magnitude below what analysts estimate to be needed, and scaling up support could take many years. In this paper, we examine this question using an empirically derived model of human losses to climate-related extreme events, as an indicator of vulnerability and the need for adaptation assistance. We develop a set of 50-year scenarios for these losses in one country, Mozambique, using high-resolution climate projections, and then extend the results to a sample of 23 least-developed countries. Our approach takes into account both potential changes in countries' exposure to climatic extreme events, and socio-economic development trends that influence countries' own adaptive capacities. Our results suggest that the effects of socio-economic development trends may begin to offset rising climate exposure in the second quarter of the century, and that it is in the period between now and then that vulnerability will rise most quickly. This implies an urgency to the need for international assistance to finance adaptation.

  7. Subsoil exploration of the estimated building site for nuclear fuel development and fabrication facility

    Energy Technology Data Exchange (ETDEWEB)

    Song, In Taek [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-01-01

    The objective of this report, as the result of subsoil exploration, is to provide basic design data of structural plan for nuclear fuel development and fabrication facility that is builded on Duckjin 150, Yusong, Taejeon, Korea, and provide basic data for execution of work. The soft rock level of estimated building site is deep(18.0m:BH-1, 20.5m:BH-2, 25.5m:BH-3) and the hard rock level of it is very deep (33.0m:BH-1, 46.0m:BH-2, 34.5m:BH-3) , for structural design, the hard rock shall be the bottom of foundation. 9 figs., 19 tabs. (Author)

  8. Method for estimate the economic characteristics of an uranium enrichment plant by gaseous diffusion

    International Nuclear Information System (INIS)

    Berault, J.C.

    1975-01-01

    To estimate the economic characteristics of an uranium enrichment plant by gaseous diffusion is to determine the prospective price of the separative work unit to which leads the concerned technology, and to collect the data allowing to ascertain that this price remains in the area of development of the prices forecasted by the other projects. The prospective price estimated by the promoter is the synthesis of the components of the go decision and which are a potential market and a comprehensive industrially proven plant design, including the basic economic and technical data of the project. Procedures for estimating these components and their synthesis, exclusive of financing problems are reviewed [fr

  9. Cost-estimate and proposal for a development impact bond for canine rabies elimination by mass vaccination in Chad.

    Science.gov (United States)

    Anyiam, Franziska; Lechenne, Monique; Mindekem, Rolande; Oussigéré, Assandi; Naissengar, Service; Alfaroukh, Idriss Oumar; Mbilo, Celine; Moto, Daugla Doumagoum; Coleman, Paul G; Probst-Hensch, Nicole; Zinsstag, Jakob

    2017-11-01

Close to 69,000 humans die of rabies each year, most of them in Africa and Asia. Clinical rabies can be prevented by post-exposure prophylaxis (PEP). However, PEP is commonly not available or not affordable in developing countries. Another strategy besides treating exposed humans is the vaccination of vector species. In developing countries, the main vector is the domestic dog, which, once infected, is a serious threat to humans. After a successful mass vaccination of 70% of the dogs in N'Djaména, we report here a cost estimate for a national rabies elimination campaign for Chad. In a cross-sectional survey in four rural zones, we established the canine:human ratio at the household level. Based on human census data and the prevailing socio-cultural composition of rural zones of Chad, the total canine population was estimated at 1,205,361 dogs (95% Confidence interval 1,128,008-1,736,774 dogs). Cost data were collected from government sources and the recent canine mass vaccination campaign in N'Djaména. A Monte Carlo simulation was used for the simulation of the average cost and its variability, using probability distributions for dog numbers and cost items. Assuming the vaccination of 100 dogs on average per vaccination post and a duration of one year, the total cost for the vaccination of the national Chadian canine population is estimated at 2,716,359 Euros (95% CI 2,417,353-3,035,081) for one vaccination round. A development impact bond (DIB) organizational structure and cash flow scenario were then developed for the elimination of canine rabies in Chad. A cumulative discounted cost of 28.3 million Euros over ten years would be shared between the government of Chad, private investors and institutional donors as outcome funders. In this way, the risk of the investment could be shared and the necessary investment could be made available upfront - a key element for the elimination of canine rabies in Chad. Copyright © 2016 The Authors. Published by Elsevier B
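
The Monte Carlo step can be sketched as drawing the dog population and per-dog cost from distributions and summarizing the total. The normal distributions below are rough symmetric stand-ins for the abstract's (asymmetric) intervals, and the per-dog cost is a back-of-envelope assumption:

```python
import numpy as np

# Monte Carlo sketch of a campaign cost: draw dog numbers and per-dog
# cost from assumed distributions, report mean and a 95% interval.
# Parameters are rough illustrations anchored to the abstract's figures.

rng = np.random.default_rng(42)
n = 100_000
dogs = rng.normal(1_205_361, 155_000, n)    # dog population (assumed normal)
cost_per_dog = rng.normal(2.25, 0.15, n)    # EUR per vaccinated dog (assumed)
total = dogs * cost_per_dog

mean = total.mean()
lo, hi = np.percentile(total, [2.5, 97.5])
print(round(mean / 1e6, 2), round(lo / 1e6, 2), round(hi / 1e6, 2))  # M EUR
```

With these assumptions the mean lands near the abstract's 2.72 million Euro point estimate, which is the purpose of propagating the input uncertainty rather than multiplying point values.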

  10. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signal timing. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
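The cycle-estimation idea, recovering signal timing from the times at which probe vehicles start moving after a green onset, can be sketched with a toy scoring function: fold the observed start-up times modulo each candidate cycle length and score how tightly they cluster. The circular-clustering score and all parameters here are illustrative assumptions, not the patent's actual scoring function.

```python
import numpy as np

def estimate_cycle(transitions, cycles=np.arange(30.0, 121.0, 0.5)):
    """Score each candidate cycle length by how tightly the observed
    start-up (transition) times cluster when folded modulo the cycle.
    The score is the circular mean resultant length of the phases."""
    best_cycle, best_score = None, -1.0
    for c in cycles:
        phases = 2 * np.pi * (np.asarray(transitions) % c) / c
        score = np.hypot(np.cos(phases).mean(), np.sin(phases).mean())
        if score > best_score:
            best_cycle, best_score = c, score
    return best_cycle, best_score

# Synthetic probe data: green onsets every 90 s, vehicles start 0-4 s later.
rng = np.random.default_rng(0)
onsets = 90.0 * np.arange(40)
transitions = onsets + rng.uniform(0, 4, size=onsets.size)
cycle, score = estimate_cycle(transitions)
print(cycle, score)
```

On this synthetic input, the true 90 s cycle scores highest because the folded start-up times concentrate in a narrow phase band.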

  12. Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate.

    Science.gov (United States)

    Kerr, Zachary Y; Littleton, Ashley C; Cox, Leah M; DeFreese, J D; Varangis, Eleanna; Lynall, Robert C; Schmidt, Julianne D; Marshall, Stephen W; Guskiewicz, Kevin M

    2015-07-15

    Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short- and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n = 32 college football only, n = 32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football and individuals who played professional football (p < 0.001). The HIEE measure was independent of concussion history (p = 0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects.
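The exposure estimate reduces to summing contact-hours over career segments. A minimal sketch, assuming a simplified (seasons, contact sessions per season, hours per session) structure rather than the actual interview instrument:

```python
def head_impact_exposure_estimate(career):
    """Total contact-hours across a football career. `career` is a list of
    (seasons, contact_sessions_per_season, hours_per_session) tuples,
    one per level or segment of play (illustrative structure only)."""
    return sum(seasons * sessions * hours for seasons, sessions, hours in career)

# Hypothetical player: 4 HS years, 4 college years plus spring ball, no pro.
career = [
    (4, 60, 1.5),   # high school: ~60 contact sessions/yr, 1.5 h each
    (4, 90, 2.0),   # college regular + post-season
    (4, 15, 2.0),   # college spring football
]
print(head_impact_exposure_estimate(career))  # 1200.0 contact-hours
```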

  13. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.
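Type-II conditional bias, the gap between E[estimate | truth] and the truth itself, can be demonstrated empirically: a variance-minimizing estimator shrinks toward the mean and therefore systematically underestimates heavy values. The simulation below is a generic illustration of the bias that CBPCK penalizes, not the CBPCK algorithm itself; all distributions are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 10.0, 200_000)          # "true" precipitation values
obs = truth + rng.normal(0, 8.0, truth.size)   # noisy sensor observations

# A variance-minimizing linear estimator shrinks observations toward the mean.
lam = truth.var() / (truth.var() + 64.0)       # 64.0 = noise variance (8^2)
est = truth.mean() + lam * (obs - truth.mean())

# Empirical Type-II conditional bias E[est | truth = z] - z on the upper tail:
heavy = truth > np.quantile(truth, 0.99)
type2_cb = est[heavy].mean() - truth[heavy].mean()
print(type2_cb)  # negative: heavy-to-extreme values are underestimated
```

The negative tail bias is exactly the behavior that motivates adding an explicit Type-II CB penalty when the goal is accurate estimation of heavy-to-extreme precipitation.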

  14. Application of Observed Precipitation in NCEP Global and Regional Data Assimilation Systems, Including Reanalysis and Land Data Assimilation

    Science.gov (United States)

    Mitchell, K. E.

    2006-12-01

    The Environmental Modeling Center (EMC) of the National Centers for Environmental Prediction (NCEP) applies several different analyses of observed precipitation in both the data assimilation and validation components of NCEP's global and regional numerical weather and climate prediction/analysis systems (including in NCEP global and regional reanalysis). This invited talk will survey these data assimilation and validation applications and methodologies, as well as the temporal frequency, spatial domains, spatial resolution, data sources, data density and data quality control in the precipitation analyses that are applied. Some of the precipitation analyses applied by EMC are produced by NCEP's Climate Prediction Center (CPC), while others are produced by the River Forecast Centers (RFCs) of the National Weather Service (NWS), or by automated algorithms of the NWS WSR-88D Radar Product Generator (RPG). Depending on the specific type of application in data assimilation or model forecast validation, the temporal resolution of the precipitation analyses may be hourly, daily, or pentad (5-day) and the domain may be global, continental U.S. (CONUS), or Mexico. The data sources for precipitation include ground-based gauge observations, radar-based estimates, and satellite-based estimates. The precipitation analyses over the CONUS are analyses of either hourly, daily or monthly totals of precipitation, and they are of two distinct types: gauge-only or primarily radar-estimated. The gauge-only CONUS analysis of daily precipitation utilizes an orographic-adjustment technique (based on the well-known PRISM precipitation climatology of Oregon State University) developed by the NWS Office of Hydrologic Development (OHD). The primary NCEP global precipitation analysis is the pentad CPC Merged Analysis of Precipitation (CMAP), which blends both gauge observations and satellite estimates. 
The presentation will include a brief comparison between the CMAP analysis and other global precipitation analyses.

  15. Using the Estimating Supplies Program to Develop Materiel Solutions for the U.S. Air Force Aeromedical Evacuation In-Flight Kit (FFQDM)

    National Research Council Canada - National Science Library

    Hopkins, Curtis; Nix, Ralph; Pang, Gerry; Konoske, Paula

    2008-01-01

    ... NHRC's medical modeling tool, the Estimating Supplies Program (ESP), for the development and management of Air Force medical Allowance Standards as a baseline for standardization throughout the services...

  16. Breast density estimation from high spectral and spatial resolution MRI

    Science.gov (United States)

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists' breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 was obtained between users' density estimations, and an intraclass correlation coefficient of 0.99 was obtained within users' density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists' BI-RADS ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
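The percentage-density step can be sketched as a masked voxel count: dense voxels divided by within-breast (non-skin) voxels. The threshold-based segmentation below is a stand-in assumption, not the paper's algorithm.

```python
import numpy as np

def percent_density(image, breast_mask, skin_mask, threshold):
    """Percentage breast density: fraction of within-breast, non-skin
    voxels whose intensity marks fibroglandular tissue. The intensity
    threshold stands in for whatever segmentation the pipeline uses."""
    tissue = breast_mask & ~skin_mask
    dense = tissue & (image >= threshold)
    return 100.0 * dense.sum() / tissue.sum()

# Toy 3-D volume: a 6x6x6 "breast" with one skin slab and a dense region.
img = np.zeros((10, 10, 10))
breast = np.zeros_like(img, dtype=bool)
breast[2:8, 2:8, 2:8] = True            # 216 breast voxels
skin = np.zeros_like(breast)
skin[2, 2:8, 2:8] = True                # outer layer, 36 skin voxels
img[5:8, 2:8, 2:8] = 1.0                # 108 "dense" voxels
print(percent_density(img, breast, skin, threshold=0.5))  # 60.0
```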

  17. Estimating added sugars in US consumer packaged goods: An application to beverages in 2007-08.

    Science.gov (United States)

    Ng, Shu Wen; Bricker, Gregory; Li, Kuo-Ping; Yoon, Emily Ford; Kang, Jiyoung; Westrich, Brian

    2015-11-01

    This study developed a method to estimate added sugar content in consumer packaged goods (CPG) that can keep pace with the dynamic food system. A team including registered dietitians, a food scientist and programmers developed a batch-mode ingredient matching and linear programming (LP) approach to estimate the amount of each ingredient needed in a given product to produce a nutrient profile similar to that reported on its nutrition facts label (NFL). Added sugar content was estimated for 7021 products available in 2007-08 that contain sugar from ten beverage categories. Of these, flavored waters had the lowest added sugar amounts (4.3 g/100 g), while sweetened dairy and dairy alternative beverages had the smallest percentage of added sugars (65.6% of total sugars; 33.8% of calories). Estimation validity was determined by comparing LP estimated values to NFL values, as well as in a small validation study. LP estimates appeared reasonable compared to NFL values for calories, carbohydrates and total sugars, and performed well in the validation test; however, further work is needed to obtain more definitive conclusions on the accuracy of added sugar estimates in CPGs. As nutrition labeling regulations evolve, this approach can be adapted to test for potential product-specific, category-level, and population-level implications.
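The ingredient-matching LP can be sketched with SciPy: choose nonnegative ingredient amounts that sum to 100 g, respect the label's descending ingredient order, and minimize the L1 deviation from the nutrition-facts values. The three-ingredient beverage and its per-gram nutrient values are illustrative assumptions, not the paper's ingredient database.

```python
import numpy as np
from scipy.optimize import linprog

# Per-gram nutrient profiles (kcal, carbs g, total sugars g): rough
# reference-table values for a toy sweetened beverage.
ingredients = ["water", "sugar", "citric acid"]
A = np.array([
    [0.00, 0.00, 0.00],   # water
    [3.87, 1.00, 1.00],   # granulated sugar
    [2.47, 0.00, 0.00],   # citric acid
]).T                      # rows: nutrients, cols: ingredients

label = np.array([42.0, 10.6, 10.6])  # kcal, carbs, sugars per 100 g product

n, m = A.shape[1], A.shape[0]
# Variables: [x_1..x_n, t_1..t_m]; minimize the sum of deviations t.
c = np.concatenate([np.zeros(n), np.ones(m)])
# |A x - label| <= t  →  A x - t <= label  and  -A x - t <= -label
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([label, -label])
# Label ingredient order (descending amounts): water >= sugar >= citric acid
order = np.array([[-1, 1, 0, 0, 0, 0], [0, -1, 1, 0, 0, 0]], dtype=float)
A_ub = np.vstack([A_ub, order])
b_ub = np.concatenate([b_ub, [0.0, 0.0]])
A_eq = np.concatenate([np.ones(n), np.zeros(m)]).reshape(1, -1)  # sums to 100 g
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[100.0])
amounts = dict(zip(ingredients, res.x[:n].round(2)))
print(amounts)  # in this toy product, added sugar equals the label's sugars
```

In a real run, each product's ingredient list and NFL values would populate `A`, `label`, and the order constraints in batch mode.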

  18. Test suite for image-based motion estimation of the brain and tongue

    Science.gov (United States)

    Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.

    2017-03-01

    Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that
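A minimal version of the synthesis idea, pairing a known tag pattern with a known deformation to obtain a ground-truth benchmark pair, can be sketched as follows. This assumes a simplified 1-1 SPAMM cosine modulation and nearest-neighbor warping rather than the paper's FE-driven pipeline.

```python
import numpy as np

def spamm_tags(shape, spacing_px, direction="x"):
    """SPAMM-like sinusoidal tag modulation: a cosine grating in [0, 1]
    that multiplies the anatomy image (simplified 1-1 SPAMM model)."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    coord = x if direction == "x" else y
    return 0.5 * (1 + np.cos(2 * np.pi * coord / spacing_px))

def deform(image, disp_x):
    """Warp columns by a known horizontal displacement field using a
    nearest-neighbor pull-back, yielding a ground-truth image pair."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    src = np.clip(np.round(x - disp_x).astype(int), 0, image.shape[1] - 1)
    return image[y, src]

anatomy = np.ones((64, 64))
tagged = anatomy * spamm_tags(anatomy.shape, spacing_px=8)
truth_disp = np.full(anatomy.shape, 3.0)   # known rigid 3-px shift
moved = deform(tagged, truth_disp)
print(tagged.shape, moved.shape)
```

Any tracking algorithm can then be run on the (tagged, moved) pair and scored against `truth_disp`; in the paper, the displacement fields come from FE kinematics rather than a rigid shift.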

  19. Estimation of sport fish harvest for risk and hazard assessment of environmental contaminants

    International Nuclear Information System (INIS)

    Poston, T.M.; Strenge, D.L.

    1989-01-01

    Consumption of contaminated fish flesh can be a significant route of human exposure to hazardous chemicals. Estimation of exposure resulting from the consumption of fish requires knowledge of fish consumption and contaminant levels in the edible portion of fish. Realistic figures of sport fish harvest are needed to estimate consumption. Estimates of freshwater sport fish harvest were developed from a review of 72 articles and reports. Descriptive statistics based on fishing pressure were derived from harvest data for four distinct groups of freshwater sport fish in three water types: streams, lakes, and reservoirs. Regression equations were developed to relate harvest to surface area fished where databases were sufficiently large. Other aspects of estimating human exposure to contaminants in fish flesh that are discussed include use of bioaccumulation factors for trace metals and organic compounds. Using the bioaccumulation factor and the concentration of contaminants in water as variables in the exposure equation may also lead to less precise estimates of tissue concentration. For instance, muscle levels of contaminants may not increase proportionately with increases in water concentrations, leading to overestimation of risk. In addition, estimates of water concentration may be variable or expressed in a manner that does not truly represent biological availability of the contaminant. These factors are discussed. 45 refs., 1 fig., 7 tabs
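The exposure equation discussed above has a standard screening-level form: tissue concentration = water concentration x bioaccumulation factor (BAF), and dose = tissue concentration x daily intake / body weight. The parameter values below are illustrative, not from the report.

```python
def fish_ingestion_dose(c_water, baf, intake_kg_day, body_weight_kg):
    """Screening-level ingestion dose from consuming fish flesh.
    c_water: contaminant concentration in water (mg/L)
    baf: bioaccumulation factor (L/kg)
    intake_kg_day: fish consumption rate (kg/day)
    body_weight_kg: receptor body weight (kg)
    Returns dose in mg/kg-day."""
    c_tissue = c_water * baf                           # mg/kg fish flesh
    return c_tissue * intake_kg_day / body_weight_kg   # mg/kg-day

# Example: 0.001 mg/L in water, BAF 1000 L/kg, 30 g fish/day, 70 kg adult.
dose = fish_ingestion_dose(0.001, 1000.0, 0.030, 70.0)
print(dose)  # ~4.3e-4 mg/kg-day
```

The abstract's caveats map directly onto these inputs: a BAF that overstates uptake at high water concentrations, or a water concentration that does not reflect bioavailability, propagates multiplicatively into the dose.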

  20. DNA copy number, including telomeres and mitochondria, assayed using next-generation sequencing

    Directory of Open Access Journals (Sweden)

    Jackson Stuart

    2010-04-01

    Abstract. Background: DNA copy number variations occur within populations and aberrations can cause disease. We sought to develop an improved lab-automatable, cost-efficient, accurate platform to profile DNA copy number. Results: We developed a sequencing-based assay of nuclear, mitochondrial, and telomeric DNA copy number that draws on the unbiased nature of next-generation sequencing and incorporates techniques developed for RNA expression profiling. To demonstrate this platform, we assayed UMC-11 cells using 5 million 33-nt reads and found tremendous copy number variation, including regions of single and homogeneous deletions and amplifications to 29 copies; 5 times more mitochondria and 4 times less telomeric sequence than a pool of non-diseased, blood-derived DNA; and that UMC-11 was derived from a male individual. Conclusion: The described assay outputs absolute copy number, outputs an error estimate (p-value), and is more accurate than array-based platforms at high copy number. The platform enables profiling of mitochondrial levels and telomeric length. The assay is lab-automatable and has a genomic resolution and cost that are tunable based on the number of sequence reads.
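The core depth-ratio calculation behind sequencing-based copy-number profiling can be sketched as follows. This is a simplified normalization (no GC or mappability correction, no error estimate), not the paper's full assay.

```python
import numpy as np

def copy_number(sample_counts, reference_counts, ref_ploidy=2):
    """Estimate absolute copy number per genomic bin: normalize each bin's
    read count by total depth, then scale the sample/reference depth ratio
    by the reference ploidy (simplified depth-ratio copy-number calling)."""
    s = np.asarray(sample_counts, float)
    r = np.asarray(reference_counts, float)
    ratio = (s / s.sum()) / (r / r.sum())
    return ref_ploidy * ratio

# Toy bins: diploid, amplified, homozygously deleted, diploid.
sample = [100, 400, 0, 100]
reference = [150, 150, 150, 150]   # e.g., pooled blood-derived DNA
print(copy_number(sample, reference).round(2))
```

Deleted bins fall toward 0 copies and amplified bins scale up proportionally; with deeper sequencing the bins can be made smaller, which is the resolution/cost trade-off the abstract describes.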