WorldWideScience

Sample records for programs estimation approach

  1. Value drivers: an approach for estimating health and disease management program savings.

    Science.gov (United States)

    Phillips, V L; Becker, Edmund R; Howard, David H

    2013-12-01

    Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates depends on the type of evidence used to gauge the intervention effect, which generates changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.

  2. A dynamic programming approach to missing data estimation using neural networks

    CSIR Research Space (South Africa)

    Nelwamondo, FV

    2013-01-01

    Full Text Available method where dynamic programming is not used. This paper also suggests a different way of formulating a missing data problem such that dynamic programming is applicable to estimate the missing data....

  3. A linear programming approach for placement of applicants to academic programs

    OpenAIRE

    Kassa, Biniyam Asmare

    2013-01-01

    This paper reports a linear programming approach for placement of applicants to study programs developed and implemented at the college of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing required man hour as well as the time it takes to announce placement decisions. Compared to the previous manual system where only one or two placement criteria were considered, the ...

  4. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them into a unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers recursive estimation of dynamic mixtures that is free of iterative processes and as close to analytical solutions as possible. In addition, these methods can be used online and perform learning simultaneously, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. The codes are implemented in an open-source platform for engineering computations, and they serve to illustrate the theory and demonstrate the work of the included algorithms.

  5. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that fit the mean of the conflicting training data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
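
    To illustrate the Pareto-optimality idea at the heart of JuPOETs (which itself is a Julia package that combines it with simulated annealing), the sketch below ranks candidate parameter sets by Pareto dominance on hypothetical objective values; it is a generic illustration, not the JuPOETs implementation.

```python
import numpy as np

def pareto_rank(objectives):
    """Rank candidate parameter sets by Pareto dominance.

    objectives: (n_candidates, n_objectives) array of training errors
    (lower is better). Rank 0 is the non-dominated (Pareto) front, rank 1
    the front found after removing rank 0, and so on.
    """
    obj = np.asarray(objectives, dtype=float)
    n = obj.shape[0]
    ranks = np.full(n, -1, dtype=int)
    remaining = np.arange(n)
    current = 0
    while remaining.size:
        front = []
        for i in remaining:
            dominated = any(
                np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                for j in remaining if j != i
            )
            if not dominated:
                front.append(i)
        ranks[front] = current
        remaining = np.array([i for i in remaining if i not in front])
        current += 1
    return ranks

# Hypothetical errors of five parameter sets on two conflicting objectives.
errors = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [5.0, 5.0]])
print(pareto_rank(errors))   # -> [0 0 0 1 2]
```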

  6. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    Science.gov (United States)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
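
    As context for the fuzzy extension, the baseline the authors compare against is ordinary maximum-likelihood multinomial logistic regression. A minimal sketch on synthetic data (hypothetical throughout; this is not the authors' fuzzy multi-objective estimator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))                        # small synthetic dataset
beta = np.array([[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]])
probs = np.exp(X @ beta.T)
probs /= probs.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])   # 3-class outcome

# Maximum-likelihood multinomial (softmax) fit; a large C approximates unpenalized ML.
fit = LogisticRegression(multi_class="multinomial", C=1e6, max_iter=1000)
fit.fit(X, y)
print(fit.coef_)        # ML point estimates of the class coefficients
print(fit.score(X, y))  # in-sample accuracy
```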

  7. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they offer higher spatial resolution compared to existing methods based on the ML criterion.
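
    For orientation, in the single-source case the ML criterion reduces to maximizing a(θ)ᴴ R a(θ) over the ULA steering vector a(θ), which the paper then recasts as an SOS/SDP problem. Below is a plain grid-search sketch of that single-source ML criterion on simulated data (a baseline only, not the SOS/SDP solver):

```python
import numpy as np

def ml_doa_single_source(snapshots, n_sensors, d_over_lambda=0.5):
    """Grid-search ML DOA for one source with a uniform linear array.

    snapshots: (n_sensors, n_snapshots) complex array data.
    For a single source in white noise the ML estimate maximizes
    a(theta)^H R a(theta), with R the sample covariance.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    best_theta, best_val = 0.0, -np.inf
    for theta in np.deg2rad(np.arange(-90, 90.1, 0.1)):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
        val = np.real(a.conj() @ R @ a) / n_sensors
        if val > best_val:
            best_theta, best_val = theta, val
    return np.rad2deg(best_theta)

# Hypothetical data: one source at 20 degrees, 8 sensors, 50 snapshots, ~10 dB SNR.
rng = np.random.default_rng(1)
m, n = 8, 50
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.deg2rad(20.0)))
s = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(5)
noise = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
X = np.outer(a0, s) + noise
print(ml_doa_single_source(X, m))   # should be close to 20
```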

  8. A linear programming approach for placement of applicants to academic programs.

    Science.gov (United States)

    Kassa, Biniyam Asmare

    2013-01-01

    This paper reports a linear programming approach for placement of applicants to study programs developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man-hours as well as the time it takes to announce placement decisions. Compared to the previous manual system where only one or two placement criteria were considered, the new approach allows the college's management to easily incorporate additional placement criteria, if needed. Comparison of our approach against manually constructed placement decisions based on actual data for the 2012/13 academic year suggested that about 93 percent of the placements from our model concur with the actual placement decisions. For the remaining 7 percent of placements, however, the actual placements made by the manual system display inconsistencies of decisions judged against the very criteria intended to guide placement decisions by the college's program management office. Overall, the new approach proves to be a significant improvement over the manual system in terms of efficiency of the placement process and the quality of placement decisions.
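
    The placement problem can be written as an assignment-type linear program: variables x[i, j] assign applicant i to program j, subject to one placement per applicant and to program capacities. A toy sketch with hypothetical scores and capacities (not the college's actual criteria or model):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 applicants, 2 programs with capacity 2 each.
# score[i, j] = merit/preference score of applicant i for program j (higher is better).
score = np.array([[9, 4],
                  [7, 8],
                  [3, 6],
                  [5, 5]], dtype=float)
n_app, n_prog = score.shape
capacity = [2, 2]

c = -score.ravel()                      # maximize total score -> minimize its negative

# Each applicant is placed exactly once.
A_eq = np.zeros((n_app, n_app * n_prog))
for i in range(n_app):
    A_eq[i, i * n_prog:(i + 1) * n_prog] = 1
b_eq = np.ones(n_app)

# Program capacities are not exceeded.
A_ub = np.zeros((n_prog, n_app * n_prog))
for j in range(n_prog):
    A_ub[j, j::n_prog] = 1

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
assignment = res.x.reshape(n_app, n_prog).round()   # this LP relaxation is integral
print(assignment)
```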

  9. Sampling-based approaches to improve estimation of mortality among patient dropouts: experience from a large PEPFAR-funded program in Western Kenya.

    Directory of Open Access Journals (Sweden)

    Constantin T Yiannoutsos

    Full Text Available Monitoring and evaluation (M&E) of HIV care and treatment programs is impacted by losses to follow-up (LTFU) in the patient population. The severity of this effect is undeniable but its extent unknown. Tracing all lost patients addresses this, but census methods are not feasible in programs involving rapid scale-up of HIV treatment in the developing world. Sampling-based approaches and statistical adjustment are the only scalable methods permitting accurate estimation of M&E indices. In a large antiretroviral therapy (ART) program in western Kenya, we assessed the impact of LTFU on estimating patient mortality among 8,977 adult clients, of whom 3,624 were LTFU. Overall, dropouts were more likely male (36.8% versus 33.7%; p = 0.003), and younger than non-dropouts (35.3 versus 35.7 years old; p = 0.020), with lower median CD4 count at enrollment (160 versus 189 cells/ml; p<0.001) and WHO stage 3-4 disease (47.5% versus 41.1%; p<0.001). Urban clinic clients were 75.0% of non-dropouts but 70.3% of dropouts (p<0.001). Of the 3,624 dropouts, 1,143 were sought and 621 had their vital status ascertained. Statistical techniques were used to adjust mortality estimates based on information obtained from located LTFU patients. Observed mortality estimates one year after enrollment were 1.7% (95% CI 1.3%-2.0%), revised to 2.8% (2.3%-3.1%) when deaths discovered through outreach were added, and adjusted to 9.2% (7.8%-10.6%) and 9.9% (8.4%-11.5%) through statistical modeling, depending on the method used. The estimates 12 months after ART initiation were 1.7% (1.3%-2.2%), 3.4% (2.9%-4.0%), 10.5% (8.7%-12.3%) and 10.7% (8.9%-12.6%), respectively. Conclusions/Significance: Assessment of the impact of LTFU is critical in program M&E, as estimated mortality based on passive monitoring may underestimate true mortality by up to 80%. This bias can be ameliorated by tracing a sample of dropouts and statistically adjusting the mortality estimates to properly evaluate and guide large
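
    The essence of the sampling-based correction is to weight the vital-status outcomes found in the traced subsample of dropouts back up to all dropouts. A deliberately simplified sketch with illustrative numbers (only the cohort sizes 8,977 / 3,624 / 621 come from the abstract; the death counts are hypothetical, and the study itself used more elaborate statistical models):

```python
# Simplified sampling-based adjustment of program mortality for losses to follow-up.
n_active = 5353          # patients retained in care (8,977 minus 3,624 dropouts)
deaths_active = 90       # deaths observed through passive follow-up (hypothetical)
n_ltfu = 3624            # patients lost to follow-up
n_traced = 621           # dropouts whose vital status was ascertained by outreach
deaths_traced = 60       # deaths found among the traced dropouts (hypothetical)

naive = deaths_active / (n_active + n_ltfu)
# Assume traced dropouts are representative of all dropouts (simple weighting).
deaths_ltfu_est = deaths_traced / n_traced * n_ltfu
adjusted = (deaths_active + deaths_ltfu_est) / (n_active + n_ltfu)
print(f"naive mortality:    {naive:.1%}")
print(f"adjusted mortality: {adjusted:.1%}")
```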

  10. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows us to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h.

  11. Estimated emission reductions from California's enhanced Smog Check program.

    Science.gov (United States)

    Singer, Brett C; Wenzel, Thomas P

    2003-06-01

    The U.S. Environmental Protection Agency requires that states evaluate the effectiveness of their vehicle emissions inspection and maintenance (I/M) programs. This study demonstrates an evaluation approach that estimates mass emission reductions over time and includes the effect of I/M on vehicle deterioration. It includes a quantitative assessment of benefits from pre-inspection maintenance and repairs and accounts for the selection bias effect that occurs when intermittent high emitters are tested. We report estimates of one-cycle emission benefits of California's Enhanced Smog Check program, ca. 1999. Program benefits equivalent to metric tons per day of prevented emissions were calculated with a "bottom-up" approach that combined average per vehicle reductions in mass emission rates (g/gal) with average per vehicle activity, resolved by model year. Accelerated simulation mode test data from the statewide vehicle information database (VID) and from roadside Smog Check testing were used to determine 2-yr emission profiles of vehicles passing through Smog Check and infer emission profiles that would occur without Smog Check. The number of vehicles participating in Smog Check was also determined from the VID. We estimate that in 1999 Smog Check reduced tailpipe emissions of HC, CO, and NO(x) by 97, 1690, and 81 t/d, respectively. These correspond to 26, 34, and 14% of the HC, CO, and NO(x) that would have been emitted by vehicles in the absence of Smog Check. These estimates are highly sensitive to assumptions about vehicle deterioration in the absence of Smog Check. Considering the estimated uncertainty in these assumptions yields a range for calculated benefits: 46-128 t/d of HC, 860-2200 t/d of CO, and 60-91 t/d of NO(x). Repair of vehicles that failed an initial, official Smog Check appears to be the most important mechanism of emission reductions, but pre-inspection maintenance and repair also contributed substantially. Benefits from removal of nonpassing
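
    The "bottom-up" calculation described above multiplies per-vehicle reductions in mass emission rates (g/gal) by per-vehicle fuel use and fleet counts, resolved by model year. A toy version with entirely hypothetical inputs:

```python
# Bottom-up emission-benefit calculation (hypothetical inputs, one pollutant).
# Per model-year group: (g/gal reduction) x (gallons per vehicle per day) x (vehicles).
groups = [
    # (HC reduction g/gal, gal per vehicle per day, number of vehicles)
    (4.0, 1.6, 2_000_000),   # older vehicles: larger per-vehicle reductions
    (1.5, 1.4, 5_000_000),
    (0.3, 1.2, 8_000_000),   # newer vehicles: small reductions
]

grams_per_day = sum(dg * gal * n for dg, gal, n in groups)
metric_tons_per_day = grams_per_day / 1e6
print(f"HC benefit: {metric_tons_per_day:.0f} t/d")
```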

  12. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
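
    A typical CER of the kind discussed is a power law fitted to analogous historical systems, e.g. cost = a · mass^b. A minimal sketch on hypothetical data (illustrative values, not an actual cost model):

```python
import numpy as np

# Hypothetical historical data: dry mass (kg) and development cost ($M).
mass = np.array([800, 1500, 2200, 3500, 5000], dtype=float)
cost = np.array([120, 210, 280, 430, 560], dtype=float)

# Fit cost = a * mass**b by linear regression in log space.
b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
a = np.exp(log_a)
print(f"CER: cost [$M] = {a:.2f} * mass^{b:.2f}")

# Use the CER to estimate the cost of a new 4,200 kg system.
print(f"estimate: {a * 4200**b:.0f} $M")
```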

  13. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  14. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve because the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.
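
    The first of the three approaches, ordinary extreme value statistics, amounts to fitting an extreme value distribution to annual maxima and reading off a low-probability quantile. A minimal sketch with synthetic data (illustrative only; the SCHADEX and PMF approaches are not reproduced here):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic annual maximum discharges (m^3/s) for ~50 years.
annual_max = genextreme.rvs(c=-0.1, loc=300, scale=80, size=50, random_state=rng)

# Fit a GEV to the annual maxima (ordinary extreme value statistics).
c, loc, scale = genextreme.fit(annual_max)

# 1000-year flood = quantile with annual exceedance probability 1/1000.
q1000 = genextreme.ppf(1 - 1.0 / 1000, c, loc=loc, scale=scale)
print(f"estimated 1000-year flood: {q1000:.0f} m^3/s")
```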

  15. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  16. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from the loss of information due to the ignorance of correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on the asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computation complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  17. Implementing corporate wellness programs: a business approach to program planning.

    Science.gov (United States)

    Helmer, D C; Dunn, L M; Eaton, K; Macedonio, C; Lubritz, L

    1995-11-01

    1. Support of key decision makers is critical to the successful implementation of a corporate wellness program. Therefore, the program implementation plan must be communicated in a format and language readily understood by business people. 2. A business approach to corporate wellness program planning provides a standardized way to communicate the implementation plan. 3. A business approach incorporates the program planning components in a format that ranges from general to specific. This approach allows for flexibility and responsiveness to changes in program planning. 4. Components of the business approach are the executive summary, purpose, background, ground rules, approach, requirements, scope of work, schedule, and financials.

  18. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.

  19. Survey of engineering computational methods and experimental programs for estimating supersonic missile aerodynamic characteristics

    Science.gov (United States)

    Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.

    1982-01-01

    This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.

  20. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

    Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation by using an approach that utilizes fuzzy regression functions with support vector machines (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and apply the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is affected by neither outliers nor the over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial in long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and the over-fitting problem. • Estimation accuracy of the model is superior to several existing alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.
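
    As a rough illustration of the support-vector component with a Gaussian (RBF) kernel, the sketch below fits a plain support vector regression to synthetic radiation data; the fuzzy regression functions layer of FRF-SVM is not reproduced, and all predictors and coefficients are hypothetical:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Hypothetical predictors: latitude (deg), altitude (m), mean sunshine duration (h/day).
X = np.column_stack([
    rng.uniform(36, 42, 200),
    rng.uniform(0, 2000, 200),
    rng.uniform(4, 10, 200),
])
# Hypothetical yearly-mean daily global radiation (kWh/m^2/day).
y = 1.5 + 0.45 * X[:, 2] - 0.0002 * X[:, 1] + rng.normal(0, 0.2, 200)

# Support vector regression with a Gaussian (RBF) kernel, after standardization.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```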

  1. A criterion for NPP safety estimation: the SALP program

    International Nuclear Information System (INIS)

    Gorynina, L.V.; Tishchenko, V.A.

    1992-01-01

    The SALP program adopted by the NRC is considered. The program is intended for the acquisition and estimation of data on the activities of firms holding licences for NPP operation and/or construction. The criteria for estimation and the mechanism for determining the rating of a firm's activity quality are discussed

  2. Indirect estimators in US federal programs

    CERN Document Server

    1996-01-01

    In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators, based on case studies from a variety of federal programs. As a result, many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice and addresses some of the pitfalls of these methods.

  3. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  4. Low-income DSM Programs: Methodological approach to determining the cost-effectiveness of coordinated partnerships

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M.A.; Hill, L.J.

    1994-05-01

    As governments at all levels become increasingly budget-conscious, expenditures on low-income, demand-side management (DSM) programs are being evaluated more on the basis of efficiency at the expense of equity considerations. Budgetary pressures have also caused government agencies to emphasize resource leveraging and coordination with electric and gas utilities as a means of sharing the expenses of low-income programs. The increased involvement of electric and gas utilities in coordinated low-income DSM programs, in turn, has resulted in greater emphasis on estimating program cost-effectiveness. The objective of this study is to develop a methodological approach to estimate the cost-effectiveness of coordinated low-income DSM programs, given the special features that distinguish these programs from other utility-operated DSM programs. The general approach used in this study was to (1) select six coordinated low-income DSM programs from among those currently operating across the United States, (2) examine the main features of these programs, and (3) determine the conceptual and pragmatic problems associated with estimating their cost-effectiveness. Three types of coordination between government and utility cosponsors were identified. At one extreme, local agencies operate "parallel" programs, each of which is fully funded by a single sponsor (e.g., one funded by the U.S. Department of Energy and the other by a utility). At the other extreme are highly "coupled" programs that capitalize on the unique capabilities and resources offered by each cosponsor. In these programs, agencies employ a combination of utility and government funds to deliver weatherization services as part of an integrated effort. In between are "supplemental" programs that utilize resources to supplement the agency's government-funded weatherization, with no changes to the operation of that program.

  5. Avoided cost estimation and post-reform funding allocation for California's energy efficiency programs

    International Nuclear Information System (INIS)

    Baskette, C.; Horii, B.; Price, S.; Kollman, E.

    2006-01-01

    This paper summarizes the first comprehensive estimation of California's electricity avoided costs since the state reformed its electricity market. It describes avoided cost estimates that vary by time and location, thus facilitating targeted design, funding, and marketing of demand-side management (DSM) and energy efficiency (EE) programs that could not have occurred under the previous methodology of system average cost estimation. The approach, data, and results reflect two important market structure changes: (a) wholesale spot and forward markets now supply electricity commodities to load serving entities; and (b) the evolution of an emissions market that internalizes and prices some of the externalities of electricity generation. The paper also introduces the multiplier effect of a price reduction due to DSM/EE implementation on electricity bills of all consumers. It affirms that area- and time-specific avoided cost estimates can improve the allocation of the state's public funding for DSM/EE programs, a finding that could benefit other parts of North America (e.g. Ontario and New York), which have undergone electricity deregulation. (author)

  6. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  7. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case illustrates that
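
    The contrast between the two estimator families can be seen on a toy two-period panel in which a time-invariant unobservable drives selection into the program: a post-period difference in means absorbs that bias, while a fixed-effects (within) estimator differences it out. A hypothetical simulation (not the Ecuador or Russia data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 500
# Hypothetical parcels: time-invariant unobservable "quality" drives selection.
quality = rng.normal(size=n)
treated = (quality + rng.normal(size=n) > 0).astype(int)
effect = -0.05                                 # true program effect on cover change

def forest_cover(t):
    # Baseline cover depends on quality; a common trend and the treatment act over time.
    return 0.8 + 0.1 * quality - 0.03 * t + effect * treated * t + rng.normal(0, 0.01, n)

df = pd.concat(
    [pd.DataFrame({"unit": np.arange(n), "t": t, "treated": treated,
                   "cover": forest_cover(t)}) for t in (0, 1)]
)

# (1) Cross-sectional difference in means at t = 1 (ignores unobserved quality).
post = df[df.t == 1]
diff_means = post.cover[post.treated == 1].mean() - post.cover[post.treated == 0].mean()

# (2) Fixed-effects (within) estimator: difference the two periods per unit, which
# removes time-invariant unobservables, then compare changes across groups.
change = df[df.t == 1].set_index("unit").cover - df[df.t == 0].set_index("unit").cover
fe = change[treated == 1].mean() - change[treated == 0].mean()

print(f"difference in means: {diff_means:.3f}  fixed effects: {fe:.3f}  true: {effect}")
```

    In this construction the difference in means is contaminated by the selection on quality, while the within estimator recovers a value close to the true effect, mirroring the Russia result described above.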

  8. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

    Full Text Available Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case

  9. Development of computer program for estimating decommissioning cost - 59037

    International Nuclear Information System (INIS)

    Kim, Hak-Soo; Park, Jong-Kil

    2012-01-01

    The programs for estimating decommissioning cost have been developed for many different purposes and applications. The estimation of decommissioning cost requires a large amount of data, such as unit cost factors, plant area and its inventory, waste treatment, etc. This makes it difficult to use manual calculation or typical spreadsheet software such as Microsoft Excel. The cost estimation for eventual decommissioning of nuclear power plants is a prerequisite for safe, timely and cost-effective decommissioning. To estimate the decommissioning cost more accurately and systematically, KHNP, Korea Hydro and Nuclear Power Co. Ltd, developed a decommissioning cost estimating computer program called 'DeCAT-Pro', which stands for Decommissioning Cost Assessment Tool - Professional. (Hereinafter called 'DeCAT') This program allows users to easily assess the decommissioning cost with various decommissioning options. Also, this program provides detailed reporting for decommissioning funding requirements as well as detailed project schedules, cash-flow, staffing plan and levels, and waste volumes by waste classifications and types. KHNP is planning to implement functions for estimating the plant inventory using 3-D technology and for classifying the conditions of radwaste disposal and transportation automatically. (authors)

  10. KERNELHR: A program for estimating animal home ranges

    Science.gov (United States)

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
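
    The fixed-kernel calculation that KERNELHR automates - estimate a utilization distribution from relocation points and take the area inside a probability contour - can be sketched with a generic 2-D Gaussian KDE (hypothetical relocations; not the KERNELHR code):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
# Hypothetical animal relocations (km), a mixture of two activity centres.
pts = np.vstack([rng.normal([0, 0], 0.5, (120, 2)),
                 rng.normal([2, 1], 0.7, (80, 2))]).T    # shape (2, n_points)

kde = gaussian_kde(pts)                     # fixed-kernel utilization distribution (UD)

# Evaluate the UD on a grid and find the area inside the 95% volume contour.
x, y = np.mgrid[-3:5:200j, -3:4:200j]
cell = (x[1, 0] - x[0, 0]) * (y[0, 1] - y[0, 0])
density = kde(np.vstack([x.ravel(), y.ravel()]))
order = np.sort(density)[::-1]
cum = np.cumsum(order) * cell
idx = min(np.searchsorted(cum, 0.95), len(order) - 1)
level = order[idx]                          # density level enclosing 95% of the UD
area_95 = (density >= level).sum() * cell
print(f"95% kernel home-range area: {area_95:.2f} km^2")
```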

  11. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    Full Text Available This work is about estimating human age automatically through analysis of facial images. It has got a lot of real-world applications. Due to prompt advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation via faces in computer is one of the dominant topics these days. This is due to widespread real-world applications, in areas of biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, along with cosmetology. As it is difficult to estimate the exact age, this system is to estimate a certain range of ages. Four sets of classifications have been used to differentiate a person's data into one of the different age groups. The uniqueness about this study is the usage of two technologies i.e., Artificial Neural Networks (ANN) and Gene Expression Programing (GEP) to estimate the age and then compare the results. New methodologies like Gene Expression Programing (GEP) have been explored here and significant results were found. The dataset has been developed to provide more efficient results by superior preprocessing methods. This proposed approach has been developed, tested and trained using both the methods. A public data set was used to test the system, FG-NET. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available database of FG-NET.

  12. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case

  13. Multigene Genetic Programming for Estimation of Elastic Modulus of Concrete

    Directory of Open Access Journals (Sweden)

    Alireza Mohammadi Bayazidi

    2014-01-01

    Full Text Available This paper presents a new multigene genetic programming (MGGP) approach for estimation of elastic modulus of concrete. The MGGP technique models the elastic modulus behavior by integrating the capabilities of standard genetic programming and classical regression. The main aim is to derive precise relationships between the tangent elastic moduli of normal and high strength concrete and the corresponding compressive strength values. Another important contribution of this study is to develop a generalized prediction model for the elastic moduli of both normal and high strength concrete. Numerous concrete compressive strength test results are obtained from the literature to develop the models. A comprehensive comparative study is conducted to verify the performance of the models. The proposed models are superior to the existing traditional models, as well as those derived using other powerful soft computing tools.
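
    For comparison with the MGGP-derived relationships, a conventional baseline is a simple power law between elastic modulus and compressive strength (design codes use forms like E proportional to the square root of f_c). A least-squares sketch on hypothetical test data, not the paper's dataset or model:

```python
import numpy as np

# Hypothetical test data: compressive strength f_c (MPa) and elastic modulus E (GPa).
fc = np.array([20, 30, 40, 55, 70, 90], dtype=float)
E = np.array([21.5, 26.0, 30.0, 34.5, 38.5, 43.0])

# Fit E = a * fc**b in log space (a conventional baseline, not the MGGP model).
b, log_a = np.polyfit(np.log(fc), np.log(E), 1)
a = np.exp(log_a)
print(f"E [GPa] ~ {a:.2f} * fc^{b:.2f}")
print("predicted E at fc = 50 MPa:", round(a * 50**b, 1), "GPa")
```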

  14. EDIN0613P weight estimating program. [for launch vehicles]

    Science.gov (United States)

    Hirsch, G. N.

    1976-01-01

    The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program while remaining cognizant of the limits imposed upon output depth and accuracy by utilization of generalized input, this program concept can be a useful tool for estimating purposes at the conceptual design stage of a launch vehicle.

  15. The cost of crime to society: new crime-specific estimates for policy and program evaluation.

    Science.gov (United States)

    McCollister, Kathryn E; French, Michael T; Fang, Hai

    2010-04-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime.

  16. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we...

  17. Multi-pitch Estimation using Semidefinite Programming

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Vandenberghe, Lieven

    2017-01-01

    Multi-pitch estimation concerns the problem of estimating the fundamental frequencies (pitches) and amplitudes/phases of multiple superimposed harmonic signals with application in music, speech, vibration analysis etc. In this paper we formulate a complex-valued multi-pitch estimator via a semidefinite programming representation of an atomic decomposition over a continuous dictionary of complex exponentials and extend this to real-valued data via a real semidefinite program with the same dimensions (i.e. half the size). We further impose a continuous frequency constraint naturally occurring from assuming a Nyquist sampled signal by adding an additional semidefinite constraint. We show that the proposed estimator has superior performance compared to state-of-the-art methods for separating two closely spaced fundamentals and approximately achieves the asymptotic Cramér-Rao lower bound.

  18. Counting the cost: estimating the economic benefit of pedophile treatment programs.

    Science.gov (United States)

    Shanahan, M; Donato, R

    2001-04-01

    The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs, incorporating both the tangible and intangible costs of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that impact on the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison, cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.
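
    The underlying cost-benefit arithmetic compares program cost per offender with the avoided victim costs implied by a reduction in recidivism. A toy calculation with illustrative figures (not the paper's parameter estimates):

```python
# Illustrative cost-benefit calculation for an in-prison treatment program.
program_cost_per_offender = 10_000   # AUD, hypothetical
baseline_recidivism = 0.25           # re-offence probability without treatment (hypothetical)
treated_recidivism = 0.17            # re-offence probability with treatment (hypothetical)
offences_per_recidivist = 2.0        # further offences per recidivist (hypothetical)
cost_per_offence = 100_000           # tangible + intangible victim cost (hypothetical)

avoided_offences = (baseline_recidivism - treated_recidivism) * offences_per_recidivist
benefit_per_offender = avoided_offences * cost_per_offence
net_benefit = benefit_per_offender - program_cost_per_offender
print(f"benefit per treated offender: ${benefit_per_offender:,.0f}")
print(f"net benefit: ${net_benefit:,.0f}")
```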

  19. MoisturEC: a new R program for moisture content estimation from electrical conductivity data

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D.; Werkema, Dale D.; Lane, John W.

    2018-01-01

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data‐analysis tools are needed to “translate” geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user‐friendly tools are required to fully capitalize on the potential of geophysical information for soil‐moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two‐ and three‐dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach.
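
    The EC-to-moisture "translation" is commonly based on Archie's law, sigma_bulk = sigma_w * phi^m * S^n, inverted for saturation S and converted to volumetric moisture theta = S * phi. A minimal Python sketch of that conversion (MoisturEC itself is an R program; the parameter values here are assumed):

```python
import numpy as np

def moisture_from_ec(sigma_bulk, sigma_w=0.05, phi=0.35, m=1.5, n=2.0):
    """Invert Archie's law for volumetric moisture content.

    sigma_bulk: bulk electrical conductivity (S/m), e.g. from resistivity imaging
    sigma_w:    pore-water conductivity (S/m), assumed or lab-measured
    phi:        porosity; m, n: cementation and saturation exponents
    Returns theta = S * phi, with saturation clipped to [0, 1].
    """
    S = (np.asarray(sigma_bulk) / (sigma_w * phi**m)) ** (1.0 / n)
    return np.clip(S, 0.0, 1.0) * phi

# Hypothetical bulk EC values (S/m) from an electrical resistivity survey.
ec = np.array([0.002, 0.005, 0.010, 0.015])
print(moisture_from_ec(ec))
```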

  20. MODERN APPROACHES TO INTELLECTUAL PROPERTY COST ESTIMATION UNDER CRISIS CONDITIONS FROM CONSUMER QUALITY PRESERVATION VIEWPOINT

    Directory of Open Access Journals (Sweden)

    I. N. Alexandrov

    2011-01-01

    Full Text Available Various intellectual property (IP) estimation approaches and innovations in this field are discussed. Problem situations and «bottlenecks» in the economic mechanism of transformation of innovations into useful products and services are defined. The main international IP evaluation methods are described, particular attention being paid to the «Quick Inside» program, described as a latest-generation global expert system. IP income and expense evaluation methods used in domestic practice are discussed. The possibility of using the Black-Scholes option model to estimate the cost of non-material (intangible) assets is studied.
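
    The option-based valuation mentioned above rests on the standard Black-Scholes call formula; a minimal implementation with hypothetical inputs, treating commercialization as a call option on the project's payoff:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Hypothetical IP valuation: PV of expected commercialization cash flows S,
# required investment K, 3-year window, 5% risk-free rate, 40% volatility.
print(round(black_scholes_call(S=1.2e6, K=1.0e6, T=3, r=0.05, sigma=0.40), 0))
```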

  1. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

    Population estimation provides important information that is required for insect control planning, especially control with SIT. Moreover, it can be used to evaluate the efficiency of a control method. Due to the complexity of the calculations, population estimation with mark and recapture methods has not been used widely. Therefore, this program was developed in QBasic with the purpose of making the estimation more accurate and easier. The program covers 6 methods, following Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's methods. The results are compared with the original methods and found to be accurate and easier to apply
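
    The simplest member of this family of estimators is the two-sample Lincoln-Petersen estimate in its bias-corrected (Chapman) form, shown below with hypothetical release and recapture counts; the Jolly-Seber and other multi-sample methods in the program generalize this idea:

```python
def chapman_estimate(marked_first, caught_second, recaptured):
    """Chapman's (bias-corrected Lincoln-Petersen) population estimate.

    marked_first:  animals marked and released in the first sample (M)
    caught_second: animals caught in the second sample (C)
    recaptured:    marked animals found in the second sample (R)
    Returns the point estimate and its standard error.
    """
    n_hat = (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1
    var = ((marked_first + 1) * (caught_second + 1)
           * (marked_first - recaptured) * (caught_second - recaptured)
           / ((recaptured + 1) ** 2 * (recaptured + 2)))
    return n_hat, var**0.5

# Hypothetical release of 500 marked insects, 400 caught later, 40 recaptures.
n_hat, se = chapman_estimate(500, 400, 40)
print(f"estimated population: {n_hat:.0f} +/- {1.96 * se:.0f}")
```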

  2. Decommissioning Cost Estimating - The 'PRICE' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  3. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  4. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  5. New approach in the evaluation of a fitness program at a worksite.

    Science.gov (United States)

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach to this evaluation. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented in a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program, provided the health production function can be estimated.
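
    As a rough illustration of the "neoclassical firm" idea (with a made-up cubic production function, willingness-to-pay and unit cost, not the values estimated in the study), the profit-maximising number of classes can be found by a simple search.

```python
# Profit = willingness-to-pay x health output(n) - cost per class x n;
# coefficients below are hypothetical placeholders.
def health_output(n, b=2.0, c=0.9, d=-0.02):
    return b * n + c * n ** 2 + d * n ** 3     # cubic health production function

def profit(n, wtp_per_unit=1.0, cost_per_class=10.0):
    return wtp_per_unit * health_output(n) - cost_per_class * n

best_n = max(range(1, 61), key=profit)         # search over 1..60 classes
print(f"Profit-maximising number of classes: {best_n}")
```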

  6. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
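
    The kind of trade-off examined above can be explored with a toy Monte Carlo simulation. The sketch below scores random points within a subsample of images for a hypothetical true cover value; it ignores spatial aggregation of the biota, so it illustrates only the mechanics, not the study's simulation design.

```python
# Simulate estimating percent cover from n_images images with
# points_per_image random points scored in each image.
import random

def survey_cover(true_cover=0.12, n_images=50, points_per_image=25):
    """Return one percent-cover estimate from a simulated survey."""
    image_estimates = []
    for _ in range(n_images):
        hits = sum(random.random() < true_cover for _ in range(points_per_image))
        image_estimates.append(hits / points_per_image)
    return sum(image_estimates) / n_images

random.seed(1)
reps = [survey_cover() for _ in range(500)]
mean = sum(reps) / len(reps)
sd = (sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
print(f"Mean cover estimate {mean:.3f}, between-survey SD {sd:.3f}")
```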

  7. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    Science.gov (United States)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  8. Which Introductory Programming Approach Is Most Suitable for Students: Procedural or Visual Programming?

    Science.gov (United States)

    Eid, Chaker; Millham, Richard

    2012-01-01

    In this paper, we discuss the visual programming approach to teaching introductory programming courses and then compare this approach with that of procedural programming. The involved cognitive levels of students, as beginning students are introduced to different types of programming concepts, are correlated to the learning processes of…

  9. MoisturEC: A New R Program for Moisture Content Estimation from Electrical Conductivity Data.

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D; Werkema, Dale; Lane, John W

    2018-03-06

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data-analysis tools are needed to "translate" geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user-friendly tools are required to fully capitalize on the potential of geophysical information for soil-moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two- and three-dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
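
    For readers unfamiliar with the petrophysical translation involved, the sketch below applies an Archie-type relation (neglecting surface conduction) to convert bulk EC to volumetric moisture content; the parameter values are illustrative assumptions, not defaults of MoisturEC.

```python
# Archie's law: ec_bulk = ec_fluid * porosity**m * saturation**n,
# so moisture = porosity * saturation (surface conduction neglected).
def moisture_from_ec(ec_bulk, ec_fluid, porosity, m=1.8, n=2.0):
    """Return volumetric moisture content estimated from bulk EC."""
    saturation = (ec_bulk / (ec_fluid * porosity ** m)) ** (1.0 / n)
    saturation = min(max(saturation, 0.0), 1.0)   # clip to the physical range
    return porosity * saturation

print(moisture_from_ec(ec_bulk=0.012, ec_fluid=0.08, porosity=0.35))
```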

  10. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to the estimation approach, using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...

  11. Developmental Programming of Renal Function and Re-Programming Approaches.

    Science.gov (United States)

    Nüsken, Eva; Dötsch, Jörg; Weber, Lutz T; Nüsken, Kai-Dietrich

    2018-01-01

    Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles has been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies by disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary dysbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to "re-program" renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin-angiotensin-aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions focused on dietary or anti-oxidative approaches so far. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated to human application. Early

  12. Estimating Supplies Program (ESP), Version 1.00, User's Guide

    National Research Council Canada - National Science Library

    Tropeano, Anne

    2000-01-01

    The Estimating Supplies Program (ESP) is an easy to use Windows(TM)-based software program for military medical providers, planners, and trainers that calculates the amount of supplies and equipment required to treat a patient stream...

  13. A conceptual approach to the estimation of societal willingness-to-pay for nuclear safety programs

    International Nuclear Information System (INIS)

    Pandey, M.D.; Nathwani, J.S.

    2003-01-01

    The design, refurbishment and future decommissioning of nuclear reactors are crucially concerned with reducing the risk of radiation exposure that can result in adverse health effects and potential loss of life. To address this concern, large financial investments have been made to ensure the safety of operating nuclear power plants worldwide. The efficacy of the expenditures incurred to provide safety must be judged against the safety benefit to be gained from such investments. We have developed an approach that provides a defensible basis for making that judgement. If the costs of risk reduction are disproportionate to the safety benefits derived, then the expenditures are not optimal; in essence the societal resources are being diverted away from other critical areas such as health care, education and social services that also enhance the quality of life. Thus, the allocation of society's resources devoted to nuclear safety must be continually appraised in light of competing needs, because there is a limit on the resources that any society can devote to extending life. The purpose of the paper is to present a simple and methodical approach to assessing the benefits of nuclear safety programs and regulations. The paper presents the Life-Quality Index (LQI) as a tool for the assessment of risk reduction initiatives that would support the public interest and enhance both safety and the quality of life. The LQI is formulated as a utility function consistent with the principles of rational decision analysis. The LQI is applied to quantify the societal willingness-to-pay (SWTP) for safety measures enacted to reduce the risk of potential exposures to ionising radiation. The proposed approach provides essential support to help improve the cost-benefit analysis of engineering safety programs and safety regulations.
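
    One common formulation of the LQI in this literature (stated here as background, and not necessarily in the authors' exact notation) combines the GDP per person g, the life expectancy e and a work-time/leisure-time exponent q, and derives the societal willingness-to-pay from the requirement that a safety measure should not decrease the index.

```latex
L = g^{\,q} e,
\qquad
\frac{dL}{L} = q\,\frac{dg}{g} + \frac{de}{e} \;\ge\; 0
\;\;\Longrightarrow\;\;
-\,dg \;\le\; \frac{g}{q}\,\frac{de}{e}.
```

    Under this reading, the SWTP for a small gain de in life expectancy is bounded by (g/q)(de/e) per person per year.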

  14. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm for solving a general class of single-objective nonlinear constrained expected value programming problems without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of the random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, so that high-quality individuals receive a larger sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  15. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    Full Text Available To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed using subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and direct estimate, was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes and the multimodel approach presented here, indicating a more efficient use of financial resources.

  16. Evaluations of carbon fluxes estimated by top-down and bottom-up approaches

    Science.gov (United States)

    Murakami, K.; Sasai, T.; Kato, S.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.; Nasahara, K.; Matsunaga, T.

    2013-12-01

    There are two approaches to estimating carbon fluxes from satellite observation data, referred to as the top-down and bottom-up approaches. Many uncertainties still remain in these carbon flux estimates, because the true carbon flux values are unknown and the estimates vary with the type of model (e.g. a transport model or a process-based model) and the input data. The two approaches estimate CO2 fluxes from different satellite data: the distribution of CO2 concentration in the top-down approach, and land cover information (e.g. leaf area, surface temperature) in the bottom-up approach. Satellite-based CO2 flux estimates with reduced uncertainty can be used efficiently for the identification of large emission areas and of forest carbon stocks. In this study, we evaluated the carbon flux estimates from the two approaches by comparing them with each other. The Greenhouse gases Observing SATellite (GOSAT) has been observing atmospheric CO2 concentrations since 2009. The GOSAT L4A data product gives monthly CO2 flux estimates for 64 sub-continental regions, estimated from GOSAT FTS SWIR L2 XCO2 data and an atmospheric tracer transport model. We used the GOSAT L4A CO2 flux as the top-down estimate and the net ecosystem production (NEP) estimated by the diagnostic biosphere model BEAMS as the bottom-up estimate. Because BEAMS NEP covers only the natural land CO2 flux, we used the GOSAT L4A CO2 flux after subtracting anthropogenic CO2 emissions and the oceanic CO2 flux. We compared the two approaches in the temperate north-east Asia region. This region is covered by grassland and cropland (about 60%), forest (about 20%) and bare ground (about 20%). The temporal variation over a one-year period showed similar trends between the two approaches. We also present comparisons of the CO2 flux estimates in other sub-continental regions.

  17. Developmental Programming of Renal Function and Re-Programming Approaches

    Science.gov (United States)

    Nüsken, Eva; Dötsch, Jörg; Weber, Lutz T.; Nüsken, Kai-Dietrich

    2018-01-01

    Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles has been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies by disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary dysbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to “re-program” renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin–angiotensin–aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions focused on dietary or anti-oxidative approaches so far. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated to human application

  18. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    Full Text Available We address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach that estimates PMS reliability from condition monitoring information and degradation data collected under a dynamic operating scenario. In this sense, the paper differs from existing methods, which consider only the static scenario, do not use real-time information, and aim to estimate reliability for a population rather than for an individual system. In the presented approach, to establish a link between the historical data and the real-time information of an individual PMS, we adopt a stochastic filtering model for the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time from the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.

  19. A fuzzy compromise programming approach for the Black-Litterman portfolio selection model

    Directory of Open Access Journals (Sweden)

    Mohsen Gharakhani

    2013-01-01

    Full Text Available In this paper, we examine the advanced optimization approach to the portfolio problem introduced by Black and Litterman to address the shortcomings of standard Markowitz mean-variance optimization. Black and Litterman propose a new approach to estimating asset returns and present a way to incorporate the investor’s views into the asset pricing process. Since the investor’s view about future asset returns is always subjective and imprecise, it can be represented by fuzzy numbers, and the resulting model is a multi-objective linear program. The proposed model is therefore analyzed through a fuzzy compromise programming approach using an appropriate membership function. For this purpose, we introduce the fuzzy ideal solution concept based on investor preference and indifference relationships, using the canonical representation of the proposed fuzzy numbers by means of their corresponding α-cuts. A real-world numerical example is presented in which the MSCI (Morgan Stanley Capital International) Index is chosen as the target index. The results are reported for a portfolio consisting of six national indices. The performance of the proposed models is compared using several financial criteria.

  20. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in systems-on-chip (SoC) is becoming critical. Today, having appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge in designing such dedicated tools is to achieve a better trade-off between accuracy and speed. This paper presents a consumption estimation approach that allows the power consumption criterion to be taken into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation both for white-box intellectual properties (IPs), using annotated power models, and for black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by the two estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach yields quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.

  1. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods.

  2. TETRA-COM: a comprehensive SPSS program for estimating the tetrachoric correlation.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2012-12-01

    We provide an SPSS program that implements descriptive and inferential procedures for estimating tetrachoric correlations. These procedures have two main purposes: (1) bivariate estimation in contingency tables and (2) constructing a correlation matrix to be used as input for factor analysis (in particular, the SPSS FACTOR procedure). In both cases, the program computes accurate point estimates, as well as standard errors and confidence intervals that are correct for any population value. For purpose (1), the program computes the contingency table together with five other measures of association. For purpose (2), the program checks the positive definiteness of the matrix, and if it is found not to be Gramian, performs a nonlinear smoothing procedure at the user's request. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
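
    As a quick plausibility check (only an approximation, not the maximum-likelihood procedure that the program implements), the classical cosine-pi formula gives the tetrachoric correlation directly from the four cell counts of a 2x2 table.

```python
# Cosine-pi approximation to the tetrachoric correlation for a 2x2 table
# [[a, b], [c, d]], where a and d are the concordant cells.
import math

def tetrachoric_approx(a, b, c, d):
    if b == 0 or c == 0:          # empty discordant cell -> perfect association
        return 1.0
    return math.cos(math.pi / (1.0 + math.sqrt(a * d / (b * c))))

print(tetrachoric_approx(a=40, b=10, c=15, d=35))   # about 0.71 for this table
```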

  3. Developmental Programming of Renal Function and Re-Programming Approaches

    Directory of Open Access Journals (Sweden)

    Eva Nüsken

    2018-02-01

    Full Text Available Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles has been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies by disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary dysbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to “re-program” renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin–angiotensin–aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions focused on dietary or anti-oxidative approaches so far. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated

  4. Cost estimate for a proposed GDF Suez LNG testing program

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.; Luketa, Anay Josephine; Nissen, Mark R.; Lopez, Carlos; Vermillion, Nancy; Hightower, Marion Michael

    2014-02-01

    At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale liquefied natural gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed to within plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.

  5. SECPOP90: Sector population, land fraction, and economic estimation program

    Energy Technology Data Exchange (ETDEWEB)

    Humphreys, S.L.; Rollstin, J.A.; Ridgely, J.N.

    1997-09-01

    In 1973 Mr. W. Athey of the Environmental Protection Agency wrote a computer program called SECPOP which calculated population estimates. Since that time, two things have changed which suggested the need for updating the original program - more recent population censuses and the widespread use of personal computers (PCs). The revised computer program uses the 1990 and 1992 Population Census information and runs on current PCs as "SECPOP90." SECPOP90 consists of two parts: site and regional. The site provides population and economic data estimates for any location within the continental United States. Siting analysis is relatively fast running. The regional portion assesses site availability for different siting policy decisions; i.e., the impact of available sites given specific population density criteria within the continental United States. Regional analysis is slow. This report compares the SECPOP90 population estimates and the nuclear power reactor licensee-provided information. Although the source, and therefore the accuracy, of the licensee information is unknown, this comparison suggests SECPOP90 makes reasonable estimates. Given the total uncertainty in any current calculation of severe accidents, including the potential offsite consequences, the uncertainty within SECPOP90 population estimates is expected to be insignificant. 12 refs., 55 figs., 7 tabs.

  6. SECPOP90: Sector population, land fraction, and economic estimation program

    International Nuclear Information System (INIS)

    Humphreys, S.L.; Rollstin, J.A.; Ridgely, J.N.

    1997-09-01

    In 1973 Mr. W. Athey of the Environmental Protection Agency wrote a computer program called SECPOP which calculated population estimates. Since that time, two things have changed which suggested the need for updating the original program - more recent population censuses and the widespread use of personal computers (PCs). The revised computer program uses the 1990 and 1992 Population Census information and runs on current PCs as "SECPOP90." SECPOP90 consists of two parts: site and regional. The site provides population and economic data estimates for any location within the continental United States. Siting analysis is relatively fast running. The regional portion assesses site availability for different siting policy decisions; i.e., the impact of available sites given specific population density criteria within the continental United States. Regional analysis is slow. This report compares the SECPOP90 population estimates and the nuclear power reactor licensee-provided information. Although the source, and therefore the accuracy, of the licensee information is unknown, this comparison suggests SECPOP90 makes reasonable estimates. Given the total uncertainty in any current calculation of severe accidents, including the potential offsite consequences, the uncertainty within SECPOP90 population estimates is expected to be insignificant. 12 refs., 55 figs., 7 tabs

  7. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems using a Bayesian approach combined with Markov Chain Monte Carlo (MCMC) methods. MCMC methods are powerful tools for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution in order to accelerate convergence is one of the main issues in MCMC simulation.
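
    A minimal random-walk Metropolis-Hastings sketch is shown below; the standard-normal log-target and the proposal scale are placeholders chosen purely to illustrate the machinery and the calibration issue mentioned in the abstract.

```python
# One-dimensional random-walk Metropolis-Hastings sampler.
import math, random

def log_target(theta):
    return -0.5 * theta ** 2            # stand-in for a log-posterior density

def metropolis_hastings(n_samples=5000, proposal_sd=1.0, theta0=0.0):
    samples, theta, accepted = [], theta0, 0
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, proposal_sd)
        log_alpha = log_target(proposal) - log_target(theta)
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            theta, accepted = proposal, accepted + 1
        samples.append(theta)
    return samples, accepted / n_samples

random.seed(0)
draws, acc_rate = metropolis_hastings()
print(f"Posterior mean estimate {sum(draws)/len(draws):.3f}, acceptance {acc_rate:.2f}")
```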

  8. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While for estimating the errors in velocity there are well-established procedures, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use IceBridge operation GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice-velocities calculated from Sentinel-1 SAR data, to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
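
    Assuming independent errors in the gate-averaged velocity and in the cross-sectional area (a simplification of the error budget discussed above), the relative error in ice discharge through a flux gate combines in quadrature.

```latex
Q = \bar{v}\,A,
\qquad
\frac{\sigma_Q}{Q} \;\approx\; \sqrt{\left(\frac{\sigma_{\bar v}}{\bar v}\right)^{2} + \left(\frac{\sigma_{A}}{A}\right)^{2}}.
```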

  9. A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Ramiyani, Sara Saeidi; Sarvar, Rahim; Moud, Hashem Izadi; Mousavi, Seyyed Mohammad

    2013-01-01

    This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.

  10. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Abstract Background The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC and the so-called bottom-up REML approach (bottom-up PC, in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (covariance matrix. Results Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (covariance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  11. Appendix C: Biomass Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  12. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
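
    Schematically, the proposed estimator can be written as a convex combination of the parametric and nonparametric curve estimates, with the weight tied to the parametric model's goodness of fit (and allowed to vary locally in the adaptive version).

```latex
\hat{f}_{\mathrm{SP}}(d) \;=\; w\,\hat{f}_{\mathrm{P}}(d) \;+\; (1-w)\,\hat{f}_{\mathrm{NP}}(d),
\qquad 0 \le w \le 1 .
```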

  13. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Full Text Available Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if they are not handled properly they can produce inaccurate results during data analysis. Whereas a traditional method such as the maximum likelihood method extrapolates missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird and uses this bioinspired approach to model an algorithm for estimating missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The Wilcoxon test indicates that time does not have a significant effect on performance and that the difference in estimation quality between the paired algorithms was significant; the Friedman test ranked KSA as the best evolutionary algorithm.

  14. Beginning Java programming the object-oriented approach

    CERN Document Server

    Baesens, Bart; vanden Broucke, Seppe

    2015-01-01

    A comprehensive Java guide, with samples, exercises, case studies, and step-by-step instruction Beginning Java Programming: The Object Oriented Approach is a straightforward resource for getting started with one of the world's most enduringly popular programming languages. Based on classes taught by the authors, the book starts with the basics and gradually builds into more advanced concepts. The approach utilizes an integrated development environment that allows readers to immediately apply what they learn, and includes step-by-step instruction with plenty of sample programs. Each chapter c

  15. A reliability program approach to operational safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques is being formulated for potential application in the nuclear power industry. Methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed and a review of current nuclear risk-dominant issues conducted. The need for a reliability approach to address dependent system failures, operating and emergency procedures and human performance, and develop a plant-specific performance data base for safety decision making is demonstrated. Current research has concentrated on developing a Reliability Program approach for the operating phase of a nuclear plant's lifecycle. The approach incorporates performance monitoring and evaluation activities with dedicated tasks that integrate these activities with operation, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the Reliability Program. (orig./HP)

  16. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  17. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations, while, in others, indirect approaches led to more accurate estimations. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach; and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches, measured as root mean squared errors (RMSE), were RMSE_direct = 18.7 t ha⁻¹, RMSE_indirect = 12.6 t ha⁻¹ and RMSE_dir+tex = 12.4 t ha⁻¹. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied to radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.
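
    The distinction between the direct and indirect estimators can be summarised with a toy example; the height-to-biomass function and the direct slope below are hypothetical stand-ins, not the calibrations fitted in the study.

```python
# Direct vs. indirect biomass-change estimates from canopy heights h1 and h2.
def f_agb(h):
    """Hypothetical stand-level height-to-AGB transfer function (t/ha)."""
    return 5.0 * h ** 1.3

def indirect_change(h1, h2):
    """Indirect: difference of two AGB stock estimates."""
    return f_agb(h2) - f_agb(h1)

def direct_change(h1, h2, slope=11.0):
    """Direct: a transfer function applied to the height change itself."""
    return slope * (h2 - h1)

print(indirect_change(20.0, 24.0), direct_change(20.0, 24.0))
```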

  18. Review of Evaluation, Measurement and Verification Approaches Used to Estimate the Load Impacts and Effectiveness of Energy Efficiency Programs

    Energy Technology Data Exchange (ETDEWEB)

    Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.

    2010-04-14

    Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al. (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy

  19. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  20. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  1. Airline loyalty (programs) across borders : A geographic discontinuity approach

    NARCIS (Netherlands)

    de Jong, Gerben; Behrens, Christiaan; van Ommeren, Jos

    2018-01-01

    We analyze brand loyalty advantages of national airlines in their domestic countries using geocoded data from a major international frequent flier program. We employ a geographic discontinuity design that estimates discontinuities in program activity at the national borders of the program's

  2. A novel machine learning approach for estimation of electricity demand: An empirical evidence from Thailand

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Mostafavi, Seyyed Iman; Jaafari, Arefeh; Hosseinpour, Fariba

    2013-01-01

    Highlights: • A hybrid approach is presented for the estimation of the electricity demand. • The proposed method integrates the capabilities of GP and SA. • The GSA model makes accurate predictions of the electricity demand. - Abstract: This study proposes an innovative hybrid approach for the estimation of the long-term electricity demand. A new prediction equation was developed for the electricity demand using an integrated search method of genetic programming and simulated annealing, called GSA. The annual electricity demand was formulated in terms of population, gross domestic product (GDP), stock index, and total revenue from exporting industrial products of the same year. A comprehensive database containing total electricity demand in Thailand from 1986 to 2009 was used to develop the model. The generalization of the model was verified using a separate testing data. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the electricity demand. The GSA model provides accurate predictions of the electricity demand. Furthermore, the proposed model outperforms a regression and artificial neural network-based models

  3. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies for piping. One method is based on a probabilistic fracture mechanics (PFM) model, while the other is based on statistical estimation of rupture frequencies from a large database. In order to compare the approaches and their results, the rupture frequencies of selected welds have been estimated using both methods. This paper highlights the differences in the methods, the input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in the BWR environment, and its growth is influenced by material properties, stresses and water chemistry. (au)

  4. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    Science.gov (United States)

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this

  5. Minimum variance linear unbiased estimators of loss and inventory

    International Nuclear Information System (INIS)

    Stewart, K.B.

    1977-01-01

    The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from accountability data for a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program, could be used with a desk calculator, and can be applied recursively from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled. 5 refs
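
    As an illustration of the ordinary-least-squares variant of the estimators described here, the sketch below fits a constant per-period loss and an opening inventory to a simulated sequence of accountability periods; the simple linear model and all numbers are assumptions for demonstration, not the article's formulation.

        import numpy as np

        rng = np.random.default_rng(1)

        periods = np.arange(1, 13, dtype=float)         # 12 accountability periods
        net_transfers = rng.normal(10.0, 2.0, 12)       # known net throughput per period (kg)
        true_I0, true_L = 100.0, 0.8                    # assumed opening inventory and constant loss

        # Simulated measured ending inventories with measurement error.
        ending = true_I0 + np.cumsum(net_transfers) - true_L * periods + rng.normal(0, 0.5, 12)

        # Linear model: ending_t - cumulative_transfers_t = I0 - L * t + error.
        y = ending - np.cumsum(net_transfers)
        X = np.column_stack([np.ones_like(periods), -periods])

        (I0_hat, L_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
        print(f"opening inventory {I0_hat:.1f} kg, constant loss {L_hat:.2f} kg per period")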

  6. Estimating the Impact of Low-Income Universal Service Programs

    OpenAIRE

    Daniel A. Ackerberg; David R. DeRemer; Michael H. Riordan; Gregory L. Rosston; Bradley S. Wimmer

    2013-01-01

    This policy study uses U.S. Census microdata to evaluate how subsidies for universal telephone service vary in their impact across low-income racial groups, gender, age, and home ownership. Our demand specification includes both the subsidized monthly price (Lifeline program) and the subsidized initial connection price (Linkup program) for local telephone service. Our quasi-maximum likelihood estimation controls for location differences and instruments for price endogeneity. The microdata all...

  7. Appendix E: Wind Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  8. Appendix G: Building Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  9. Demonstrating a small utility approach to demand-side program implementation

    International Nuclear Information System (INIS)

    1991-01-01

    The US DOE awarded a grant to the Burlington Electric Department (B.E.D.) to test a demand-side management (DSM) demonstration program designed to quickly save a significant amount of power with little disruption to the utility's customers or its normal operations. B.E.D. is a small municipal utility located in northern Vermont, with a lengthy history of successful DSM involvement. In our grant application, we proposed to develop a replicable program and approach to DSM that might be useful to other small utilities and to write a report to enable such replication. We believe that this DSM program and/or individual program components are replicable. This report is designed to allow other utilities interested in DSM to replicate this program or specific program design features to meet their DSM goals. We also wanted to use the opportunity of this grant to test the waters of residential heating fuel-switching. We hoped to test the application of one fuel-switching technology, and to benefit from the lessons learned in developing a full-scale DSM program for this end-use. To this end the pilot effort has been very successful. In the pilot phase we installed direct-vent gas-fired space heaters sized as supplemental heating units in 44 residences heated solely by electric resistance heat. We installed the gas space heating units at no cost to the owners or residents. We surveyed participating customers. The results of those surveys are included in this report, and preliminary estimates of winter peak capacity load reductions are also noted in this report.

  10. Demonstrating a small utility approach to demand-side program implementation

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-01

    The US DOE awarded a grant to the Burlington Electric Department (B.E.D.) to test a demand-side management (DSM) demonstration program designed to quickly save a significant amount of power with little disruption to the utility's customers or its normal operations. B.E.D. is a small municipal utility located in northern Vermont, with a lengthy history of successful DSM involvement. In our grant application, we proposed to develop a replicable program and approach to DSM that might be useful to other small utilities and to write a report to enable such replication. We believe that this DSM program and/or individual program components are replicable. This report is designed to allow other utilities interested in DSM to replicate this program or specific program design features to meet their DSM goals. We also wanted to use the opportunity of this grant to test the waters of residential heating fuel-switching. We hoped to test the application of one fuel-switching technology, and to benefit from the lessons learned in developing a full-scale DSM program for this end-use. To this end the pilot effort has been very successful. In the pilot phase we installed direct-vent gas-fired space heaters sized as supplemental heating units in 44 residences heated solely by electric resistance heat. We installed the gas space heating units at no cost to the owners or residents. We surveyed participating customers. The results of those surveys are included in this report, and preliminary estimates of winter peak capacity load reductions are also noted in this report.

  11. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…

  12. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
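
    The record names copula functions but not the family used. As an assumption-laden sketch, the snippet below uses a Gumbel-Hougaard copula and two marginal non-exceedance probabilities to compute the OR/AND joint exceedance probabilities that enter bivariate design-flood risk; the parameter values are invented.

        import numpy as np

        def gumbel_copula(u, v, theta):
            # Gumbel-Hougaard copula C(u, v); theta >= 1 controls upper-tail dependence.
            return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

        # Hypothetical marginal non-exceedance probabilities of a design peak and volume,
        # e.g. evaluated from (possibly time-varying) marginal distributions.
        u, v, theta = 0.98, 0.95, 2.5

        C = gumbel_copula(u, v, theta)
        p_or = 1.0 - C                   # P(peak exceeded OR volume exceeded)
        p_and = 1.0 - u - v + C          # P(peak exceeded AND volume exceeded)
        print(f"OR exceedance {p_or:.4f}, AND exceedance {p_and:.4f}")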

  13. School District Program Cost Accounting: An Alternative Approach

    Science.gov (United States)

    Hentschke, Guilbert C.

    1975-01-01

    Discusses the value for school districts of a program cost accounting system and examines different approaches to generating program cost data, with particular emphasis on the "cost allocation to program system" (CAPS) and the traditional "transaction-based system." (JG)

  14. Estimating the Population-Level Effectiveness of Vaccination Programs in the Netherlands.

    NARCIS (Netherlands)

    van Wijhe, Maarten; McDonald, Scott A; de Melker, Hester E; Postma, Maarten J; Wallinga, Jacco

    There are few estimates of the effectiveness of long-standing vaccination programs in developed countries. To fill this gap, we investigate the direct and indirect effectiveness of childhood vaccination programs on mortality at the population level in the Netherlands.

  15. Rethink, Reform, Reenter: An Entrepreneurial Approach to Prison Programming.

    Science.gov (United States)

    Keena, Linda; Simmons, Chris

    2015-07-01

    The purpose of this article was to present a description and first-stage evaluation of the impact of the Ice House Entrepreneurship Program on the learning experience of participating prerelease inmates at a Mississippi maximum-security prison and their perception of the transfer of skills learned in the program into securing employment upon reentry. The Ice House Entrepreneurship Program is a 12-week program facilitated by volunteer university professors for inmates in a prerelease unit of a maximum-security prison in Mississippi. Participants' perspectives were examined through content analysis of inmates' answers to program Reflection and Response Assignments and interviews. The analyses were conducted according to the constant comparative method. Findings reveal the emergence of eight life lessons and suggest that this is a promising approach to prison programming for prerelease inmates. This study discusses three approaches to better prepare inmates for a mindset change. The rethink, reform, and reenter approaches help break the traditional cycle of release, reoffend, and return. © The Author(s) 2014.

  16. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  17. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  18. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    Science.gov (United States)

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
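
    For illustration, the sketch below fits a Marshall-Hoare-type double-exponential cooling law to a short series of temperature readings with nonlinear least squares, treating the two cooling rates and the elapsed time since death as unknowns; the assumed temperature at death, the ambient temperature, and the readings are hypothetical, and the exact reformulation used in the paper may differ.

        import numpy as np
        from scipy.optimize import least_squares

        T_death, T_amb = 37.2, 20.0      # assumed temperature at death and ambient (deg C)

        def cooling(t, Z, p):
            # Marshall-Hoare-type double exponential; t is time since death in hours.
            return T_amb + (T_death - T_amb) * (p * np.exp(-Z * t) - Z * np.exp(-p * t)) / (p - Z)

        # Hypothetical rectal-temperature readings taken every 15 min after discovery.
        t_obs = np.arange(0, 2.0, 0.25)
        T_obs = np.array([29.8, 29.5, 29.2, 28.9, 28.7, 28.4, 28.2, 27.9])

        def residuals(x):
            Z, p, t0 = x                 # t0 = hours elapsed since death at the first reading
            return cooling(t0 + t_obs, Z, p) - T_obs

        fit = least_squares(residuals, x0=[0.1, 0.4, 6.0],
                            bounds=([1e-3, 1e-3, 0.0], [2.0, 5.0, 48.0]))
        Z, p, t0 = fit.x
        print(f"estimated time of death: {t0:.1f} h before the first reading "
              f"(cooling rates Z = {Z:.3f}/h, p = {p:.3f}/h)")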

  19. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a samples of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. We use here, and tailor it to remotely-sensed rainfall estimates, an alternative approach, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured `ordinary' rainfall event. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauges networks.
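
    A minimal MEVD-style calculation, under stated assumptions rather than the paper's exact implementation, fits a Weibull distribution to each year's ordinary (wet-day) amounts, averages the yearly compound distributions, and inverts the result for a return level; the toy record below stands in for the TRMM data.

        import numpy as np
        from scipy.stats import weibull_min
        from scipy.optimize import brentq

        rng = np.random.default_rng(2)

        # Toy record: 15 years of daily rainfall (mm), roughly 40% wet days per year.
        years = [np.where(rng.random(365) < 0.4, rng.weibull(0.8, 365) * 12.0, 0.0)
                 for _ in range(15)]

        fits = []
        for yr in years:
            wet = yr[yr > 0]
            c, loc, scale = weibull_min.fit(wet, floc=0)     # shape/scale of ordinary events
            fits.append((c, scale, wet.size))

        def F_mev(x):
            # Metastatistical CDF of the annual maximum: mean over years of F_j(x)**n_j.
            return np.mean([weibull_min.cdf(x, c, scale=s) ** n for c, s, n in fits])

        T = 50.0                                             # return period, years
        level = brentq(lambda x: F_mev(x) - (1.0 - 1.0 / T), 1.0, 1000.0)
        print(f"{T:.0f}-year daily rainfall estimate: {level:.1f} mm")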

  20. Comparing approaches to generic programming in Haskell

    NARCIS (Netherlands)

    Hinze, R.; Jeuring, J.T.; Löh, A.

    2006-01-01

    The last decade has seen a number of approaches to generic programming: PolyP, Functorial ML, `Scrap Your Boilerplate', Generic Haskell, `Generics for the Masses', etc. The approaches vary in sophistication and target audience: some propose full-blown programming languages, some suggest

  1. A linear programming approach for estimating the structure of a sparse linear genetic network from transcript profiling data

    Directory of Open Access Journals (Sweden)

    Chandra Nagasuma R

    2009-02-01

    Full Text Available Abstract Background A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graphs representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN from transcript profiling data. Results The structure learning task is cast as a sparse linear regression problem which is then posed as a LASSO (l1-constrained fitting problem and solved finally by formulating a Linear Program (LP. A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisae cell cycle transcript profiling data sets capture known
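
    The paper's exact LP formulation is not reproduced in this record. As an illustrative stand-in, the sketch below poses an l1-loss, l1-constrained regression for a single target gene as a linear program solved with scipy.optimize.linprog, splitting each coefficient into positive and negative parts; the toy data and the l1 budget are assumptions.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        n, p, budget = 40, 25, 5.0                  # samples, candidate regulators, l1 budget

        X = rng.normal(size=(n, p))                 # toy transcript profiles of regulators
        beta_true = np.zeros(p)
        beta_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
        y = X @ beta_true + 0.1 * rng.normal(size=n)        # expression of the target gene

        # Variables: beta_plus (p), beta_minus (p), residual bounds e (n).
        # minimize sum(e)  subject to  -e <= y - X @ (beta_plus - beta_minus) <= e,
        #                              sum(beta_plus + beta_minus) <= budget, all vars >= 0.
        c = np.concatenate([np.zeros(2 * p), np.ones(n)])
        A1 = np.hstack([X, -X, -np.eye(n)])                 #  X @ b - e <= y
        A2 = np.hstack([-X, X, -np.eye(n)])                 # -X @ b - e <= -y
        A3 = np.concatenate([np.ones(2 * p), np.zeros(n)])[None, :]   # l1 budget row
        A_ub = np.vstack([A1, A2, A3])
        b_ub = np.concatenate([y, -y, [budget]])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p + n), method="highs")
        beta = res.x[:p] - res.x[p:2 * p]
        print("large-magnitude regulators:", np.flatnonzero(np.abs(beta) > 0.05))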

  2. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis... problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties, (2) FE for systems with parametric faults, and (3) FE for a class of nonlinear systems. Copyright...

  3. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  4. HEDPIN: a computer program to estimate pinwise power density

    International Nuclear Information System (INIS)

    Cappiello, M.W.

    1976-05-01

    A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given

  5. Comparing approaches to generic programming in Haskell

    NARCIS (Netherlands)

    Hinze, R.; Jeuring, J.T.; Löh, A.

    2006-01-01

    The last decade has seen a number of approaches to datatype-generic programming: PolyP, Functorial ML, `Scrap Your Boilerplate', Generic Haskell, `Generics for the Masses', etc. The approaches vary in sophistication and target audience: some propose full-blown programming languages, some

  6. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    Science.gov (United States)

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  7. Development of a Portfolio Management Approach with Case Study of the NASA Airspace Systems Program

    Science.gov (United States)

    Neitzke, Kurt W.; Hartman, Christopher L.

    2012-01-01

    A portfolio management approach was developed for the National Aeronautics and Space Administration's (NASA's) Airspace Systems Program (ASP). The purpose was to help inform ASP leadership regarding future investment decisions related to its existing portfolio of advanced technology concepts and capabilities (C/Cs) currently under development and to potentially identify new opportunities. The portfolio management approach is general in form and is extensible to other advanced technology development programs. It focuses on individual C/Cs and consists of three parts: 1) concept of operations (con-ops) development, 2) safety impact assessment, and 3) benefit-cost-risk (B-C-R) assessment. The first two parts are recommendations to ASP leaders and will be discussed only briefly, while the B-C-R part relates to the development of an assessment capability and will be discussed in greater detail. The B-C-R assessment capability enables estimation of the relative value of each C/C as compared with all other C/Cs in the ASP portfolio. Value is expressed in terms of a composite weighted utility function (WUF) rating, based on estimated benefits, costs, and risks. Benefit utility is estimated relative to achieving key NAS performance objectives, which are outlined in the ASP Strategic Plan [1]. Risk utility focuses on C/C development and implementation risk, while cost utility focuses on the development and implementation portions of overall C/C life-cycle costs. Initial composite ratings of the ASP C/Cs were successfully generated; however, the limited availability of B-C-R information, which is used as input to the WUF model, reduced the meaningfulness of these initial investment ratings. Development of this approach, however, defined specific information-generation requirements for ASP C/C developers that will increase the meaningfulness of future B-C-R ratings.

  8. The QUELCE Method: Using Change Drivers to Estimate Program Costs

    Science.gov (United States)

    2016-08-01

    Only fragments of this report's front matter are recoverable from this record: table-of-contents entries for method steps such as "Assign Conditional Probabilities", "Apply Uncertainty to Cost Formula Inputs for Scenarios", and "Perform Monte Carlo Simulation"; a distribution statement (Approved for Public Release; Distribution is Unlimited); and the opening of the introduction, "The Cost Estimation Challenge", which notes that large-scale programs are frequently challenged [Bliss 2012] and that improvements in cost estimation that make early assumptions more precise would reduce early-lifecycle uncertainty.

  9. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  10. An Approach for Solving Linear Fractional Programming Problems

    OpenAIRE

    Andrew Oyakhobo Odior

    2012-01-01

    Linear fractional programming problems are useful tools in production planning, financial and corporate planning, health care and hospital planning and as such have attracted considerable research interest. The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebr...

  11. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize the information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species

  12. A Novel Approach for Solving Semidefinite Programs

    Directory of Open Access Journals (Sweden)

    Hong-Wei Jiao

    2014-01-01

    Full Text Available A novel linearizing alternating direction augmented Lagrangian approach is proposed for effectively solving semidefinite programs (SDP). For every iteration, by fixing the other variables, the proposed approach alternately optimizes the dual variables and the dual slack variables; then the primal variables, that is, Lagrange multipliers, are updated. In addition, the proposed approach renews all the variables in closed form without solving any system of linear equations. Global convergence of the proposed approach is proved under mild conditions, and two numerical problems are given to demonstrate the effectiveness of the presented approach.

  13. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
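
    A compact numerical illustration of the two routes to an expanded uncertainty (coverage factor k = 2) is given below; the component values are hypothetical and are not the glucose figures reported in the study.

        import math

        # Bottom-up: combine independent standard-uncertainty components (GUM-style).
        components = {"repeatability": 0.045, "calibrator value": 0.030,
                      "volume/dilution": 0.020, "temperature": 0.010}   # mmol/L, hypothetical
        u_bottom_up = math.sqrt(sum(u ** 2 for u in components.values()))

        # Top-down: long-term imprecision from IQC plus the uncertainty of the bias
        # estimated against a certified reference material.
        sd_iqc = 0.070        # 6-month IQC standard deviation, mmol/L (hypothetical)
        u_bias = 0.035        # standard uncertainty of the bias, mmol/L (hypothetical)
        u_top_down = math.sqrt(sd_iqc ** 2 + u_bias ** 2)

        k = 2                 # coverage factor for ~95% coverage
        print(f"expanded U, bottom-up: +/-{k * u_bottom_up:.2f} mmol/L")
        print(f"expanded U, top-down:  +/-{k * u_top_down:.2f} mmol/L")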

  14. Designing programs to improve diets for maternal and child health: estimating costs and potential dietary impacts of nutrition-sensitive programs in Ethiopia, Nigeria, and India.

    Science.gov (United States)

    Masters, William A; Rosettie, Katherine L; Kranz, Sarah; Danaei, Goodarz; Webb, Patrick; Mozaffarian, Dariush

    2018-05-01

    Improving maternal and child nutrition in resource-poor settings requires effective use of limited resources, but priority-setting is constrained by limited information about program costs and impacts, especially for interventions designed to improve diet quality. This study utilized a mixed methods approach to identify, describe and estimate the potential costs and impacts on child dietary intake of 12 nutrition-sensitive programs in Ethiopia, Nigeria and India. These potential interventions included conditional livestock and cash transfers, media and education, complementary food processing and sales, household production and food pricing programs. Components and costs of each program were identified through a novel participatory process of expert regional consultation followed by validation and calibration from literature searches and comparison with actual budgets. Impacts on child diets were determined by estimating of the magnitude of economic mechanisms for dietary change, comprehensive reviews of evaluations and effectiveness for similar programs, and demographic data on each country. Across the 12 programs, total cost per child reached (net present value, purchasing power parity adjusted) ranged very widely: from 0.58 to 2650 USD/year among five programs in Ethiopia; 2.62 to 1919 USD/year among four programs in Nigeria; and 27 to 586 USD/year among three programs in India. When impacts were assessed, the largest dietary improvements were for iron and zinc intakes from a complementary food production program in Ethiopia (increases of 17.7 mg iron/child/day and 7.4 mg zinc/child/day), vitamin A intake from a household animal and horticulture production program in Nigeria (335 RAE/child/day), and animal protein intake from a complementary food processing program in Nigeria (20.0 g/child/day). These results add substantial value to the limited literature on the costs and dietary impacts of nutrition-sensitive interventions targeting children in resource

  15. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
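
    The authors implement their method in the R package bootLR. The Python sketch below mimics the described idea under simplifying assumptions (locate the lowest population sensitivity whose binomial median equals the number of diseased subjects, then bootstrap the negative LR); it is an approximation for illustration, not the package itself.

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(4)

        n_dis, n_nodis = 100, 100        # diseased / non-diseased sample sizes
        spec_hat = 0.60                  # observed specificity; observed sensitivity is 100%

        # Lowest population sensitivity for which the median sample sensitivity is still 100%.
        grid = np.linspace(0.90, 1.0, 10001)
        p_sens = grid[np.argmax(binom.median(n_dis, grid) == n_dis)]

        B = 20000
        sens_star = rng.binomial(n_dis, p_sens, B) / n_dis
        spec_star = rng.binomial(n_nodis, spec_hat, B) / n_nodis
        neg_lr = (1.0 - sens_star) / np.clip(spec_star, 1e-9, None)

        lo, hi = np.percentile(neg_lr, [2.5, 97.5])
        print(f"lowest plausible sensitivity {p_sens:.4f}, negative LR 95% CI ({lo:.3f}, {hi:.3f})")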

  16. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    Science.gov (United States)

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least

  17. Estimation of net greenhouse gas balance using crop- and soil-based approaches: Two case studies

    International Nuclear Information System (INIS)

    Huang, Jianxiong; Chen, Yuanquan; Sui, Peng; Gao, Wansheng

    2013-01-01

    The net greenhouse gas balance (NGHGB), estimated by combining direct and indirect greenhouse gas (GHG) emissions, can reveal whether an agricultural system is a sink or source of GHGs. Currently, two types of methods, referred to here as crop-based and soil-based approaches, are widely used to estimate the NGHGB of agricultural systems on annual and seasonal crop timescales. However, the two approaches may produce contradictory results, and few studies have tested which approach is more reliable. In this study, we examined the two approaches using experimental data from an intercropping trial with straw removal and a tillage trial with straw return. The results of the two approaches provided different views of the two trials. In the intercropping trial, NGHGB estimated by the crop-based approach indicated that monocultured maize (M) was a source of GHGs (-1315 kg CO2-eq ha-1), whereas maize–soybean intercropping (MS) was a sink (107 kg CO2-eq ha-1). When estimated by the soil-based approach, both cropping systems were sources (-3410 for M and -2638 kg CO2-eq ha-1 for MS). In the tillage trial, mouldboard ploughing (MP) and rotary tillage (RT) mitigated GHG emissions by 22,451 and 21,500 kg CO2-eq ha-1, respectively, as estimated by the crop-based approach. However, by the soil-based approach, both tillage methods were sources of GHGs: -3533 for MP and -2241 kg CO2-eq ha-1 for RT. The crop-based approach calculates a GHG sink on the basis of the returned crop biomass (and other organic matter input) and estimates considerably more GHG mitigation potential than that calculated from the variations in soil organic carbon storage by the soil-based approach. These results indicate that the crop-based approach estimates higher GHG mitigation benefits compared to the soil-based approach and may overestimate the potential of GHG mitigation in agricultural systems. - Highlights: • Net greenhouse gas balance (NGHGB) of

  18. 75 FR 16120 - Notice of Issuance of Exposure Draft on Accrual Estimates for Grant Programs

    Science.gov (United States)

    2010-03-31

    ... FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD Notice of Issuance of Exposure Draft on Accrual Estimates for Grant Programs AGENCY: Federal Accounting Standards Advisory Board. ACTION: Notice. Board... Accounting Technical Release entitled Accrual Estimates for Grant Programs. The proposed Technical Release...

  19. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been carried out and compared with other work

  20. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Full text: Bayesian inference is used to determine the Air Kerma Rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the Air Kerma Rate estimation but also for the gamma spectrum, which is unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of such a detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the Air Kerma Rate the WinBugs program was used. WinBugs is dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of such calculations are shown and compared with other non-Bayesian approaches such as the Scofield-Gold iterative method and the Maximum Entropy Method

  1. Top-down and bottom-up approaches for cost estimating new reactor designs

    International Nuclear Information System (INIS)

    Berbey, P.; Gautier, G.M.; Duflo, D.; Rouyer, J.L.

    2007-01-01

    For several years, Generation-4 designs will be 'pre-conceptual' for the less mature concepts and 'preliminary' for the more mature concepts. In this situation, appropriate data for some of the plant systems may be lacking to develop a bottom-up cost estimate. Therefore, a more global approach, the Top-Down Approach (TDA), is needed to help the designers and decision makers in comparing design options. It utilizes more or less simple models for cost estimating the different parts of a design. TDA cost estimating effort applies to a whole functional element whose cost is approached by similar estimations coming from existing data, ratios and models, for a given range of variation of parameters. Modeling is used when direct analogy is not possible. There are two types of models, global and specific ones. Global models are applied to cost modules related to Code Of Account. Exponential formulae such as C_i = A_i + (B_i x P_i^n) are used when there are cost data for comparable modules in nuclear or other industries. Specific cost models are developed for major specific components of the plant: - process equipment such as reactor vessel, steam generators or large heat exchangers. - buildings, with formulae estimating the construction cost from base cost of m3 of building volume. - systems, when unit costs, cost ratios and models are used, depending on the level of detail of the design. Bottom Up Approach (BUA), which is based on unit prices coming from similar equipment or from manufacturer consulting, is very valuable and gives better cost estimations than TDA when it can be applied, that is at a rather late stage of the design. Both approaches are complementary when some parts of the design are detailed enough to be estimated by BUA, and when BUA results are used to check TDA results and to improve TDA models. This methodology is applied to the HTR (High Temperature Reactor) concept and to an advanced PWR design
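
    A tiny numerical example of the exponential scaling formula C_i = A_i + B_i x P_i^n mentioned in the record is shown below; the module names, coefficients and parameters are invented for illustration.

        # Hypothetical TDA cost modules: (name, A [M$], B, P, n) with C = A + B * P**n.
        modules = [
            ("reactor vessel",      12.0, 0.080,   600.0, 0.65),   # P = thermal power, MWth
            ("reactor building",    30.0, 0.002, 90000.0, 0.80),   # P = building volume, m3
            ("heat removal system",  8.0, 0.050,   600.0, 0.70),
        ]

        total = 0.0
        for name, A, B, P, n in modules:
            cost = A + B * P ** n          # power-law (exponential formula) cost model
            total += cost
            print(f"{name:22s} {cost:7.1f} M$")
        print(f"{'total (illustrative)':22s} {total:7.1f} M$")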

  2. A stump-to-mill timber production cost-estimating program for cable logging eastern hardwoods

    Science.gov (United States)

    Chris B. LeDoux

    1987-01-01

    ECOST utilizes data from stand inventory, cruise data, and the logging plan for the tract in question. The program produces detailed stump-to-mill cost estimates for specific proposed timber sales. These estimates are then utilized, in combination with specific landowner objectives, to assess the economic feasibility of cable logging a given area. The program output is...

  3. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  4. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  5. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
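
    A condensed sketch of the sample-importance-resampling idea applied to a Leslie matrix is given below: draw a flat natural mortality from a broad prior, build the matrix, weight each draw by how well its implied population growth rate matches an empirical estimate, and resample. The age structure, fecundity and target growth rate are invented, not the grey reef shark values.

        import numpy as np

        rng = np.random.default_rng(5)

        ages, maturity, litter = 19, 9, 3.0      # hypothetical longevity, age at maturity, pups/yr
        lam_obs, lam_sd = 1.02, 0.02             # empirical growth rate and its uncertainty

        def leslie_lambda(M):
            # Dominant eigenvalue of a Leslie matrix built from a flat natural mortality M.
            surv = np.exp(-M)
            fec = np.where(np.arange(ages) >= maturity, 0.5 * litter, 0.0)   # female pups only
            L = np.zeros((ages, ages))
            L[0, :] = fec * surv
            L[np.arange(1, ages), np.arange(ages - 1)] = surv
            return np.max(np.real(np.linalg.eigvals(L)))

        # Sample: draw M from a broad prior and compute the implied growth rate.
        M_draws = rng.uniform(0.05, 0.40, 20000)
        lam_draws = np.array([leslie_lambda(M) for M in M_draws])

        # Importance weights: how well each draw reproduces the empirical growth rate.
        w = np.exp(-0.5 * ((lam_draws - lam_obs) / lam_sd) ** 2)
        keep = rng.choice(M_draws, size=5000, replace=True, p=w / w.sum())   # resample step

        print(f"posterior natural mortality M: {keep.mean():.3f} "
              f"(95% interval {np.percentile(keep, 2.5):.3f}-{np.percentile(keep, 97.5):.3f})")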

  6. Fuzzy Multi-objective Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    Amna Rehmat

    2007-07-01

    Full Text Available Traveling salesman problem (TSP is one of the challenging real-life problems, attracting researchers of many fields including Artificial Intelligence, Operations Research, and Algorithm Design and Analysis. The problem has been well studied till now under different headings and has been solved with different approaches including genetic algorithms and linear programming. Conventional linear programming is designed to deal with crisp parameters, but information about real life systems is often available in the form of vague descriptions. Fuzzy methods are designed to handle vague terms, and are most suited to finding optimal solutions to problems with vague parameters. Fuzzy multi-objective linear programming, an amalgamation of fuzzy logic and multi-objective linear programming, deals with flexible aspiration levels or goals and fuzzy constraints with acceptable deviations. In this paper, a methodology, for solving a TSP with imprecise parameters, is deployed using fuzzy multi-objective linear programming. An example of TSP with multiple objectives and vague parameters is discussed.

  7. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.

  8. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    Science.gov (United States)

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  9. Two Approaches for Estimating Discharge on Ungauged Basins in Oregon, USA

    Science.gov (United States)

    Wigington, P. J.; Leibowitz, S. G.; Comeleo, R. L.; Ebersole, J. L.; Copeland, E. A.

    2009-12-01

    Detailed information on the hydrologic behavior of streams is available for only a small proportion of all streams. Even in cases where discharge has been monitored, these measurements may not be available for a sufficiently long period to characterize the full behavior of a stream. In this presentation, we discuss two separate approaches for predicting discharge at ungauged locations. The first approach models discharge in the Calapooia Watershed, Oregon based on long-term US Geological Survey gauge stations located in two adjacent watersheds. Since late 2008, we have measured discharge and water level over a range of flow conditions at more than a dozen sites within the Calapooia. Initial results indicate that many of these sites, including the mainstem Calapooia and some of its tributaries, can be predicted by these outside gauge stations and simple landscape factors. This is not a true “ungauged” approach, since measurements are required to characterize the range of flow. However, the approach demonstrates how such measurements and more complete data from similar areas can be used to estimate a detailed record for a longer period. The second approach estimates 30 year average monthly discharge at ungauged locations based on a Hydrologic Landscape Region (HLR) model. We mapped HLR class over the entire state of Oregon using an assessment unit with an average size of 44 km2. We then calculated average statewide moisture surplus values for each HLR class, modified to account for snowpack accumulation and snowmelt. We calculated potential discharge by summing these values for each HLR within a watershed. The resulting monthly hydrograph is then transformed to estimate monthly discharge, based on aquifer and soil permeability and terrain. We hypothesize that these monthly values should provide good estimates of discharge in areas where imports from or exports to the deep groundwater system are not significant. We test the approach by comparing results with

  10. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    Full Text Available A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data are fed into the program from an external disk file, which should be in Microsoft Excel format. The output contains the sample size, the tolerance limit, a list of the initial as well as the final estimates of the parameters, standard errors, the values of the Gauss-Newton equations GN1, GN2 and GN3, the number of iterations, the variance (σ²), the Durbin-Watson statistic, goodness-of-fit measures such as R² and the D value, the covariance matrix, and the residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
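
    The Scilab source itself is not reproduced in this record. Purely as an illustration of the underlying technique (not the author's program), a minimal Gauss-Newton fit of the Gompertz curve y = a·exp(−b·exp(−c·t)) might look like the following Python sketch; the data and starting values are hypothetical.

        import numpy as np

        def gompertz(t, p):
            a, b, c = p
            return a * np.exp(-b * np.exp(-c * t))

        def jacobian(t, p):
            a, b, c = p
            e = np.exp(-c * t)
            g = np.exp(-b * e)
            # Partial derivatives of the Gompertz curve with respect to a, b, c
            return np.column_stack([g, -a * e * g, a * b * t * e * g])

        def gauss_newton(t, y, p0, tol=1e-8, max_iter=50):
            p = np.asarray(p0, dtype=float)
            for _ in range(max_iter):
                r = y - gompertz(t, p)                        # residuals
                J = jacobian(t, p)
                step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton step (least squares)
                p += step
                if np.linalg.norm(step) < tol:
                    break
            return p

        # Hypothetical growth data (t in periods)
        t = np.arange(1, 11)
        y = np.array([13., 36., 70., 107., 140., 166., 186., 199., 208., 214.])
        print(gauss_newton(t, y, p0=[220., 4., 0.5]))   # approx. [a, b, c]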

  12. Best-estimate analysis development for BWR systems

    International Nuclear Information System (INIS)

    Sutherland, W.A.; Alamgir, M.; Kalra, S.P.; Beckner, W.D.

    1986-01-01

    The Full Integral Simulation Test (FIST) Program is a three-pronged approach to the development of best-estimate analysis capability for BWR systems. An experimental program in the FIST BWR system simulator facility extends the LOCA data base and adds operational transients data. An analytical method development program with the BWR-TRAC computer program extends the modeling of BWR-specific components and major interfacing systems, and improves numerical techniques to reduce computer running time. A method qualification program tests TRAC-B against experiments run in the FIST facility and extends the results to reactor system applications. With the completion and integration of these three activities, the objective of a best-estimate analysis capability has been achieved. (author)

  13. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  14. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    Science.gov (United States)

    Nichols, James D.; Hines, James E.

    2002-01-01

    We first consider the estimation of the finite rate of population increase or population growth rate, λi, using capture-recapture data from open populations. We review estimation and modelling of λi under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λi using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γi, of Pradel (1996). We review estimation of γi under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λi and γi with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.

  15. On Algebraic Approach for MSD Parametric Estimation

    OpenAIRE

    Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

    2011-01-01

    This article addresses the identification problem of the natural frequency and the damping ratio of a second order continuous system where the input is a sinusoidal signal. An algebra-based approach for identifying parameters of a Mass Spring Damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain, yielding exact formulae which, when placed in the time domain, identify the unknown parameters. We focus ...

  16. A novel approach based on preference-based index for interval bilevel linear programming problem.

    Science.gov (United States)

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  17. A novel approach based on preference-based index for interval bilevel linear programming problem

    Directory of Open Access Journals (Sweden)

    Aihong Ren

    2017-05-01

    Full Text Available This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  18. A comprehensive approach to RCM-based preventive maintenance program development

    International Nuclear Information System (INIS)

    Hall, B.E.; Davis, T.; Pennington, A.J.

    1988-01-01

    In late 1986, Public Service Electric and Gas Company (PSE&G) concluded that to support its vision and strategic planning it would be necessary to develop a consistent approach to maintenance for all nuclear units at Artificial Island. General Physics Corporation was selected to lead a consultant team to support full-scale development of a preventive maintenance (PM) program for the Salem and Hope Creek generating stations based on a reliability-centered maintenance (RCM) approach. RCM was selected because it represents a systematic approach to developing a PM program that provides a logical, consistent, and traceable methodology and produces a well-documented engineering basis for the program. Early in 1987, primary objectives for the PM program were defined. The Phase I tasks addressed key programmatic areas such as maintenance philosophy, procedures, condition monitoring, performance trending, equipment failure data base, organization, PM program effectiveness evaluation, RCM process, reliability/availability modeling, information management, training, spare parts, software/hardware, and commitments. Phase I of the PM program development project was completed in January 1988. Highlights of the Phase I work and the PM program manual are described

  19. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
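
    As a loose illustration of the sampling-based idea only (not the authors' 2-phase algorithm), a pseudoprofile-style bootstrap of the tissue-to-plasma AUC ratio from one-sample-per-subject data could be sketched as follows in Python; the concentrations, time points and number of replicates are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical sparse design: one plasma and one tissue sample per subject,
        # with a few subjects assigned to each nominal sampling time (h).
        times = np.array([1.0, 2.0, 4.0, 8.0])
        plasma = {1.0: [10.2, 9.5, 11.0], 2.0: [7.8, 8.4, 7.1],
                  4.0: [4.9, 5.3, 4.4], 8.0: [2.0, 2.4, 1.8]}
        tissue = {1.0: [3.9, 4.4, 3.5], 2.0: [3.6, 3.1, 3.9],
                  4.0: [2.7, 2.4, 2.9], 8.0: [1.3, 1.1, 1.5]}

        def auc(t, c):
            # Linear trapezoidal area under the concentration-time curve
            c = np.asarray(c)
            return np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))

        ratios = []
        for _ in range(2000):
            # Build one pseudoprofile per matrix by drawing one subject per time point
            cp = [rng.choice(plasma[tp]) for tp in times]
            ct = [rng.choice(tissue[tp]) for tp in times]
            ratios.append(auc(times, ct) / auc(times, cp))

        ratios = np.array(ratios)
        lo, hi = np.percentile(ratios, [2.5, 97.5])
        print(f"tissue-to-plasma ratio: {ratios.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")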

  20. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    Science.gov (United States)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we show a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator part of the objective functions of all levels as well as the control vectors of the higher level decision makers are respectively defined by determining individual optimal solutions of each of the level decision makers. A possible relaxation of the higher level decision is considered for avoiding decision deadlock due to the conflicting nature of objective functions. Then, fuzzy goal programming approach is used for achieving the highest degree of each of the membership goal by minimizing negative deviational variables. We also provide sensitivity analysis with variation of tolerance values on decision vectors to show how the solution is sensitive to the change of tolerance values with the help of a numerical example.

  1. Update to the Fissile Materials Disposition program SST/SGT transportation estimation

    International Nuclear Information System (INIS)

    John Didlake

    1999-01-01

    This report is an update to ''Fissile Materials Disposition Program SST/SGT Transportation Estimation,'' SAND98-8244, June 1998. The Department of Energy Office of Fissile Materials Disposition requested this update as a basis for providing the public with an updated estimation of the number of transportation loads, load miles, and costs associated with the preferred alternative in the Surplus Plutonium Disposition Final Environmental Impact Statement (EIS)

  2. Deemed Savings Estimates for Legacy Air Conditioning and Water Heating Direct Load Control Programs in PJM Region

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles

    2007-03-01

    During 2005 and 2006, the PJM Interconnection (PJM) Load Analysis Subcommittee (LAS) examined ways to reduce the costs and improve the effectiveness of its existing measurement and verification (M&V) protocols for Direct Load Control (DLC) programs. The current M&V protocol requires that a PURPA-compliant Load Research study be conducted every five years for each Load-Serving Entity (LSE). The current M&V protocol is expensive to implement and administer, particularly for mature load control programs, some of which are marginally cost-effective. There was growing evidence that some LSEs were mothballing or dropping their DLC programs rather than incur the expense associated with the M&V. This project had two objectives: (1) examine the potential for developing deemed savings estimates acceptable to PJM for legacy air conditioning and water heating DLC programs, and (2) explore the development of a collaborative, regional, consensus-based approach for conducting monitoring and verification of load reductions for emerging load management technologies for customers that do not have interval metering capability.

  3. Mathematical-programming approaches to test item pool design

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; van der Linden, Willem J.; Ariel, A.

    2002-01-01

    This paper presents an approach to item pool design that has the potential to improve on the quality of current item pools in educational and psychological testing and hence to increase both measurement precision and validity. The approach consists of the application of mathematical programming

  4. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform-dependent values. Connectors...

  5. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  6. Appendix J: Weatherization and Intergovernmental Program (WIP) inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  7. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace-based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace-based approaches over TDE-based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
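
    For orientation only, the flavour of a subspace method can be conveyed with a bare-bones narrowband MUSIC pseudo-spectrum for a uniform linear array, sketched below in Python; the array geometry, source angle and noise level are hypothetical and bear no relation to the acoustic data of the study.

        import numpy as np

        rng = np.random.default_rng(1)
        M, N, d = 8, 400, 0.5          # sensors, snapshots, spacing in wavelengths
        true_doa = 25.0                # source direction in degrees (hypothetical)

        def steering(theta_deg):
            theta = np.deg2rad(theta_deg)
            return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

        # Simulated single-source snapshots with additive complex noise
        s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
        X = np.outer(steering(true_doa), s) + noise

        R = X @ X.conj().T / N                     # sample covariance matrix
        eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
        En = eigvecs[:, :-1]                       # noise subspace (one source assumed)

        angles = np.arange(-90.0, 90.5, 0.5)
        p_music = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
        print("estimated DOA:", angles[int(np.argmax(p_music))], "deg")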

  8. Myth 8: The "Patch-On" Approach to Programming Is Effective

    Science.gov (United States)

    Tomlinson, Carol Ann

    2009-01-01

    It is not likely that any group of educators of the gifted ever sat around a table and came to the decision that a "patch-on" approach to programming for bright learners represented best practice. Nonetheless, it is as common today as 25 years ago that programming for students identified as gifted often represents such an approach. Patch-on…

  9. Approaches to estimating the universe of natural history collections data

    Directory of Open Access Journals (Sweden)

    Arturo H. Ariño

    2010-10-01

    Full Text Available This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in the absence of a complete, world-wide census or register. Estimates of size seem necessary to plan for resource allocation for digitization or data capture, and may help represent how many vouchered primary biodiversity data (in terms of collections, specimens or curatorial units) might remain to be mobilized. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections’ compositions, figuring out the hidden sector of the distributions. Finally, case studies seek to compare digitized data from collections known to the world to the amount of data known to exist in the collection but not generally available or not digitized. Preliminary estimates range from 1.2 to 2.1 gigaunits, of which a mere 3% at most is currently web-accessible through GBIF’s mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change the picture and possibly help narrow the estimate. In particular, unknown collections not having emerged through literature are the major source of uncertainty.

  10. Dose estimation in nuclear medicine patients. Implementation of a calculation program and methodology

    International Nuclear Information System (INIS)

    Prieto, C.; Espana, M.L.; Tomasi, L.; Lopez Franco, P.

    1998-01-01

    Our hospital is developing a nuclear medicine quality assurance program in order to comply with the medical exposure Directive 97/43/EURATOM and the legal requirements established in our legislation. This program includes the quality control of equipment and, in addition, the estimation of doses to patients undergoing nuclear medicine examinations. This paper is focused on the second aspect, and presents a new computer program, developed in our Department, to estimate the absorbed dose in different organs and the effective dose to patients, based upon the data from ICRP Publication 53 and its addendum. (Author) 16 refs

  11. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving a system of linear equations of an ill-conditioned nature. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of the filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of some component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Application results for the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
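
    The record does not include the author's formulation, but the general idea of a truncated-SVD solution of an ill-conditioned attenuation system A x ≈ b can be sketched as follows in Python; the response matrix and measurements below are placeholders, not X-ray data.

        import numpy as np

        def tsvd_solve(A, b, k):
            """Solve A x ~= b keeping only the k largest singular components."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            coeffs = (U[:, :k].T @ b) / s[:k]      # expansion coefficients
            return Vt[:k].T @ coeffs

        # Placeholder ill-conditioned system standing in for the response matrix
        rng = np.random.default_rng(2)
        A = rng.random((20, 15)) @ np.diag(np.logspace(0, -6, 15)) @ rng.random((15, 15))
        x_true = np.abs(np.sin(np.linspace(0, np.pi, 15)))     # smooth "spectrum"
        b = A @ x_true + 1e-4 * rng.standard_normal(20)        # noisy measurements

        for k in (3, 6, 15):
            err = np.linalg.norm(tsvd_solve(A, b, k) - x_true)
            print(f"k={k:2d}  reconstruction error {err:.3f}")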

  12. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

    Science.gov (United States)

    Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

    2017-12-01

    We introduce a data assimilation method to estimate model parameters with observations of passive tracers by directly assimilating Lagrangian Coherent Structures. Our approach differs from the usual Lagrangian Data Assimilation approach, where parameters are estimated based on tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC by a Sequential Monte Carlo (SMC) method, and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.

  13. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective, so clinical laboratories must provide results that are as accurate as possible. Measurement uncertainty helps ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). We then combined them, together with the uncertainties related to the calibrator-assigned values, to obtain a combined uncertainty and, finally, to calculate the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but data from internal and external quality control schemes were used to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar, which would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
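
    As a minimal numerical illustration of such a top-down combination (the figures are invented, not the laboratory's data), relative standard uncertainties from imprecision, bias and calibrator assignment can be combined in quadrature and expanded with a coverage factor:

        import math

        # Hypothetical relative standard uncertainties (%)
        u_imprecision = 4.8   # long-term internal QC (intermediate imprecision)
        u_bias        = 2.9   # from certified reference material / EQA data
        u_calibrator  = 2.0   # stated for the calibrator-assigned values

        u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
        U_expanded = 2 * u_combined      # coverage factor k = 2 (about 95% coverage)

        print(f"combined standard uncertainty: {u_combined:.1f}%")
        print(f"expanded uncertainty (k = 2):  {U_expanded:.1f}%")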

  14. A remark on empirical estimates in multistage stochastic programming

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2002-01-01

    Roč. 9, č. 17 (2002), s. 31-50 ISSN 1212-074X R&D Projects: GA ČR GA402/01/0539; GA ČR GA402/02/1015; GA ČR GA402/01/0034 Institutional research plan: CEZ:AV0Z1075907 Keywords : multistage stochastic programming * empirical estimates * Markov dependence Subject RIV: BB - Applied Statistics, Operational Research

  15. An approach for solving linear fractional programming problems ...

    African Journals Online (AJOL)

    The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebraically using the concept of duality ...

  16. An evolutionary approach to real-time moment magnitude estimation via inversion of displacement spectra

    Science.gov (United States)

    Caprio, M.; Lancieri, M.; Cua, G. B.; Zollo, A.; Wiemer, S.

    2011-01-01

    We present an evolutionary approach to magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectrum constrained by available data. The method consists of two components: 1) estimating the seismic moment by finding the low-frequency plateau Ω0, the corner frequency fc and the attenuation factor (Q) that best fit the observed displacement spectra assuming a Brune ω² model, and 2) estimating magnitude and its uncertainty based on the estimate of seismic moment. A novel characteristic of this method is that it does not rely on empirically derived relationships, but rather involves direct estimation of quantities related to the moment magnitude. SI magnitude and uncertainty estimates are updated each second following the initial P detection. We tested the SI approach on broadband and strong-motion waveform data from 158 Southern California events and 25 Japanese events, for a combined magnitude range of 3 ≤ M ≤ 7. Based on the performance evaluated on this dataset, the SI approach can potentially provide stable estimates of magnitude within 10 seconds of the initial earthquake detection.
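
    As an illustrative sketch only (synthetic spectrum and placeholder constants, not the SI implementation), fitting Ω0, fc and Q to a Brune-type displacement spectrum and converting the plateau to a moment magnitude might look like this in Python:

        import numpy as np
        from scipy.optimize import curve_fit

        t_travel = 10.0                      # travel time (s), assumed known here

        def brune_spectrum(f, omega0, fc, q):
            # Brune omega-squared displacement spectrum with whole-path attenuation
            return omega0 * np.exp(-np.pi * f * t_travel / q) / (1.0 + (f / fc) ** 2)

        # Synthetic "observed" spectrum (hypothetical parameter values)
        f = np.logspace(-1, 1, 60)
        rng = np.random.default_rng(3)
        obs = brune_spectrum(f, 2e-3, 0.8, 300.0) * rng.lognormal(0.0, 0.1, f.size)

        (omega0, fc, q), _ = curve_fit(brune_spectrum, f, obs, p0=[1e-3, 1.0, 200.0])

        # Very simplified moment scaling; density, shear velocity, distance and the
        # radiation coefficient are placeholder values.
        rho, beta, dist, rad = 2700.0, 3500.0, 2.0e4, 0.55
        m0 = 4 * np.pi * rho * beta**3 * dist * omega0 / rad
        mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
        print(f"Omega0={omega0:.2e}  fc={fc:.2f} Hz  Q={q:.0f}  Mw={mw:.2f}")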

  17. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and are in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal of better understanding and characterizing extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the

  18. ANN Based Approach for Estimation of Construction Costs of Sports Fields

    Directory of Open Access Journals (Sweden)

    Michał Juszczyk

    2018-01-01

    Full Text Available Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires dedicated studies due to the specifics of different kinds of facilities. This paper presents a proposed approach to the estimation of construction costs of sports fields based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of initial training of a set of various artificial neural networks. Moreover, one network was tailored to map the relationship between the total cost of construction works and selected cost predictors that are characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results support the proposed approach.

  19. Estimating construction and demolition debris generation using a materials flow analysis approach.

    Science.gov (United States)

    Cochran, K M; Townsend, T G

    2010-11-01

    The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures, such as those used for other segments of the solid waste stream, have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used for construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from various historical construction materials consumption data and estimates of the average service lives of the materials. The MFA approach estimated that approximately 610-780 × 10⁶ Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
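
    The flavour of the materials flow calculation can be conveyed with a toy example (all figures are hypothetical, not the paper's data): construction debris is taken from current consumption and waste factors, and demolition debris from consumption one service life ago.

        # Hypothetical materials flow accounting for C&D debris (units: 10^6 Mg)
        materials = {
            # name        (consumption, waste factor, consumption one service life ago)
            "concrete":   (380.0, 0.05, 150.0),
            "wood":       ( 90.0, 0.10,  60.0),
            "asphalt":    (120.0, 0.03,  70.0),
            "drywall":    ( 30.0, 0.12,  15.0),
        }

        construction = sum(c * wf for c, wf, _ in materials.values())
        demolition = sum(past for _, _, past in materials.values())

        print(f"construction debris: {construction:6.1f} x 10^6 Mg")
        print(f"demolition debris:   {demolition:6.1f} x 10^6 Mg")
        print(f"total C&D debris:    {construction + demolition:6.1f} x 10^6 Mg")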

  20. A New Approach to Programming Language Education for Beginners with Top-Down Learning

    Directory of Open Access Journals (Sweden)

    Daisuke Saito

    2013-12-01

    Full Text Available There are two basic approaches to learning a new programming language: a bottom-up approach and a top-down approach. It has been said that if a learner has already acquired one language, the top-down approach is more efficient for learning another, while for a person who has absolutely no knowledge of any programming language, the bottom-up approach is preferable. The major problem with the bottom-up approach is that it requires a longer period to acquire the language. For quicker learning, this paper applies a top-down approach for beginners who have not yet acquired any programming language.

  1. Technical Note: A comparison of two empirical approaches to estimate in-stream net nutrient uptake

    Science.gov (United States)

    von Schiller, D.; Bernal, S.; Martí, E.

    2011-04-01

    To establish the relevance of in-stream processes on nutrient export at catchment scale it is important to accurately estimate whole-reach net nutrient uptake rates that consider both uptake and release processes. Two empirical approaches have been used in the literature to estimate these rates: (a) the mass balance approach, which considers changes in ambient nutrient loads corrected by groundwater inputs between two stream locations separated by a certain distance, and (b) the spiralling approach, which is based on the patterns of longitudinal variation in ambient nutrient concentrations along a reach following the nutrient spiralling concept. In this study, we compared the estimates of in-stream net nutrient uptake rates of nitrate (NO3) and ammonium (NH4) and the associated uncertainty obtained with these two approaches at different ambient conditions using a data set of monthly samplings in two contrasting stream reaches during two hydrological years. Overall, the rates calculated with the mass balance approach tended to be higher than those calculated with the spiralling approach only at high ambient nitrogen (N) concentrations. Uncertainty associated with these estimates also differed between both approaches, especially for NH4 due to the general lack of significant longitudinal patterns in concentration. The advantages and disadvantages of each of the approaches are discussed.

  2. Comparison of Gene Expression Programming with neuro-fuzzy and neural network computing techniques in estimating daily incoming solar radiation in the Basque Country (Northern Spain)

    International Nuclear Information System (INIS)

    Landeras, Gorka; López, José Javier; Kisi, Ozgur; Shiri, Jalal

    2012-01-01

    Highlights: ► Solar radiation estimation based on Gene Expression Programming is unexplored. ► This approach is evaluated for the first time in this study. ► Other artificial intelligence models (ANN and ANFIS) are also included in the study. ► New alternatives for solar radiation estimation based on temperatures are provided. - Abstract: Surface incoming solar radiation is a key variable for many agricultural, meteorological and solar energy conversion related applications. In the absence of the required meteorological sensors for the detection of global solar radiation it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation by using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). A comparison was also made between these techniques and traditional temperature-based global solar radiation estimation equations. Root mean square error (RMSE), mean absolute error (MAE), the RMSE-based skill score (SSRMSE), the MAE-based skill score (SSMAE) and the r² criterion of Nash and Sutcliffe were used to assess the models’ performances. An ANN (a four-input multilayer perceptron with 10 neurons in the hidden layer) presented the best performance among the studied models (RMSE of 2.93 MJ m⁻² d⁻¹). The ability of the GEP approach to model global solar radiation based on daily atmospheric variables was found to be satisfactory.
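
    The performance measures mentioned above are straightforward to reproduce; a small helper computing RMSE, MAE, an RMSE-based skill score against a reference model and the Nash-Sutcliffe efficiency is sketched below (the arrays are hypothetical, not the Basque Country data).

        import numpy as np

        def rmse(obs, sim):
            return np.sqrt(np.mean((obs - sim) ** 2))

        def mae(obs, sim):
            return np.mean(np.abs(obs - sim))

        def skill_score_rmse(obs, sim, sim_ref):
            # 1 - RMSE(model)/RMSE(reference): positive means improvement over the reference
            return 1.0 - rmse(obs, sim) / rmse(obs, sim_ref)

        def nash_sutcliffe(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        # Hypothetical daily solar radiation (MJ m^-2 d^-1): observed, ANN, reference equation
        obs    = np.array([18.2, 22.5, 12.4, 25.1, 16.8, 20.3])
        ann    = np.array([17.5, 21.8, 14.0, 24.2, 17.9, 19.5])
        ref_eq = np.array([15.9, 24.8, 10.1, 27.6, 14.2, 22.9])

        print(f"RMSE={rmse(obs, ann):.2f}  MAE={mae(obs, ann):.2f}  "
              f"SS_RMSE={skill_score_rmse(obs, ann, ref_eq):.2f}  "
              f"NSE={nash_sutcliffe(obs, ann):.2f}")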

  3. Budget estimates: Fiscal year 1994. Volume 3: Research and program management

    Science.gov (United States)

    1994-01-01

    The research and program management (R&PM) appropriation provides the salaries, other personnel and related costs, and travel support for NASA's civil service workforce. This FY 1994 budget funds costs associated with 23,623 full-time equivalent (FTE) work years. Budget estimates are provided for all NASA centers by categories such as space station and new technology investments, space flight programs, space science, life and microgravity sciences, advanced concepts and technology, center management and operations support, launch services, mission to planet earth, tracking and data programs, aeronautical research and technology, and safety, reliability, and quality assurance.

  4. Key Aspects of the Federal Direct Loan Program's Cost Estimates: Department of Education. Report to Congressional Requesters.

    Science.gov (United States)

    Calbom, Linda M.; Ashby, Cornelia M.

    Because of concerns about the Department of Education's reliance on estimates to project costs of the William D. Ford Federal Direct Loan Program (FDLP) and a lack of historical information on which to base those estimates, Congress asked the General Accounting Office (GAO) to review how the department develops its cost estimates for the program,…

  5. 2003 status report: Savings estimates for the ENERGY STAR(R) voluntary labeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla

    2004-11-09

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2002, what we expect in 2003, and provide savings forecasts for two market penetration scenarios for the period 2003 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  6. 2002 status report: Savings estimates for the ENERGY STAR(R) voluntary labeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan

    2003-03-03

    ENERGY STAR [registered trademark] is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2001, what we expect in 2002, and provide savings forecasts for two market penetration scenarios for the period 2002 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  7. Multifaceted Approach to Designing an Online Masters Program.

    Science.gov (United States)

    McNeil, Sara G.; Chernish, William N.; DeFranco, Agnes L.

    At the Conrad N. Hilton College of Hotel and Restaurant Management at the University of Houston (Texas), the faculty and administrators made a conscious effort to take a broad, extensive approach to designing and implementing a fully online masters program. This approach was centered on a comprehensive needs assessment model and sought input from…

  8. Quadratic programming with fuzzy parameters: A membership function approach

    International Nuclear Information System (INIS)

    Liu, S.-T.

    2009-01-01

    Quadratic programming has been widely applied to solving real world problems. The conventional quadratic programming model requires the parameters to be known constants. In the real world, however, the parameters are seldom known exactly and have to be estimated. This paper discusses fuzzy quadratic programming problems where the cost coefficients, constraint coefficients, and right-hand sides are represented by convex fuzzy numbers. Since the parameters in the program are fuzzy numbers, the derived objective value is a fuzzy number as well. Using Zadeh's extension principle, a pair of two-level mathematical programs is formulated to calculate the upper bound and lower bound of the objective values of the fuzzy quadratic program. Based on the duality theorem and by applying the variable transformation technique, the pair of two-level mathematical programs is transformed into a family of conventional one-level quadratic programs. Solving the pair of quadratic programs produces the fuzzy objective values of the problem. An example illustrates the method proposed in this paper.

  9. Best estimate LB LOCA approach based on advanced thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Sauvage, J.Y.; Gandrille, J.L.; Gaurrand, M.; Rochwerger, D.; Thibaudeau, J.; Viloteau, E.

    2004-01-01

    Improvements achieved in thermal-hydraulics with the development of Best Estimate computer codes have led a number of Safety Authorities to advocate realistic analyses instead of conservative calculations. The potential of a Best Estimate approach for the analysis of LOCAs prompted FRAMATOME to enter early into the development, with CEA and EDF, of the 2nd generation code CATHARE, and then of an LBLOCA BE methodology with BWNT following the Code Scaling, Applicability and Uncertainty (CSAU) procedure. CATHARE and TRAC are the basic tools for the LOCA studies which will be performed by FRAMATOME according to either a deterministic better estimate (dbe) methodology or a Statistical Best Estimate (SBE) methodology. (author)

  10. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it offers another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
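
    The comparison itself is easy to reproduce in outline; a schematic scikit-learn version with synthetic predictors (nothing like the MODIS/PRISM/GLDAS features of the study) might read as follows.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        X = rng.standard_normal((500, 6))        # stand-ins for vegetation/climate/soil features
        y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0.0, 0.5, 500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = {
            "SVM": SVR(C=10.0),
            "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
            "DNN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
        }
        for name, model in models.items():
            model.fit(X_tr, y_tr)
            r = np.corrcoef(y_te, model.predict(X_te))[0, 1]   # correlation with "observed" yield
            print(f"{name}: r = {r:.3f}")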

  11. Estimating absolute configurational entropies of macromolecules: the minimally coupled subspace approach.

    Directory of Open Access Journals (Sweden)

    Ulf Hensen

    Full Text Available We develop a general minimally coupled subspace approach (MCSA to compute absolute entropies of macromolecules, such as proteins, from computer generated canonical ensembles. Our approach overcomes limitations of current estimates such as the quasi-harmonic approximation which neglects non-linear and higher-order correlations as well as multi-minima characteristics of protein energy landscapes. Here, Full Correlation Analysis, adaptive kernel density estimation, and mutual information expansions are combined and high accuracy is demonstrated for a number of test systems ranging from alkanes to a 14 residue peptide. We further computed the configurational entropy for the full 67-residue cofactor of the TATA box binding protein illustrating that MCSA yields improved results also for large macromolecular systems.

  12. Technical Approach and Plan for Transitioning Spent Nuclear Fuel (SNF) Project Facilities to the Environmental Restoration Program

    International Nuclear Information System (INIS)

    SKELLY, W.A.

    1999-01-01

    This document describes the approach and process by which the 100-K Area Facilities are to be deactivated and transitioned to the Environmental Restoration Program after spent nuclear fuel has been removed from the K Basins. It describes the Transition Project's scope and objectives, work breakdown structure, activity planning, estimated cost, and schedule. This report will be used as a planning document for project management and control and to communicate details of project content and integration

  13. 2004 status report: Savings estimates for the ENERGY STAR(R) voluntary labeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla

    2004-03-09

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2003, what we expect in 2004, and provide savings forecasts for two market penetration scenarios for the periods 2004 to 2010 and 2004 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  14. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software

    International Nuclear Information System (INIS)

    Bierschbach, M.C.; Mencinsky, G.J.

    1993-10-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the US Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning

  15. A state-and-transition simulation modeling approach for estimating the historical range of variability

    Directory of Open Access Journals (Sweden)

    Kori Blankenship

    2015-04-01

    Full Text Available Reference ecological conditions offer important context for land managers as they assess the condition of their landscapes and provide benchmarks for desired future conditions. State-and-transition simulation models (STSMs are commonly used to estimate reference conditions that can be used to evaluate current ecosystem conditions and to guide land management decisions and activities. The LANDFIRE program created more than 1,000 STSMs and used them to assess departure from a mean reference value for ecosystems in the United States. While the mean provides a useful benchmark, land managers and researchers are often interested in the range of variability around the mean. This range, frequently referred to as the historical range of variability (HRV, offers model users improved understanding of ecosystem function, more information with which to evaluate ecosystem change and potentially greater flexibility in management options. We developed a method for using LANDFIRE STSMs to estimate the HRV around the mean reference condition for each model state in ecosystems by varying the fire probabilities. The approach is flexible and can be adapted for use in a variety of ecosystems. HRV analysis can be combined with other information to help guide complex land management decisions.

  16. Estimation of an Examinee's Ability in the Web-Based Computerized Adaptive Testing Program IRT-CAT

    Directory of Open Access Journals (Sweden)

    Yoon-Hwan Lee

    2006-11-01

    Full Text Available We developed a program to estimate an examinee's ability in order to provide freely available access to a web-based computerized adaptive testing (CAT) program. We used PHP and JavaScript as the programming languages, PostgreSQL as the database management system on an Apache web server, and Linux as the operating system. A system that allows for user input, searching within the entered items, and test creation was constructed. We performed ability estimation on each test based on the Rasch model and the 2- and 3-parameter logistic models. Our system provides an algorithm for a web-based CAT, replacing previous personal computer-based ones, and makes it possible to estimate an examinee's ability immediately at the end of the test.
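
    Independently of the PHP implementation, the estimation step can be illustrated by a maximum-likelihood ability update under a two-parameter logistic (2PL) model; the item parameters and responses below are hypothetical.

        import numpy as np

        # Hypothetical 2PL item parameters: discrimination a, difficulty b
        a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
        b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
        u = np.array([1, 1, 1, 0, 0])            # examinee's responses (1 = correct)

        def p_correct(theta):
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        theta = 0.0
        for _ in range(20):                      # Newton-Raphson on the log-likelihood
            p = p_correct(theta)
            grad = np.sum(a * (u - p))           # first derivative
            info = np.sum(a**2 * p * (1 - p))    # (expected) information
            theta += grad / info

        se = 1.0 / np.sqrt(np.sum(a**2 * p_correct(theta) * (1 - p_correct(theta))))
        print(f"estimated ability: {theta:.2f}  (standard error {se:.2f})")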

  17. Appendix F: FreedomCAR and Vehicle Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  18. Consolidated Fuel Reprocessing Program. Operating experience with pulsed-column holdup estimators

    International Nuclear Information System (INIS)

    Ehinger, M.H.

    1986-01-01

    Methods for estimating pulsed-column holdup are being investigated as part of the Safeguards Assessment task of the Consolidated Fuel Reprocessing Program (CFRP) at the Oak Ridge National Laboratory. The CFRP was a major sponsor of test runs at the Barnwell Nuclear Fuel plant (BNFP) in 1980 and 1981. During these tests, considerable measurement data were collected for pulsed columns in the plutonium purification portion of the plant. These data have been used to evaluate and compare three available methods of holdup estimation

  19. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    Science.gov (United States)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to provide a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing its size-dependent mobilities to those predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion accounting for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
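
    A stripped-down version of the idea (with invented parameters, not the calibrated ones from the study) is to describe the pre-entrainment bed PSD as lognormal and split it at the threshold size of incipient motion:

        import numpy as np
        from scipy import stats

        # Hypothetical pre-entrainment bed PSD: lognormal in grain size (mm)
        sigma, median = 0.8, 2.0                 # shape parameter and median grain size
        psd = stats.lognorm(s=sigma, scale=median)

        d_threshold = 1.2                        # threshold size of incipient motion (mm)
        print(f"mobile fraction of the bed: {psd.cdf(d_threshold):.2f}")

        # Size-dependent mobility: share of each size class assumed entrained
        edges = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
        for lo, hi in zip(edges[:-1], edges[1:]):
            frac = psd.cdf(hi) - psd.cdf(lo)
            if hi <= d_threshold:
                mobility = 1.0
            elif lo >= d_threshold:
                mobility = 0.0
            else:
                mobility = (psd.cdf(d_threshold) - psd.cdf(lo)) / frac
            print(f"{lo:4.2f}-{hi:4.2f} mm: class fraction {frac:.2f}, mobility {mobility:.2f}")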

  20. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
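
    The algebraic core of such a compensator, a minimum norm least squares solution of a possibly rank-deficient linear system, is readily obtained with the Moore-Penrose pseudoinverse; the matrices below are placeholders, not the fuzzy time-delay system of the paper.

        import numpy as np

        # Placeholder linear relation E f = r between residual signals and the fault vector
        E = np.array([[1.0, 2.0, 1.0],
                      [2.0, 4.0, 2.0],           # dependent row makes E rank-deficient
                      [0.0, 1.0, 3.0]])
        r = np.array([1.0, 2.0, 0.5])

        # Minimum norm least squares solution via the Moore-Penrose pseudoinverse
        f_hat = np.linalg.pinv(E) @ r
        print("fault estimate:", f_hat)

        # np.linalg.lstsq returns the same minimum-norm least squares solution
        f_hat2, *_ = np.linalg.lstsq(E, r, rcond=None)
        print("lstsq estimate:", f_hat2)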

  1. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  2. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    Science.gov (United States)

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  3. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available Analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating the professionalism of an employee, both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from the standpoint of the personal characteristics of the employee that determine the results of his work. This approach includes the assessment of the level of qualification and motivation of the employee for each key job function, as well as of the final results of its implementation against the criteria of efficiency and reliability. The proposed methodological approach to estimating the professionalism of an employee makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker to ensure the required level of reliability and efficiency of the obtained results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  4. iBEST: a program for burnup history estimation of spent fuels based on ORIGEN-S

    International Nuclear Information System (INIS)

    Kim, Do Yeon; Hong, Ser Gi; Ahn, Gil Hoon

    2015-01-01

    In this paper, we describe a computer program, iBEST (inverse Burnup ESTimator), that we developed to accurately estimate the burnup histories of spent nuclear fuels from sample measurement data. The burnup history parameters include initial uranium enrichment, burnup, cooling time after discharge from the reactor, and reactor type. The program uses algebraic equations derived from the simplified burnup chains of the major actinides for initial estimates of burnup and uranium enrichment, and it uses the ORIGEN-S code to correct these initial estimates for improved accuracy. In addition, we developed a new stable bisection method coupled with ORIGEN-S to correct the burnup and enrichment values and implemented it in iBEST in order to take full advantage of the capabilities of ORIGEN-S for improving accuracy. The iBEST program was tested using several problems for verification and well-known realistic problems with measurement data from spent fuel samples from the Mihama-3 reactor for validation. The test results show that iBEST accurately estimates the burnup history parameters for the test problems and gives an acceptable level of accuracy for the realistic Mihama-3 problems.
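
    The bisection correction described above can be pictured with the following sketch: a forward depletion calculation is evaluated repeatedly and the burnup is bisected until the calculation reproduces a measured quantity. The forward_model below is a hypothetical monotone surrogate used only so the example runs; iBEST itself drives ORIGEN-S at this step, and the measured ratio and search bounds are made-up values.

```python
# Sketch of a bisection-style burnup correction: adjust burnup until a forward
# depletion calculation reproduces a measured isotope-ratio indicator.
def forward_model(burnup_gwd_t):
    # Toy stand-in: predicted indicator ratio grows monotonically with burnup.
    return 0.012 * burnup_gwd_t + 0.0005 * burnup_gwd_t ** 1.2

def estimate_burnup(measured_ratio, lo=0.0, hi=80.0, tol=1e-4, max_iter=100):
    """Bisection on burnup so the forward model matches the measurement."""
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if forward_model(mid) < measured_ratio:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(f"Estimated burnup: {estimate_burnup(0.55):.2f} GWd/t")
```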

  5. Estimating productivity costs using the friction cost approach in practice: a systematic review.

    Science.gov (United States)

    Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna

    2016-01-01

    The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that estimated productivity costs using the friction cost approach and were published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appear to differ in quality by country. The review reveals gaps and a lack of clarity in the reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method are to be realized.

  6. An Improved Dynamic Programming Decomposition Approach for Network Revenue Management

    OpenAIRE

    Dan Zhang

    2011-01-01

    We consider a nonlinear nonseparable functional approximation to the value function of a dynamic programming formulation for the network revenue management (RM) problem with customer choice. We propose a simultaneous dynamic programming approach to solve the resulting problem, which is a nonlinear optimization problem with nonlinear constraints. We show that our approximation leads to a tighter upper bound on optimal expected revenue than some known bounds in the literature. Our approach can ...

  7. A catalytic approach to estimate the redox potential of heme-peroxidases

    International Nuclear Information System (INIS)

    Ayala, Marcela; Roman, Rosa; Vazquez-Duhalt, Rafael

    2007-01-01

    The redox potential of heme-peroxidases varies according to a combination of structural components within the active site and its vicinities. For each peroxidase, this redox potential imposes a thermodynamic threshold on the range of oxidizable substrates. However, the instability of enzymatic intermediates during the catalytic cycle precludes the use of direct voltammetry to measure the redox potential of most peroxidases. Here we describe a novel approach to estimate the redox potential of peroxidases, which directly depends on the catalytic performance of the activated enzyme. Selected p-substituted phenols are used as substrates for the estimations. The results obtained with this catalytic approach correlate well with the oxidative capacity predicted by the redox potential of the Fe(III)/Fe(II) couple.

  8. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision making units are of great importance in applied production economic works. This paper estimates technical efficiency from the stochastic frontier model using Jondrow, and Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  9. On the implicit programming approach in a class of mathematical programs with equilibrium constraints

    Czech Academy of Sciences Publication Activity Database

    Outrata, Jiří; Červinka, Michal

    2009-01-01

    Roč. 38, 4B (2009), s. 1557-1574 ISSN 0324-8569 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10750506 Keywords : mathematical problem with equilibrium constraint * state constraints * implicit programming * calmness * exact penalization Subject RIV: BA - General Mathematics Impact factor: 0.378, year: 2009 http://library.utia.cas.cz/separaty/2010/MTR/outrata-on the implicit programming approach in a class of mathematical programs with equilibrium constraints.pdf

  10. A cutting- plane approach for semi- infinite mathematical programming

    African Journals Online (AJOL)

    Many situations, ranging from industrial to social via economic and environmental problems, may be cast as a semi-infinite mathematical program. In this paper, the cutting-plane approach, which lends itself better to standard non-linear programs, is exploited with good reason for grappling with linear, convex and ...
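
    For readers unfamiliar with the mechanics, the sketch below shows a generic cutting-plane loop on a toy semi-infinite linear program: only a finite subset of the infinitely many constraints is kept, the most violated constraint over a discretized index set is added at each iteration, and the loop stops once no constraint is significantly violated. The specific problem, tolerance and discretization are illustrative assumptions, not taken from the paper.

```python
# Minimal cutting-plane loop for a toy semi-infinite linear program:
#   minimize  -x1 - x2   s.t.   x1*cos(t) + x2*sin(t) <= 1  for all t in [0, pi/2],
#   0 <= x1, x2 <= 2.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])
bounds = [(0.0, 2.0), (0.0, 2.0)]
t_grid = np.linspace(0.0, np.pi / 2, 2001)      # dense sample of the index set

A, b = [], []                                   # currently active cuts
for t in (0.0, np.pi / 2):                      # start with two cuts
    A.append([np.cos(t), np.sin(t)]); b.append(1.0)

for it in range(50):
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds, method="highs")
    x = res.x
    # Find the most violated constraint over the (discretized) index set.
    violations = x[0] * np.cos(t_grid) + x[1] * np.sin(t_grid) - 1.0
    worst = int(np.argmax(violations))
    if violations[worst] <= 1e-8:
        break
    A.append([np.cos(t_grid[worst]), np.sin(t_grid[worst])]); b.append(1.0)

print(f"Iterations: {it + 1}, solution: {x.round(4)}")   # approx [0.7071, 0.7071]
```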

  11. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    Science.gov (United States)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is limited to those sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models commonly can be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland State, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and through a Bootstrap approach. The basis of Bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error...
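
    As a simplified illustration of maximum likelihood cardinality estimation for a framed-slotted ALOHA reader session, the sketch below picks the tag count that maximizes the multinomial likelihood of the observed empty, singleton and collision slot counts in a single frame. Detection errors and multiple independent reader sessions, which are the focus of the paper, are deliberately omitted, and the frame size and slot counts are made-up values.

```python
# Maximum likelihood tag-set cardinality estimate from one framed-slotted
# ALOHA frame, ignoring detection errors (an idealized single-session model).
import numpy as np

def log_likelihood(n, frame_size, n_empty, n_single, n_coll):
    p0 = (1.0 - 1.0 / frame_size) ** n                          # slot empty
    p1 = n / frame_size * (1.0 - 1.0 / frame_size) ** (n - 1)   # exactly one tag
    pc = max(1.0 - p0 - p1, 1e-12)                              # collision
    return n_empty * np.log(p0) + n_single * np.log(p1) + n_coll * np.log(pc)

def estimate_tags(frame_size, n_empty, n_single, n_coll, n_max=2000):
    candidates = np.arange(max(n_single, 1), n_max)
    ll = [log_likelihood(n, frame_size, n_empty, n_single, n_coll) for n in candidates]
    return int(candidates[int(np.argmax(ll))])

# Example frame: 128 slots, 40 empty, 52 singletons, 36 collisions.
print("ML estimate of tag count:", estimate_tags(128, 40, 52, 36))
```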

  13. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Science.gov (United States)

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...

  14. Optimization of the representativeness and transposition approach, for the neutronic design of experimental programs in critical mock-up

    International Nuclear Information System (INIS)

    Dos-Santos, N.

    2013-01-01

    The work performed during this thesis focused on uncertainty propagation (nuclear data, technological uncertainties, calculation biases,...) on integral parameters, and on the development of a novel approach enabling this uncertainty to be reduced a priori, directly from the design phase of a new experimental program. This approach is based on a multi-parameter, multi-criteria extension of representativeness and transposition theories. The first part of this PhD work covers an optimization study of sensitivity and uncertainty calculation schemes at different modeling scales (cell, assembly and whole core) for LWRs and FBRs. A degraded scheme, based on standard and generalized perturbation theories, has been validated for the calculation of uncertainty propagation to various integral quantities of interest. It demonstrated the good a posteriori representativeness of the EPICURE experiment for the validation of mixed UOX-MOX loadings, as well as the importance of some nuclear data in the power tilt phenomenon in large LWR cores. The second part of this work was devoted to the development of methods and tools for the optimized design of experimental programs in ZPRs. Those methods are based on multi-parameter representativeness using various quantities of interest simultaneously. Finally, an original study has been conducted on the rigorous estimation of correlations between experimental programs in the transposition process. The coupling of experimental correlations with the multi-parametric representativeness approach enables new programs to be designed efficiently, able to answer additional qualification requirements on calculation tools. (author) [fr]

  15. A quantitative experimental paradigm to optimize construction of rank order lists in the National Resident Matching Program: the ROSS-MOORE approach.

    Science.gov (United States)

    Ross, David A; Moore, Edward Z

    2013-09-01

    As part of the National Resident Matching Program, programs must submit a rank order list of desired applicants. Despite the importance of this process and the numerous manifest limitations of traditional approaches, minimal research has been conducted to examine the accuracy of different ranking strategies. The authors developed the Moore Optimized Ordinal Rank Estimator (MOORE), a novel algorithm for ranking applicants that is based on college sports ranking systems. Because it is not possible to study the Match in vivo, the authors then designed the Recruitment Outcomes Simulation System (ROSS). This program was used to simulate a series of interview seasons and to compare MOORE and traditional approaches under different conditions. The accuracy of traditional ranking and the MOORE approach are equally and adversely affected by higher levels of intrarater variability. However, compared with traditional ranking methods, MOORE produces a more accurate rank order list as interrater variability increases. The present data demonstrate three key findings. First, they provide proof of concept that it is possible to scientifically test the accuracy of different rank methods used in the Match. Second, they show that small amounts of variability can have a significant adverse impact on the accuracy of rank order lists. Finally, they demonstrate that an ordinal approach may lead to a more accurate rank order list in the presence of interviewer bias. The ROSS-MOORE approach offers programs a novel way to optimize the recruitment process and, potentially, to construct a more accurate rank order list.

  16. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the value of many parameters of systems biology mathematical models is still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  17. Concurrent object-oriented programming: The MP-Eiffel approach

    OpenAIRE

    Silva, Miguel Augusto Mendes Oliveira e

    2004-01-01

    This article evaluates several possible approaches for integrating concurrency into object-oriented programming languages, presenting afterwards a new language named MP-Eiffel. MP-Eiffel was designed in an attempt to include all the essential properties of both concurrent and object-oriented programming with simplicity and safety. Special care was taken to achieve the orthogonality of all the language mechanisms, allowing their joint use without unsafe side-effects (such as inh...

  18. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but only applicable to synchronous systems. To overcome this limitation, we propose a technique whereby the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on synchronization control. The feasibility of this approach is illustrated in several chaotic systems.

  19. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    International Nuclear Information System (INIS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-01-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
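
    A minimal sketch of the maximum likelihood step is given below, under the common assumption that each waiting time collected at a programmed temperature is exponentially distributed with an Arrhenius rate k(T) = nu * exp(-Ea/(kB*T)). Synthetic waiting times stand in for TPMD output, and the true parameter values, temperature range and sample size are illustrative assumptions.

```python
# Maximum likelihood fit of Arrhenius parameters (nu, Ea) from waiting times
# observed at different temperatures, assuming exponential waiting times.
import numpy as np
from scipy.optimize import minimize

kB = 8.617333e-5                      # Boltzmann constant, eV/K
rng = np.random.default_rng(1)

nu_true, ea_true = 1e12, 0.5          # 1/s, eV (illustrative values)
temps = rng.uniform(300.0, 600.0, 800)             # sampled temperatures
rates = nu_true * np.exp(-ea_true / (kB * temps))
waits = rng.exponential(1.0 / rates)                # one waiting time per escape

def neg_log_likelihood(params):
    log_nu, ea = params
    k = np.exp(log_nu - ea / (kB * temps))
    return -np.sum(np.log(k) - k * waits)

res = minimize(neg_log_likelihood, x0=[np.log(1e10), 0.3], method="Nelder-Mead")
log_nu_hat, ea_hat = res.x
print(f"nu ~ {np.exp(log_nu_hat):.2e} 1/s, Ea ~ {ea_hat:.3f} eV")
```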

  20. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in [Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.

  1. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    OpenAIRE

    Vjekoslav Klaric

    2011-01-01

    This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the nonobserved economy. After a glance at the ...

  2. Constraint Logic Programming approach to protein structure prediction

    Directory of Open Access Journals (Sweden)

    Fogolari Federico

    2004-11-01

    Full Text Available Abstract Background The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Results Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have been also exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all atom models with plausible structure. Results have been compared with a similar approach using a well-established technique as molecular dynamics. Conclusions The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying protein simplified models, which can be converted into realistic all atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies, resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  3. Constraint Logic Programming approach to protein structure prediction.

    Science.gov (United States)

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

    The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have been also exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all atom models with plausible structure. Results have been compared with a similar approach using a well-established technique as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying protein simplified models, which can be converted into realistic all atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies, resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  4. Automatic Sky View Factor Estimation from Street View Photographs—A Big Data Approach

    Directory of Open Access Journals (Sweden)

    Jianming Liang

    2017-04-01

    Full Text Available Hemispherical (fisheye) photography is a well-established approach for estimating the sky view factor (SVF). High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over a large urban area, but such data are not always available and are difficult to acquire. Street view panoramas have become widely available in urban areas worldwide: Google Street View (GSV) maintains a global network of panoramas excluding China and several other countries; Baidu Street View (BSV) and Tencent Street View (TSV) focus their panorama acquisition efforts within China, and have covered hundreds of cities therein. In this paper, we approach this issue from a big data perspective by presenting and validating a method for automatic estimation of SVF from massive amounts of street view photographs. Comparisons were made with SVF estimates derived from two independent sources: a LiDAR-based Digital Surface Model (DSM) and an oblique airborne photogrammetry-based 3D city model (OAP3D), resulting in correlation coefficients of 0.863 and 0.987, respectively. The comparisons demonstrated the capacity of the proposed method to provide reliable SVF estimates. Additionally, we present an application of the proposed method with about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it has shown the potential of the proposed approach to assist urban climate and urban planning research. However, further development is needed before this approach can be finally delivered to the urban climate and urban planning communities for practical applications.
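
    The sketch below shows one common way an SVF value can be computed once a hemispherical image has been reduced to a binary sky mask: the mask is divided into concentric annuli and each annulus's sky fraction is weighted by the solid angle it subtends under an assumed equiangular fisheye projection. This is a generic textbook-style approximation used for illustration, not the specific segmentation or projection pipeline of the paper; the synthetic obstruction is likewise invented.

```python
# Annulus-weighted sky view factor from a binary fisheye sky mask,
# assuming an equiangular fisheye projection (radius proportional to zenith angle).
import numpy as np

def sky_view_factor(sky_mask, n_rings=36):
    """sky_mask: square boolean array, True where the fisheye pixel shows sky."""
    n = sky_mask.shape[0]
    yy, xx = np.indices((n, n))
    r = np.hypot(xx - (n - 1) / 2.0, yy - (n - 1) / 2.0) / (n / 2.0)   # 0..1 inside image circle
    svf = 0.0
    for i in range(n_rings):
        ring = (r >= i / n_rings) & (r < (i + 1) / n_rings)
        if not ring.any():
            continue
        sky_fraction = sky_mask[ring].mean()
        # Cosine-weighted solid-angle contribution of this zenith-angle band.
        theta1, theta2 = (np.pi / 2) * i / n_rings, (np.pi / 2) * (i + 1) / n_rings
        weight = np.sin(theta2) ** 2 - np.sin(theta1) ** 2
        svf += weight * sky_fraction
    return svf

# Synthetic example: open sky except a "building" blocking one quadrant.
n = 400
yy, xx = np.indices((n, n))
mask = np.hypot(xx - n / 2, yy - n / 2) <= n / 2          # fisheye image circle
mask &= ~((xx > n / 2) & (yy > n / 2))                    # obstruction
print(f"Estimated SVF: {sky_view_factor(mask):.2f}")      # roughly 0.75
```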

  5. RiD: A New Approach to Estimate the Insolvency Risk

    Directory of Open Access Journals (Sweden)

    Marco Aurélio dos Santos Sanfins

    2014-10-01

    Full Text Available Given the recent international crises and the increasing number of defaults, several researchers have attempted to develop metrics that calculate the probability of insolvency with higher accuracy. The approaches commonly used, however, consider neither the credit risk nor the severity of the distance between receivables and obligations among different periods. In this paper we mathematically present an approach that allows us to estimate the insolvency risk by considering not only future receivables and obligations, but also the severity of the distance between them and the quality of the respective receivables. Using Monte Carlo simulations and hypothetical examples, we show that our metric is able to estimate the insolvency risk with high accuracy. Moreover, our results suggest that in the absence of a smooth distribution between receivables and obligations, there is a non-null insolvency risk even when the present value of receivables is larger than the present value of the obligations.

  6. Design and implementation of estimation-based monitoring programs for flora and fauna: A case study on the Cherokee National Forest

    Science.gov (United States)

    Klimstra, J.D.; O'Connell, A.F.; Pistrang, M.J.; Lewis, L.M.; Herrig, J.A.; Sauer, J.R.

    2007-01-01

    Science-based monitoring of biological resources is important for a greater understanding of ecological systems and for assessment of the target population using theory-based management approaches. When selecting variables to monitor, managers first need to carefully consider their objectives, the geographic and temporal scale at which they will operate, and the effort needed to implement the program. Generally, monitoring can be divided into two categories: index and inferential. Although index monitoring is usually easier to implement, analysis of index data requires strong assumptions about consistency in detection rates over time and space, and parameters are often biased, not accounting for detectability and spatial variation. In most cases, individuals are not always available for detection during sampling periods, and the entire area of interest cannot be sampled. Conversely, inferential monitoring is more rigorous because it is based on nearly unbiased estimators of spatial distribution. Thus, we recommend that detectability and spatial variation be considered for all monitoring programs that intend to make inferences about the target population or the area of interest. Application of these techniques is especially important for the monitoring of Threatened and Endangered (T&E) species because it is critical to determine if population size is increasing or decreasing with some level of certainty. Use of estimation-based methods and probability sampling will reduce many of the biases inherently associated with index data and provide meaningful information with respect to changes that occur in target populations. We incorporated inferential monitoring into protocols for T&E species spanning a wide range of taxa on the Cherokee National Forest in the Southern Appalachian Mountains. We review the various approaches employed for different taxa and discuss design issues, sampling strategies, data analysis, and the details of estimating detectability using site

  7. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding computational approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
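
    The sketch below illustrates the emulator-plus-MCMC idea in miniature: an ensemble of runs of a (here deliberately cheap) stand-in model trains a Gaussian-process surrogate, and a simple random-walk Metropolis sampler then explores the posterior using only the surrogate. The stand-in model, prior range, measurement value and error are all invented for the example; this is not the density functional theory application of the paper.

```python
# Emulator-based Bayesian calibration sketch: GP surrogate + random-walk Metropolis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
expensive_model = lambda theta: np.sin(3.0 * theta) + 0.5 * theta   # stand-in for eta(.)

# Ensemble of model runs (the only place the "expensive" model is evaluated).
theta_design = np.linspace(0.0, 2.0, 25).reshape(-1, 1)
eta_design = expensive_model(theta_design).ravel()
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(theta_design, eta_design)

y_obs, sigma_eps = 1.2, 0.05            # physical measurement and its error (assumed)

def log_post(theta):
    if not 0.0 <= theta <= 2.0:         # uniform prior on [0, 2]
        return -np.inf
    mu = emulator.predict(np.array([[theta]]))[0]
    return -0.5 * ((y_obs - mu) / sigma_eps) ** 2

# Random-walk Metropolis on the emulated posterior.
samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print(f"Posterior mean of theta: {np.mean(samples[1000:]):.3f}")
```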

  8. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
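
    For concreteness, the sketch below shows how the parameters of a mean-reverting Ornstein-Uhlenbeck process can be recovered by maximum likelihood from a discretely sampled path, using the exact AR(1) transition. A simulated path replaces the oil-market model output used in the paper, and the parameter values and sampling interval are illustrative.

```python
# MLE for a mean-reverting Ornstein-Uhlenbeck process via its exact AR(1) form:
#   X[t+dt] = X[t]*exp(-kappa*dt) + mu*(1 - exp(-kappa*dt)) + noise.
import numpy as np

rng = np.random.default_rng(3)
kappa_true, mu_true, sigma_true, dt = 0.8, 20.0, 3.0, 1.0 / 12.0   # illustrative
n = 600

# Simulate an OU path with the exact transition density.
a = np.exp(-kappa_true * dt)
sd = sigma_true * np.sqrt((1 - a ** 2) / (2 * kappa_true))
x = np.empty(n); x[0] = mu_true
for t in range(1, n):
    x[t] = a * x[t - 1] + mu_true * (1 - a) + sd * rng.standard_normal()

# The MLE reduces to an AR(1) regression of x[1:] on x[:-1].
slope, intercept = np.polyfit(x[:-1], x[1:], 1)
resid_var = np.var(x[1:] - (slope * x[:-1] + intercept), ddof=2)

kappa_hat = -np.log(slope) / dt
mu_hat = intercept / (1 - slope)
sigma_hat = np.sqrt(resid_var * 2 * kappa_hat / (1 - slope ** 2))
print(f"kappa ~ {kappa_hat:.2f}, mu ~ {mu_hat:.1f}, sigma ~ {sigma_hat:.2f}")
```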

  9. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Full Text Available Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions and for individual heterogeneity in capture probabilities in a closed-population capture-recapture model with individual heterogeneity and time variation. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.

  10. Unified approach for estimating the probabilistic design S-N curves of three commonly used fatigue stress-life models

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Wang Jinnuo; Gao Qing

    2001-01-01

    A unified approach, referred to as the general maximum likelihood method, is presented for estimating the probabilistic design S-N curves and their confidence bounds for the three commonly used fatigue stress-life models, namely the three-parameter, Langer and Basquin models. The curves are described by a general form of the mean and standard deviation S-N curves of the logarithm of fatigue life. Unlike existing methods, i.e., the conventional method and the classical maximum likelihood method, the present approach considers the statistical characteristics of the whole test data set. The parameters of the mean curve are first estimated by the least squares method, and the parameters of the standard deviation curve are then evaluated by a mathematical programming method so as to agree with the maximum likelihood principle. The fit of the curves is assessed by the fitted correlation coefficient, the total fitted standard error and the confidence bounds. Application to the virtual stress amplitude-crack initiation life data of a nuclear engineering material, Chinese 1Cr18Ni9Ti stainless steel pipe-weld metal, has indicated the validity of the approach for S-N data where both S and N behave as random variables. Application to the two sets of S-N data for Chinese 45 carbon steel notched specimens (kt = 2.0) has indicated the validity of the present approach for test results obtained from group fatigue tests and from maximum likelihood fatigue tests, respectively. In these applications, it was revealed that in general the fit is best for the three-parameter model, slightly inferior for the Langer relation and poor for the Basquin equation. Relative to the existing methods, the present approach gives a better fit. In addition, the possible non-conservative predictions of the existing methods, which result from the influence of local statistical characteristics of the data, are also overcome by the present approach.

  11. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile-time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of a communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and the usefulness of P3T+.

  12. A holistic approach to age estimation in refugee children.

    Science.gov (United States)

    Sypek, Scott A; Benson, Jill; Spanner, Kate A; Williams, Jan L

    2016-06-01

    Many refugee children arriving in Australia have an inaccurately documented date of birth (DOB). A medical assessment of a child's age is often requested when there is a concern that their documented DOB is incorrect. This study's aim was to assess the accuracy of a holistic age assessment tool (AAT) in estimating the age of refugee children newly settled in Australia. A holistic AAT that combines medical and non-medical approaches was used to estimate the ages of 60 refugee children with a known DOB. The tool used four components to assess age: an oral narrative, developmental assessment, anthropometric measures and pubertal assessment. Assessors were blinded to the true age of the child. Correlation coefficients for the actual and estimated age were calculated for the tool overall and individual components. The correlation coefficient between the actual and estimated age from the AAT was very strong at 0.9802 (boys 0.9748, girls 0.9876). The oral narrative component of the tool performed best (R = 0.9603). Overall, 86.7% of age estimates were within 1 year of the true age. The range of differences was -1.43 to 3.92 years with a standard deviation of 0.77 years (9.24 months). The AAT is a holistic, simple and safe instrument that can be used to estimate age in refugee children with results comparable with radiological methods currently used. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  13. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  14. Estimating the return on investment in disease management programs using a pre-post analysis.

    Science.gov (United States)

    Fetterolf, Donald; Wennberg, David; Devries, Andrea

    2004-01-01

    Disease management programs have become increasingly popular over the past 5-10 years. Recent increases in overall medical costs have precipitated new concerns about the cost-effectiveness of medical management programs that have extended to the program directors for these programs. Initial success of the disease management movement is being challenged on the grounds that reported results have been the result of the application of faulty, if intuitive, methodologies. This paper discusses the use of "pre-post" methodology approaches in the analysis of disease management programs, and areas where application of this approach can result in spurious results and incorrect financial outcome assessments. The paper includes a checklist of these items for use by operational staff working with the programs, and a comprehensive bibliography that addresses many of the issues discussed.

  15. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with
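
    The bootstrap step applied to the single hydrological parameter can be illustrated with the toy sketch below: the calibration sample is resampled with replacement, a one-parameter model is re-fitted to each resample, and the spread of the resulting flood quantiles summarizes the calibration uncertainty. The Gumbel model, its fixed location, and the synthetic record are stand-ins for SHYREG and the French data, not part of the paper.

```python
# Bootstrap estimate of calibration uncertainty for a one-parameter flood model.
import numpy as np

rng = np.random.default_rng(4)
obs_floods = rng.gumbel(loc=100.0, scale=30.0, size=40)   # 40 years of annual maxima

def fit_scale(sample):
    """Toy calibration: Gumbel scale parameter by the method of moments."""
    return np.std(sample) * np.sqrt(6) / np.pi

def quantile(scale, return_period=100.0, loc=100.0):
    """Gumbel flood quantile for the given return period."""
    p = 1.0 - 1.0 / return_period
    return loc - scale * np.log(-np.log(p))

q100 = []
for _ in range(2000):
    resample = rng.choice(obs_floods, size=obs_floods.size, replace=True)
    q100.append(quantile(fit_scale(resample)))

lo, hi = np.percentile(q100, [5, 95])
print(f"100-year flood: {quantile(fit_scale(obs_floods)):.0f} (90% CI {lo:.0f}-{hi:.0f})")
```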

  16. Linear decomposition approach for a class of nonconvex programming problems.

    Science.gov (United States)

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
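
    A much-simplified picture of the grid-decomposition idea is sketched below: the feasible box is split into cells, the nonconvex objective is linearized around each cell centre, and one small linear program is solved per cell, keeping the best cell solution found. This is a generic illustration of solving many LP subproblems over a grid, not the exact transformation proved equivalent in the paper; the bilinear objective and grid resolution are arbitrary choices.

```python
# Grid decomposition into LP subproblems for a toy nonconvex (bilinear) objective.
import numpy as np
from scipy.optimize import linprog

f = lambda x: x[0] * x[1] - 0.5 * x[0]          # nonconvex objective on [0, 2]^2
grad = lambda x: np.array([x[1] - 0.5, x[0]])   # its gradient

edges = np.linspace(0.0, 2.0, 9)                # 8x8 grid of cells
best_val, best_x = np.inf, None
for i in range(len(edges) - 1):
    for j in range(len(edges) - 1):
        lo = np.array([edges[i], edges[j]])
        hi = np.array([edges[i + 1], edges[j + 1]])
        c = grad((lo + hi) / 2.0)               # linearize at the cell centre
        res = linprog(c, bounds=list(zip(lo, hi)), method="highs")
        x = res.x
        if f(x) < best_val:
            best_val, best_x = f(x), x

print(f"Near-optimal point {best_x}, objective {best_val:.3f}")   # about (2, 0), -1.0
```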

  17. Savings estimates for the ENERGY STAR (registered trademark) voluntary labeling program: 2001 status report

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; Mahajan, Akshay; Koomey, Jonathan G.

    2002-02-15

    ENERGY STAR(Registered Trademark) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2000, what we expect in 2001, and provide savings forecasts for two market penetration scenarios for the period 2001 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  18. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    Science.gov (United States)

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.

  19. An approach for evaluating the market effects of energy efficiency programs

    International Nuclear Information System (INIS)

    Vine, E.; Prahl, R.; Meyers, S.; Turiel, I.

    2010-01-01

    This paper presents work currently being carried out in California on evaluating market effects. We first outline an approach for conducting market effect studies that includes the six key steps that were developed in study plans: (1) a scoping study that characterizes a particular market, reviews relevant market effects studies, develops integrated market and program theories, and identifies market indicators; (2) analysis of market evolution, using existing data sources; (3) analysis of market effects, based on sales data and interviews with key market actors; (4) analysis of attribution; (5) estimation of energy savings; and (6) assessment of sustainability (i.e., the extent to which any observed market effects are likely to persist in the absence or reduction of public intervention, and thus have helped to transform the market). We describe the challenges in conducting this type of analysis: (1) selecting a comparison state(s) to California for a baseline, (2) the availability and quality of data (limiting analyses), (3) inconsistent patterns of results, and (4) conducting market effects evaluations at one point in time, without the benefit of years of accumulated research findings; we then provide some suggestions for future research on the evaluation of market effects. With the promulgation of market transformation programs, the evaluation of market effects will be critical. We envision that these market effects studies will help lay the foundation for the refinement of techniques for measuring the impacts of programs that seek to transform markets for energy efficiency products and practices.

  20. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  1. Development of a package program for estimating ground level concentrations of radioactive gases

    International Nuclear Information System (INIS)

    Nilkamhang, W.

    1986-01-01

    A package program for estimating the ground level concentration of radioactive gas from an elevated release was developed for use on an IBM PC microcomputer. The main program, GAMMA PLUME NT10, is based on the well known VALLEY MODEL, a Fortran computer code intended for mainframe computers. Two further options were added, namely, calculation of the radioactive gas ground level concentration in Ci/m3 and of the dose equivalent rate in mrem/hr. In addition, a menu program and an editor program were developed to make the package easier to use, since options can be readily selected and the input data easily modified through the keyboard. The accuracy and reliability of the program are almost identical to the mainframe version. The ground level concentration of radioactive radon gas due to ore processing in the nuclear chemistry laboratory of the Department of Nuclear Technology was estimated. In processing radioactive ore at a rate of 2 kg/day, about 35 pCi/s of radioactive gas is released from a 14 m stack. When meteorological data for Don Muang (averaged over the 5 years 1978-1982) were used, the maximum ground level concentration and the dose equivalent rate were found to be 0.00094 pCi/m3 and 5.0 x 10^-10 mrem/hr, respectively. The processing time required for the above problem was about 7 minutes for any case of source on an IBM PC, which is acceptable for a computer of this class.
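
    The kind of calculation such a program performs can be sketched with the standard Gaussian plume expression for the ground-level, plume-centreline concentration from an elevated continuous release, chi = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2)). The dispersion-coefficient power laws below are generic neutral-stability fits and the wind speed is assumed; they are not the VALLEY MODEL parameterization or the Don Muang meteorology used in the work.

```python
# Ground-level, centreline Gaussian plume concentration from an elevated release.
import numpy as np

def ground_level_concentration(q_ci_per_s, u_m_s, stack_height_m, x_m):
    """Centreline ground-level concentration (Ci/m^3) at downwind distance x."""
    sigma_y = 0.08 * x_m / np.sqrt(1.0 + 0.0001 * x_m)   # assumed power-law fits
    sigma_z = 0.06 * x_m / np.sqrt(1.0 + 0.0015 * x_m)   # (roughly neutral stability)
    return (q_ci_per_s / (np.pi * u_m_s * sigma_y * sigma_z)
            * np.exp(-stack_height_m ** 2 / (2.0 * sigma_z ** 2)))

# Example: 35 pCi/s released from a 14 m stack into a 2 m/s wind.
x = np.linspace(50.0, 2000.0, 400)
chi = ground_level_concentration(35e-12, 2.0, 14.0, x)
print(f"Maximum ground-level concentration: {chi.max():.2e} Ci/m^3 "
      f"at {x[np.argmax(chi)]:.0f} m downwind")
```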

  2. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    Science.gov (United States)

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.

  3. Self-regulated learning in higher education: strategies adopted by computer programming students when supported by the SimProgramming approach

    Directory of Open Access Journals (Sweden)

    Daniela Pedrosa

    Full Text Available Abstract The goal of the SimProgramming approach is to help students overcome their learning difficulties in the transition from entry-level to advanced computer programming, developing an appropriate set of learning strategies. We implemented it at the University of Trás-os-Montes e Alto Douro (Portugal), in two courses (PM3 and PM4) of the bachelor programmes in Informatics Engineering and ICT. We conducted semi-structured interviews with students (n=38) at the end of the courses, to identify the students’ strategies for self-regulation of learning in the assignment. We found that students changed some of their strategies from one course edition to the following one and that changes are related to the SimProgramming approach. We believe that changes to the educational approach were appropriate to support the assignment goals. We recommend applying the SimProgramming approach in other educational contexts, to improve educational practices by including techniques to help students in their learning.

  4. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    Directory of Open Access Journals (Sweden)

    Vjekoslav Klarić

    2011-03-01

    Full Text Available This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the nonobserved economy. After a glance at the theory behind it, the MIMIC model is then applied to the Croatian economy. Considering the described characteristics of different methods, a previous estimate of the size of the non-observed economy in Croatia is chosen to provide benchmark values for the MIMIC model. Using those, the estimates of the size of non-observed economy in Croatia during the period 1998-2009 are obtained.

  5. Cost estimation: An expert-opinion approach. [cost analysis of research projects using the Delphi method (forecasting)]

    Science.gov (United States)

    Buffalano, C.; Fogleman, S.; Gielecki, M.

    1976-01-01

    A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi technique allows expert opinion to be integrated into the cost-estimating process in a consistent and rigorous fashion. The approach can also signal potential cost-problem areas, a result that is useful in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.
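
    As a hedged sketch of how elicited judgments might feed a Monte Carlo cost aggregation (the work packages, dollar figures, and triangular distributions below are illustrative assumptions, not the original methodology):

    import random

    # Hypothetical expert-elicited (low, most likely, high) costs per work package, in $k
    work_packages = {
        "design":      (120, 150, 220),
        "fabrication": (300, 380, 520),
        "testing":     (80,  100, 160),
    }

    def simulate_total_cost(n_trials=50_000):
        totals = []
        for _ in range(n_trials):
            # random.triangular takes (low, high, mode)
            total = sum(random.triangular(low, high, mode)
                        for (low, mode, high) in work_packages.values())
            totals.append(total)
        totals.sort()
        return totals

    totals = simulate_total_cost()
    p50 = totals[len(totals) // 2]
    p90 = totals[int(0.9 * len(totals))]
    print(f"median total cost ~ {p50:.0f} $k; 90th percentile ~ {p90:.0f} $k (contingency signal)")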

  6. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.

  7. 2005 Status Report: Savings Estimates for the ENERGY STAR(R) Voluntary Labeling Program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; Sanchez, Marla

    2006-03-07

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), Energy Star labels exist for more than forty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2004, what we expect in 2005, and provide savings forecasts for two market penetration scenarios for the periods 2005 to 2010 and 2005 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  8. 2007 Status Report: Savings Estimates for the ENERGY STAR(R) Voluntary Labeling Program

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Marla; Webber, Carrie A.; Brown, Richard E.; Homan, Gregory K.

    2007-03-23

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2006, what we expect in 2007, and provide savings forecasts for two market penetration scenarios for the periods 2007 to 2015 and 2007 to 2025. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  9. Peak flood estimation using gene expression programming

    Science.gov (United States)

    Zorn, Conrad R.; Shamseldin, Asaad Y.

    2015-12-01

    As a case study for the Auckland Region of New Zealand, this paper investigates the potential use of gene-expression programming (GEP) in predicting specific return period events in comparison to the established and widely used Regional Flood Estimation (RFE) method. Initially calibrated to 14 gauged sites, the GEP-derived model was further validated against 10- and 100-year flood events with relative errors of 29% and 18%, respectively. This compares to the RFE method, which yields 48% and 44% errors for the same flood events. While the effectiveness of GEP in predicting specific return period events is apparent, it is argued that the derived equations should be used in conjunction with existing methodologies rather than as a replacement.

  10. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

    1989-01-01

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions.
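
    The parameter-adjustment idea can be sketched with a derivative-free fit in which positivity of the diffusion coefficient is enforced by a log parameterisation. The code below is an assumption-laden stand-in: it uses SciPy's Nelder-Mead simplex rather than the Box constrained simplex, the textbook small-time release expression for a semi-infinite medium rather than NEWBOX's Laplace-transform solutions, and synthetic leach-test data.

    import numpy as np
    from scipy.optimize import minimize

    def fraction_released(t, D, L):
        # Small-time fractional release from a semi-infinite medium: f = 2*sqrt(D*t/pi)/L
        return 2.0 * np.sqrt(D * t / np.pi) / L

    # Hypothetical leach-test data: times (days) and measured cumulative fractions released
    t_obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    f_obs = np.array([0.011, 0.016, 0.022, 0.032, 0.046])
    L = 0.05  # characteristic specimen dimension (m), assumed known

    def sse(log10_D):
        D = 10.0 ** log10_D[0]          # positivity enforced by the log parameterisation
        return np.sum((fraction_released(t_obs, D, L) - f_obs) ** 2)

    res = minimize(sse, x0=[-7.0], method="Nelder-Mead")
    print(f"fitted D = {10 ** res.x[0]:.2e} m^2/day, SSE = {res.fun:.2e}")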

  11. Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming

    KAUST Repository

    Claudel, Christian G.; Chamoin, Timothee; Bayen, Alexandre M.

    2014-01-01

    This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from an HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and are accessible to regenerate the results. © 2013 IEEE.
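
    A generic flavour of such a formulation is an L1 data-reconciliation linear program: minimize the total deviation from measurements subject to linear inequality constraints coming from the model. The sketch below uses scipy.optimize.linprog with a toy monotonicity constraint standing in for the actual semi-analytic Hamilton-Jacobi inequalities, and synthetic measurements.

    import numpy as np
    from scipy.optimize import linprog

    n = 4
    meas = np.array([1.0, 0.8, 0.85, 0.3])          # hypothetical, slightly inconsistent measurements

    # Toy model constraints A_model @ x <= 0: the state must be non-increasing
    A_model = np.array([[-1.0,  1.0,  0.0,  0.0],
                        [ 0.0, -1.0,  1.0,  0.0],
                        [ 0.0,  0.0, -1.0,  1.0]])
    b_model = np.zeros(3)

    # L1 reconciliation: minimize sum(e) subject to |x - meas| <= e.
    # Decision vector z = [x, e]; only the e-part carries cost.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.vstack([
        np.hstack([A_model, np.zeros((3, n))]),     # model constraints on x
        np.hstack([ np.eye(n), -np.eye(n)]),        #  x - e <= meas
        np.hstack([-np.eye(n), -np.eye(n)]),        # -x - e <= -meas
    ])
    b_ub = np.concatenate([b_model, meas, -meas])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    print("reconciled state:", np.round(res.x[:n], 3))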

  12. A Novel Rules Based Approach for Estimating Software Birthmark

    Science.gov (United States)

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    A software birthmark is a unique characteristic of a program that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing software without the permission granted in the license agreement. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, credibility and resilience. For this purpose, soft-computing concepts such as probabilistic and fuzzy computing have been taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule-based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  13. Novel approaches to the estimation of intake and bioavailability of radiocaesium in ruminants grazing forested areas

    International Nuclear Information System (INIS)

    Mayes, R.W.; Lamb, C.S.; Beresford, N.A.

    1994-01-01

    It is difficult to measure transfer of radiocaesium to the tissues of forest ruminants because they can potentially ingest a wide range of plant types. Measurements on undomesticated forest ruminants incur further difficulties. Existing techniques of estimating radiocaesium intake are imprecise when applied to forest systems. New approaches to measure this parameter are discussed. Two methods of intake estimation are described and evaluated. In the first method, radiocaesium intake is estimated from the radiocaesium activity concentrations of plants, combined with estimates of dry-matter (DM) intake and plant species composition of the diet, using plant and orally-dosed hydrocarbons (n-alkanes) as markers. The second approach estimates the total radiocaesium intake of an animal from the rate of excretion of radiocaesium in the faeces and an assumed value for the apparent absorption coefficient. Estimates of radiocaesium intake, using these approaches, in lactating goats and adult sheep were used to calculate transfer coefficients for milk and muscle; these compared favourably with transfer coefficients previously obtained under controlled experimental conditions. Potential variations in bioavailability of dietary radiocaesium sources to forest ruminants have rarely been considered. Approaches that can be used to describe bioavailability, including the true absorption coefficient and in vitro extractability, are outlined

  14. Universal Approach to Estimate Perfluorocarbons Emissions During Individual High-Voltage Anode Effect for Prebaked Cell Technologies

    Science.gov (United States)

    Dion, Lukas; Gaboury, Simon; Picard, Frédéric; Kiss, Laszlo I.; Poncsak, Sandor; Morais, Nadia

    2018-04-01

    Recent investigations on aluminum electrolysis cells demonstrated limitations of the commonly used tier-3 slope methodology for estimating perfluorocarbon (PFC) emissions from high-voltage anode effects (HVAEs). These limitations are greater for smelters with a reduced HVAE frequency. A novel approach is proposed to estimate the specific emissions resulting from individual HVAEs using a tier-2 model, instead of estimating monthly emissions for pot lines with the slope methodology. This approach considers the nonlinear behavior of PFC emissions as a function of the polarized anode effect duration and also integrates the change in behavior attributed to cell productivity. Validation was performed by comparing the new approach and the slope methodology with measurement campaigns from different smelters. The results demonstrate good agreement between measured and estimated emissions and more accurately reflect individual HVAE dynamics over time. Finally, the possible impact of this approach on the aluminum industry is discussed.

  15. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weight for the fusion. • The robustness of the proposed approach is verified by different batteries. • The results show that the proposed method can promote global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant property of batteries, the existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide the accurate SOC for the entire discharging period. This paper aims to present a novel SOC estimation approach based on a multiple ECMs fusion method for improving the practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd order RC model, are selected to describe the dynamic voltage of lithium-ion batteries and the genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models and the Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models. Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of the SOC estimation against uncertain battery materials and inaccurate initial states.
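
    The fusion step alone can be illustrated compactly. The sketch below assumes Gaussian likelihoods for each model's voltage innovation and applies a Bayes update to the model weights; it does not reproduce the H-infinity filters that would supply the per-model SOC estimates and innovations, and all numbers are placeholders.

    import numpy as np

    def fuse_soc(soc_estimates, innovations, innovation_vars, prior):
        """Probability-weighted fusion of per-model SOC estimates (one time step)."""
        lik = (np.exp(-0.5 * np.asarray(innovations) ** 2 / np.asarray(innovation_vars))
               / np.sqrt(2.0 * np.pi * np.asarray(innovation_vars)))
        posterior = lik * np.asarray(prior)
        posterior /= posterior.sum()                 # Bayes update of the model weights
        return float(np.dot(posterior, soc_estimates)), posterior

    soc, weights = fuse_soc(
        soc_estimates=[0.62, 0.65, 0.60],    # Thevenin, double-polarization, 3rd-order RC (hypothetical)
        innovations=[0.004, 0.012, 0.020],   # voltage prediction errors (V)
        innovation_vars=[1e-4, 1e-4, 1e-4],
        prior=[1/3, 1/3, 1/3],
    )
    print(f"fused SOC = {soc:.3f}, model weights = {np.round(weights, 3)}")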

  16. Appendix B: Hydrogen, Fuel Cells, and Infrastructure Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  17. A simple approach to estimate soil organic carbon and soil CO2 emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (soil organic carbon) and soil CO2 (carbon dioxide) emission are among the indicators of carbon sequestration and hence of global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (carbon) sequestration. However, access to the latest technologies for conducting such estimates has always been challenging in developing countries. This paper presents a simple and comprehensive approach for estimating SOC and soil CO2 emission from arable and forest soils. The approach includes various protocols that can be followed in laboratories of research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of the widely tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO2 emission on a short- and long-term basis for global climate change assessment studies. (author)

  18. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied for the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  19. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  20. Seismic Safety Margins Research Program (Phase I). Project VII. Systems analysis specification of computational approach

    International Nuclear Information System (INIS)

    Wall, I.B.; Kaul, M.K.; Post, R.I.; Tagart, S.W. Jr.; Vinson, T.J.

    1979-02-01

    An initial specification is presented of a computational approach for a probabilistic risk assessment model for use in the Seismic Safety Margins Research Program. This model encompasses the whole seismic calculational chain, from seismic input through soil-structure interaction and transfer functions to the probability of component failure, integrating these failures into a system model to estimate the probability of a release of radioactive material to the environment. It is intended that the primary use of this model will be in sensitivity studies to assess the potential conservatism of different modeling elements in the chain and to provide guidance on priorities for research in the seismic design of nuclear power plants.

  1. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the true IDH1 status obtained from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 states. • Area under the ROC curve of the proposed estimation method reached 0.86.
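
    The evaluation loop reported above (feature selection plus classification under leave-one-out cross-validation) can be sketched generically with scikit-learn. The pipeline below is an assumed stand-in: synthetic data replace the MRI features, univariate selection replaces the genetic algorithm, and a linear SVM replaces whatever classifier the authors used.

    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(110, 671))       # 110 patients x 671 radiomic features (synthetic)
    y = rng.integers(0, 2, size=110)      # IDH1 mutant (1) vs wild type (0), synthetic labels

    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=110),   # stand-in for the genetic-algorithm selection
                        SVC(kernel="linear"))

    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf.fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

    print(f"LOOCV accuracy = {correct / len(y):.2f}")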

  2. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  3. Unsteady force estimation using a Lagrangian drift-volume approach

    Science.gov (United States)

    McPhaden, Cameron J.; Rival, David E.

    2018-04-01

    A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.

  4. Estimation of stature from hand impression: a nonconventional approach.

    Science.gov (United States)

    Ahemad, Nasir; Purkait, Ruma

    2011-05-01

    Stature is used for constructing a biological profile that assists with the identification of an individual. So far, little attention has been paid to the fact that stature can be estimated from hand impressions left at a scene of crime. The present study, based on practical observations, adopted a new methodology of measuring hand length from the depressed area between the hypothenar and thenar regions on the proximal surface of the palm. Stature and bilateral hand impressions were obtained from 503 men of central India. Seventeen dimensions of the hand were measured on the impression. The derived linear regression equations showed that hand length, followed by palm length, gives the best estimates of stature. Testing the practical utility of the suggested method on latent prints of 137 subjects, a statistically insignificant difference was obtained when known stature and stature estimated from the latent prints were compared. The suggested approach points to a strong possibility of its use in crime scene investigation, provided that validation studies in real-life scenarios are performed. © 2011 American Academy of Forensic Sciences.
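
    The regression step itself is elementary; the sketch below fits and applies a single stature-on-hand-length equation with NumPy. The coefficients come from made-up training data, not from the published equations for the central Indian sample.

    import numpy as np

    # Hypothetical training data: hand length on the impression (cm) vs stature (cm)
    hand_length = np.array([17.8, 18.5, 19.1, 19.6, 20.2, 20.9, 21.4])
    stature     = np.array([162.0, 165.5, 168.0, 171.0, 173.5, 177.0, 180.0])

    slope, intercept = np.polyfit(hand_length, stature, deg=1)
    print(f"stature ~ {intercept:.1f} + {slope:.2f} * hand_length")

    # Apply the equation to a hypothetical latent impression measuring 19.8 cm
    print(f"estimated stature: {intercept + slope * 19.8:.1f} cm")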

  5. Piecewise Loglinear Estimation of Efficient Production Surfaces

    OpenAIRE

    Rajiv D. Banker; Ajay Maindiratta

    1986-01-01

    Linear programming formulations for piecewise loglinear estimation of efficient production surfaces are derived from a set of basic properties postulated for the underlying production possibility sets. Unlike the piecewise linear model of Banker, Charnes, and Cooper (Banker R. D., A. Charnes, W. W. Cooper. 1984. Models for the estimation of technical and scale inefficiencies in data envelopment analysis. Management Sci. 30 (September) 1078--1092.), this approach permits the identification of ...

  6. Evaluating and Estimating the WCET Criticality Metric

    DEFF Research Database (Denmark)

    Jordan, Alexander

    2014-01-01

    a programmer (or compiler) from targeting optimizations the right way. A possible resort is to use a metric that targets WCET and which can be efficiently computed for all code parts of a program. Similar to dynamic profiling techniques, which execute code with input that is typically expected for the application, based on WCET analysis we can indicate how critical a code fragment is in relation to the worst-case bound. Computing such a metric on top of static analysis incurs a certain overhead though, which increases with the complexity of the underlying WCET analysis. We present our approach to estimate the Criticality metric by relaxing the precision of WCET analysis. Through this, we can reduce analysis time by orders of magnitude, while only introducing minor error. To evaluate our estimation approach and share our garnered experience using the metric, we evaluate real-time programs, which...

  7. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    • An integrated approach to estimate the storage reliability is proposed. • A non-parametric measure to estimate the number of failures and the reliability at each testing time is presented. • The E-Bayesian method to estimate the failure probability is introduced. • The possible initial failures in storage are introduced. • The non-parametric estimates of failure numbers can be used in the parametric models.

  8. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    Science.gov (United States)

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed-up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Training Program Handbook: A systematic approach to training

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    This DOE handbook describes a systematic method for establishing and maintaining training programs that meet the requirements and expectations of DOE Orders 5480.18B and 5480.20. The systematic approach to training includes 5 phases: Analysis, design, development, implementation, and evaluation.

  10. Gene expression programming approach for the estimation of moisture ratio in herbal plants drying with vacuum heat pump dryer

    Science.gov (United States)

    Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan

    2017-07-01

    The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of the herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures such as the mean absolute percentage error, root-mean-squared error and R-square are used to quantify the difference between the values predicted by the GEP model and those observed in the experimental study. It was found that the results of the GEP model and the experimental study agree moderately well. The results show that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
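
    The comparison statistics named above are straightforward to compute; a small helper is sketched below with placeholder moisture-ratio values rather than the experimental drying data.

    import numpy as np

    def fit_statistics(observed, predicted):
        observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
        resid = observed - predicted
        mape = 100.0 * np.mean(np.abs(resid / observed))       # mean absolute percentage error
        rmse = np.sqrt(np.mean(resid ** 2))                    # root-mean-squared error
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((observed - observed.mean()) ** 2)
        return mape, rmse, r2

    # Hypothetical moisture-ratio values over one drying run
    mr_exp = [1.00, 0.82, 0.66, 0.52, 0.40, 0.31]
    mr_gep = [0.98, 0.84, 0.64, 0.53, 0.42, 0.30]
    mape, rmse, r2 = fit_statistics(mr_exp, mr_gep)
    print(f"MAPE = {mape:.2f}%, RMSE = {rmse:.4f}, R^2 = {r2:.4f}")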

  11. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    AMAD HAMZA

    2016-10-01

    Full Text Available In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation a very well-known method is used, MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper comes from the fact that, when the RT of reverberant speech falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of the results of experiments carried out on numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods.

  12. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    International Nuclear Information System (INIS)

    Hamza, A.; Jan, T.; Ali, A.

    2016-01-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation a very well-known method is used, MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper comes from the fact that, when the RT of reverberant speech falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of the results of experiments carried out on numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods. (author)
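
    The core statistical step described in the two copies of this record, fitting a Rayleigh distribution to decay-rate estimates by maximum likelihood, has a closed form and is sketched below. The input decay rates are invented, and the final mapping to an RT60 value is only schematic; the papers' own decay model and spotting procedure are not reproduced.

    import numpy as np

    def rayleigh_mle(samples):
        """Closed-form maximum-likelihood estimate of the Rayleigh scale parameter."""
        samples = np.asarray(samples, float)
        return np.sqrt(np.sum(samples ** 2) / (2.0 * samples.size))

    # Hypothetical per-frame decay-rate estimates (dB/s) from detected free-decay regions
    decay_rates = np.array([48.0, 55.0, 61.0, 50.0, 66.0, 58.0, 53.0])
    sigma = rayleigh_mle(decay_rates)
    mean_decay = sigma * np.sqrt(np.pi / 2.0)     # mean of a Rayleigh(sigma) variable
    rt60 = 60.0 / mean_decay                      # schematic: time to decay by 60 dB
    print(f"sigma = {sigma:.1f} dB/s, implied RT60 ~ {rt60:.2f} s")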

  13. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  14. Cost estimation model for advanced planetary programs, fourth edition

    Science.gov (United States)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve upon the confidence in the model's capabilities, with an expected accuracy of 20%. The model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.

  15. An approach to the estimation of the value of agricultural residues used as biofuels

    International Nuclear Information System (INIS)

    Kumar, A.; Purohit, P.; Rana, S.; Kandpal, T.C.

    2002-01-01

    A simple demand side approach for estimating the monetary value of agricultural residues used as biofuels is proposed. Some of the important issues involved in the use of biomass feedstocks in coal-fired boilers are briefly discussed along with their implications for the maximum acceptable price estimates for the agricultural residues. Results of some typical calculations are analysed along with the estimates obtained on the basis of a supply side approach (based on production cost) developed earlier. The prevailing market prices of some agricultural residues used as feedstocks for briquetting are also indicated. The results obtained can be used as preliminary indicators for identifying niche areas for immediate/short-term utilization of agriculture residues in boilers for process heating and power generation. (author)

  16. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  17. Savings estimates for the ENERGY STAR (registered trademark) voluntary labeling program: 2001 status report; TOPICAL

    International Nuclear Information System (INIS)

    Webber, Carrie A.; Brown, Richard E.; Mahajan, Akshay; Koomey, Jonathan G.

    2002-01-01

    ENERGY STAR(Registered Trademark) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2000, what we expect in 2001, and provide savings forecasts for two market penetration scenarios for the period 2001 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period

  18. Interactive Approach for Multi-Level Multi-Objective Fractional Programming Problems with Fuzzy Parameters

    Directory of Open Access Journals (Sweden)

    M.S. Osman

    2018-03-01

    Full Text Available In this paper, an interactive approach for solving multi-level multi-objective fractional programming (ML-MOFP) problems with fuzzy parameters is presented. The proposed interactive approach extends the work of Shi and Xia (1997). In the first phase, the numerical crisp model of the ML-MOFP problem is developed at a given confidence level without changing the fuzzy gist of the problem. Then, the linear model for the ML-MOFP problem is formulated. In the second phase, the interactive approach simplifies the linear multi-level multi-objective model by converting it into separate multi-objective programming problems. Each separate multi-objective programming problem of the linear model is then solved by the ∊-constraint method and the concept of satisfactoriness. Finally, illustrative examples and comparisons with previous approaches are used to demonstrate the feasibility of the proposed approach.

  19. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  20. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
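
    The alternating structure described in both copies of this record (an outer update of the Doppler coefficient and an inner closed-form update of the channel gain) can be sketched on synthetic data. The grid-refinement outer loop below is an assumed stand-in for the authors' iteration, and the single-tone training waveform is purely illustrative.

    import numpy as np

    fs = 8000.0
    t = np.arange(0, 0.1, 1.0 / fs)
    fc = 1000.0                                           # training-tone frequency (Hz), assumed

    true_delta, true_gain = 2.0e-4, 0.7                   # Doppler scaling and channel gain
    rx = true_gain * np.cos(2 * np.pi * fc * (1 + true_delta) * t)
    rx += 0.02 * np.random.default_rng(1).normal(size=t.size)

    def fit_gain(delta):
        ref = np.cos(2 * np.pi * fc * (1 + delta) * t)    # training waveform rescaled by delta
        gain = np.dot(rx, ref) / np.dot(ref, ref)         # inner loop: closed-form LS gain
        return gain, np.sum((rx - gain * ref) ** 2)

    lo, hi = -1e-3, 1e-3                                  # outer loop: refine the Doppler coefficient
    for _ in range(4):
        grid = np.linspace(lo, hi, 41)
        best = grid[int(np.argmin([fit_gain(d)[1] for d in grid]))]
        span = (hi - lo) / 10
        lo, hi = best - span, best + span

    gain, _ = fit_gain(best)
    print(f"estimated Doppler coefficient = {best:.2e} (true {true_delta:.2e}), gain = {gain:.3f}")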

  1. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    Science.gov (United States)

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  2. PlayIt: Game Based Learning Approach for Teaching Programming Concepts

    Science.gov (United States)

    Mathrani, Anuradha; Christian, Shelly; Ponder-Sutton, Agate

    2016-01-01

    This study demonstrates a game-based learning (GBL) approach to engage students in learning and enhance their programming skills. The paper gives a detailed narrative of how an educational game was mapped with the curriculum of a prescribed programming course in a computing diploma study programme. Two separate student cohorts were invited to…

  3. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the mismatch between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone, impedance-tube method, and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess sensitivity of the method to nuisance parameters.

  4. An EKF-based approach for estimating leg stiffness during walking.

    Science.gov (United States)

    Ochoa-Diaz, Claudia; Menegaz, Henrique M; Bó, Antônio P L; Borges, Geovany A

    2013-01-01

    Spring-like behavior is an inherent feature of human walking and running. Since leg stiffness k(leg) is a parameter that cannot be directly measured, many techniques have been proposed to estimate it, most of them using force data. This paper addresses the problem using an Extended Kalman Filter (EKF) based on the Spring-Loaded Inverted Pendulum (SLIP) model. The formulation of the filter uses only the Center of Mass (CoM) position and velocity as measurement information; no a priori information about the stiffness value is required. Simulation results show that the EKF-based approach can generate reliable stiffness estimates for walking.

  5. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    Science.gov (United States)

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all of the phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and to develop pharmacological profiling models. We found pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results that can have an impact on the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the pocket-estimation method choice in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Interactive Fuzzy Goal Programming approach in multi-response stratified sample surveys

    Directory of Open Access Journals (Sweden)

    Gupta Neha

    2016-01-01

    Full Text Available In this paper, we applied an Interactive Fuzzy Goal Programming (IFGP) approach with linear, exponential and hyperbolic membership functions, which focuses on maximizing the minimum membership values to determine the preferred compromise solution for the multi-response stratified surveys problem, formulated as a Multi-Objective Non-Linear Programming Problem (MONLPP), and by linearizing the nonlinear objective functions at their individual optimum solution, the problem is approximated to an Integer Linear Programming Problem (ILPP). A numerical example based on real data is given, and comparison with some existing allocations viz. Cochran’s compromise allocation, Chatterjee’s compromise allocation and Khowaja’s compromise allocation is made to demonstrate the utility of the approach.
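
    For illustration, common textbook forms of the three membership functions named above are sketched below; the exact functional forms and parameter choices in the paper may differ, so these are assumptions.

    import numpy as np

    def linear_membership(z, z_min, z_max):
        return np.clip((z - z_min) / (z_max - z_min), 0.0, 1.0)

    def exponential_membership(z, z_min, z_max, s=3.0):
        x = np.clip((z - z_min) / (z_max - z_min), 0.0, 1.0)
        return (1.0 - np.exp(-s * x)) / (1.0 - np.exp(-s))

    def hyperbolic_membership(z, z_min, z_max, alpha=3.0):
        mid = 0.5 * (z_min + z_max)
        return 0.5 * (1.0 + np.tanh(alpha * (z - mid) / (z_max - z_min)))

    z = np.linspace(0.0, 1.0, 5)     # normalized objective values (hypothetical)
    print(linear_membership(z, 0.0, 1.0))
    print(exponential_membership(z, 0.0, 1.0))
    print(hyperbolic_membership(z, 0.0, 1.0))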

  7. Parametric estimation in the wave buoy analogy - an elaborated approach based on energy considerations

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam

    2014-01-01

    the ship’s wave-induced responses based on different statistical inferences including parametric and non-parametric approaches. This paper considers a concept to improve the estimate obtained by the parametric method for sea state estimation. The idea is illustrated by an analysis made on full-scale...

  8. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    David A. Swanson

    2012-07-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.

  9. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    Lucky Tedrow

    2012-01-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.
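
    A schematic version of the computation described in these two records is sketched below: cohort change ratios from two successive age distributions are chained into a survivorship column with radix l0 = 1, and life expectancy at birth is approximated from the implied person-years. The population counts are synthetic, the ten-year interval is handled crudely, and survivorship beyond the last age group is ignored, so this illustrates the identity rather than the authors' exact procedure.

    import numpy as np

    # Synthetic counts by 10-year age group at two censuses taken 10 years apart
    pop_t0 = np.array([100, 98, 96, 94, 90, 84, 72, 52, 25], float)   # ages 0-9, ..., 80-89
    pop_t1 = np.array([ 99, 99, 97, 95, 91, 85, 70, 48, 20], float)   # same cohorts, 10 years on

    # Cohort change (census survival) ratios: cohort aged x at t0 -> aged x+10 at t1
    csr = pop_t1[1:] / pop_t0[:-1]

    # Chain the ratios into survivorship l(x) with radix l0 = 1
    l = np.concatenate([[1.0], np.cumprod(csr)])

    # Crude person-years per 10-year interval and life expectancy at birth
    Lx = 10.0 * (l[:-1] + l[1:]) / 2.0
    e0 = Lx.sum()
    print(f"estimated e0 ~ {e0:.1f} years (schematic; open-ended ages ignored)")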

  10. Aspects of using a best-estimate approach for VVER safety analysis in reactivity initiated accidents

    Energy Technology Data Exchange (ETDEWEB)

    Ovdiienko, Iurii; Bilodid, Yevgen; Ieremenko, Maksym [State Scientific and Technical Centre on Nuclear and Radiation, Safety (SSTC N and RS), Kyiv (Ukraine); Loetsch, Thomas [TUEV SUED Industrie Service GmbH, Energie und Systeme, Muenchen (Germany)

    2016-09-15

    At present, Ukraine faces the problem of small margins to acceptance criteria in connection with the implementation of a conservative approach for safety evaluations. The problem is particularly topical when conducting feasibility analyses of power up-rating for Ukrainian nuclear power plants. Such a situation requires the implementation of a best-estimate approach on the basis of an uncertainty analysis. For some kinds of accidents, such as the loss-of-coolant accident (LOCA), the best-estimate approach is more or less developed and established. However, for reactivity initiated accident (RIA) analysis the application of a best-estimate method can be problematic. A regulatory document in Ukraine defines a nomenclature of neutronics calculations and so-called "generic safety parameters" which should be used as boundary conditions for all VVER-1000 (V-320) reactors in RIA analysis. In this paper the ideas of uncertainty evaluations of generic safety parameters in RIA analysis, in connection with the use of the 3D neutron kinetic code DYN3D and the GRS SUSA approach, are presented.

  11. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of a measurement program devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost of the measurement program. All the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with the design of a measurement program for estimating the modal damping parameters...

  12. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    Full Text Available This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview on a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.

  13. A filtering approach to edge preserving MAP estimation of images.

    Science.gov (United States)

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining the segmentation and refining it are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and Inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.

  14. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)

  15. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from the UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  16. H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    Full Text Available In the communications literature, a number of different algorithms have been proposed for channel estimation problems in which the statistics of the channel noise and observation noise are exactly known. In practical systems, however, the channel parameters are often estimated using training sequences, which makes the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by ambient noise but also by multiple-access interference, so the statistics of the observation noise are also difficult to obtain. In this paper, we investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA) communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with innovation analysis theory. This method gives a necessary and sufficient condition for the existence of an H∞ channel estimator.

  17. Program for shaping neutron microconstants for calculations by means of the Monte-Carlo method on the base of estimated data files (NEDAM)

    International Nuclear Information System (INIS)

    Zakharov, L.N.; Markovskij, D.V.; Frank-Kamenetskij, A.D.; Shatalov, G.E.

    1978-01-01

    The program shapes neutron microconstants for calculations by the Monte Carlo method and is oriented toward the detailed treatment of processes in the fast-neutron region. The initial information is files of evaluated data in the UKNDL format. The method combines a group approach to representing the process probabilities and the anisotropy of elastic scattering with an individual description of the secondary neutron spectra of non-elastic processes. The NEDAM program is written in FORTRAN for the BESM-6 computer and has the following characteristics: the initial evaluated-data file length is 20000 words, the multigroup constant file length is 8000 words, and the MARK array length is 1000 words. The calculation time for a single variant is 1-2 min.

  18. A Mobile Device Based Serious Gaming Approach for Teaching and Learning Java Programming

    Directory of Open Access Journals (Sweden)

    Tobias Jordine

    2015-01-01

    Full Text Available Most first-year computer science students find that learning object-oriented programming is hard. Serious games have been used as one approach to address this problem, but most of them cannot be played on mobile devices. This does not suit the era of mobile computing, which aims to allow students to learn programming skills anytime and anywhere. To enhance mobile teaching and learning, a research project started over a year ago that aims to create a mobile-device-based serious gaming approach, along with a serious game, for enhancing the mobile teaching and learning of Java programming. So far the project has completed a literature review to understand existing work and identify problems in this area, conducted a survey to elicit students’ requirements for a mobile gaming approach, and established a mobile-device-based serious gaming approach with a developed prototype of the game. This paper introduces the project in detail and, in particular, presents and discusses its current results. It is expected that the presented project will help bring more efficient approaches with new mobile games into teaching object-oriented programming and enhance students’ learning experiences.

  19. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    Science.gov (United States)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  20. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    International Nuclear Information System (INIS)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji

    2010-01-01

    This paper formulates and estimates petroleum products demand functions in Nigeria, at both the aggregate and product level, for the period 1977 to 2006 using a multivariate cointegration approach. The estimated short- and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)

  1. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    Energy Technology Data Exchange (ETDEWEB)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji [Department of Economics, University of Ibadan, Ibadan (Nigeria)

    2010-01-15

    This paper formulates and estimates petroleum products demand functions in Nigeria, at both the aggregate and product level, for the period 1977 to 2006 using a multivariate cointegration approach. The estimated short- and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)
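
    A hedged sketch of the kind of long-run elasticity estimation the abstract describes, using an Engle–Granger style cointegrating regression in Python with statsmodels; the series below are synthetic placeholders, not the Nigerian data, and the specification is simplified to one income and one price term.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

# Illustrative log-transformed annual series (stand-ins for 1977-2006 data)
rng = np.random.default_rng(0)
log_gdp = np.cumsum(rng.normal(0.03, 0.02, 30))
log_price = np.cumsum(rng.normal(0.05, 0.05, 30))
log_demand = 0.6 * log_gdp - 0.3 * log_price + rng.normal(0, 0.02, 30)

# Long-run (cointegrating) regression: slopes are income and price elasticities
X = sm.add_constant(np.column_stack([log_gdp, log_price]))
long_run = sm.OLS(log_demand, X).fit()
print("income elasticity:", round(long_run.params[1], 3))
print("price elasticity:", round(long_run.params[2], 3))

# Engle-Granger cointegration test between demand and income
t_stat, p_value, _ = coint(log_demand, log_gdp)
print("cointegration p-value:", round(p_value, 3))
```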

  2. FRACTURE MECHANICS APPROACH TO ESTIMATE FATIGUE LIVES OF WELDED LAP-SHEAR SPECIMENS

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.; Michigan, J.

    2014-04-25

    A full range of stress intensity factor solutions for a kinked crack is developed as a function of weld width and sheet thickness. When used with the associated main-crack solutions (global stress intensity factors) in terms of the applied load and specimen geometry, fatigue lives can be estimated for laser-welded lap-shear specimens. The estimates are in good agreement with the experimental data. A classical solution for an infinitesimal kink is also employed in the approach; however, the resulting life predictions tend to overestimate the actual fatigue lives. Traditional life estimates based on the structural stress, along with the experimental stress-fatigue life (S-N) data, are also provided. In this case, the estimates only agree with the experimental data under higher load conditions.

  3. Savings estimates for the Energy Star(registered trademark) voluntary labeling program

    International Nuclear Information System (INIS)

    Webber, Carrie A.; Brown, Richard E.; Koomey, Jonathan G.

    2000-01-01

    ENERGY STAR® is a voluntary labeling program designed to identify and promote energy-efficient products. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than twenty products, spanning office equipment, residential heating and cooling equipment, new homes, commercial and residential lighting, home electronics, and major appliances. We present estimates of the energy, dollar and carbon savings already achieved by the program and provide savings forecasts for several market penetration scenarios for the period 2001 to 2010. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period. Finally, we assess the sensitivity of our target penetration case forecasts to greater or lesser marketing success by EPA and DOE, lower-than-expected future energy prices, and higher or lower rates of carbon emissions by electricity generators.
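
    A toy sketch of how labeled-product savings can be rolled up for one product category under a market-penetration scenario; every parameter and number below is an illustrative placeholder, not an input from the study.

```python
def annual_savings(units_sold, penetration, base_kwh, efficient_kwh,
                   price_per_kwh=0.08, kg_co2_per_kwh=0.6):
    """Energy, dollar and carbon savings for one product category and year.

    All inputs are illustrative placeholders, not figures from the study.
    """
    labeled_units = units_sold * penetration
    kwh_saved = labeled_units * (base_kwh - efficient_kwh)
    return {
        "kWh": kwh_saved,
        "dollars": kwh_saved * price_per_kwh,
        "kg_CO2": kwh_saved * kg_co2_per_kwh,
    }

# Example: 2 million units/year, 40% labeled-product penetration
print(annual_savings(2_000_000, 0.40, base_kwh=120, efficient_kwh=75))
```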

  4. A Nonlinear Programming and Artificial Neural Network Approach for Optimizing the Performance of a Job Dispatching Rule in a Wafer Fabrication Factory

    Directory of Open Access Journals (Sweden)

    Toly Chen

    2012-01-01

    Full Text Available A nonlinear programming and artificial neural network approach is presented in this study to optimize the performance of a job dispatching rule in a wafer fabrication factory. The proposed methodology fuses two existing rules and constructs a nonlinear programming model to choose the best values of parameters in the two rules by dynamically maximizing the standard deviation of the slack, which has been shown to benefit scheduling performance by several studies. In addition, a more effective approach is also applied to estimate the remaining cycle time of a job, which is empirically shown to be conducive to the scheduling performance. The efficacy of the proposed methodology was validated with a simulated case; evidence was found to support its effectiveness. We also suggested several directions in which it can be exploited in the future.

  5. Different approaches to estimation of reactor pressure vessel material embrittlement

    Directory of Open Access Journals (Sweden)

    V. M. Revka

    2013-03-01

    Full Text Available Surveillance test data for a nuclear power plant under operation in Ukraine have been used to estimate WWER-1000 reactor pressure vessel (RPV) material embrittlement. The beltline materials (base and weld metal) were characterized using Charpy impact and fracture toughness test methods. The fracture toughness test data were analyzed according to the standard ASTM 1921-05. Pre-cracked Charpy specimens were tested to estimate the shift of the reference temperature T0 due to neutron irradiation. The maximum shift of the reference temperature T0 is 84 °C. A radiation embrittlement rate AF for the RPV material was estimated using the fracture toughness test data. In addition, the AF factor based on the Charpy curve shift (ΔTF) has been evaluated. A comparison of the AF values estimated according to the different approaches has shown good agreement between the radiation shift of the Charpy impact and fracture toughness curves for weld metal with high nickel content (1.88 wt.%). Therefore, Charpy impact test data can be successfully applied to estimate the fracture toughness curve shift and hence the embrittlement rate. Furthermore, it was revealed that the radiation embrittlement rate for weld metal is higher than predicted by the design relationship. The enhanced embrittlement is most probably related to the simultaneously high nickel and manganese content in the weld metal.

  6. Estimating the cost of saving electricity through U.S. utility customer-funded energy efficiency programs

    International Nuclear Information System (INIS)

    Hoffman, Ian M.; Goldman, Charles A.; Rybka, Gregory; Leventis, Greg; Schwartz, Lisa; Sanstad, Alan H.; Schiller, Steven

    2017-01-01

    The program administrator and total cost of saved energy allow comparison of the cost of efficiency across utilities, states, and program types, and can identify potential performance improvements. Comparing program administrator cost with the total cost of saved energy can indicate the degree to which programs leverage investment by participants. Based on reported total costs and savings information for U.S. utility efficiency programs from 2009 to 2013, we estimate the savings-weighted average total cost of saved electricity across 20 states at $0.046 per kilowatt-hour (kW h), comparing favorably with energy supply costs and retail rates. Programs targeted on the residential market averaged $0.030 per kW h compared to $0.053 per kW h for non-residential programs. Lighting programs, with an average total cost of $0.018 per kW h, drove lower savings costs in the residential market. We provide estimates for the most common program types and find that program administrators and participants on average are splitting the costs of efficiency in half. More consistent, standardized and complete reporting on efficiency programs is needed. Differing definitions and quantification of costs, savings and savings lifetimes pose challenges for comparing program results. Reducing these uncertainties could increase confidence in efficiency as a resource among planners and policymakers. - Highlights: • The cost of saved energy allows comparisons among energy resource investments. • Findings from the most expansive collection yet of total energy efficiency program costs. • The weighted average total cost of saved electricity was $0.046 for 20 states in 2009–2013. • Averages in the residential and non-residential sectors were $0.030 and $0.053 per kW h, respectively. • Results strongly indicate need for more consistent, reliable and complete reporting on efficiency programs.
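
    For orientation, a sketch of the levelized cost-of-saved-energy arithmetic that underlies such comparisons, using a simple capital recovery factor; the discount rate, measure lifetime and cost split are assumptions for illustration, not the study's values.

```python
def cost_of_saved_energy(program_cost, participant_cost, annual_kwh_saved,
                         lifetime_years=10, discount_rate=0.06):
    """Levelized total cost of saved electricity ($/kWh), illustrative only."""
    crf = discount_rate / (1.0 - (1.0 + discount_rate) ** -lifetime_years)
    total_cost = program_cost + participant_cost
    return total_cost * crf / annual_kwh_saved

# Example: $1.0M administrator cost matched by $1.0M participant cost,
# saving 5 GWh per year over an assumed 10-year measure life
print(round(cost_of_saved_energy(1e6, 1e6, 5e6), 3), "$/kWh")
```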

  7. Approach of the estimation for the highest energy of the gamma rays

    International Nuclear Information System (INIS)

    Dumitrescu, Gheorghe

    2004-01-01

    Over the last decade, the composition of ultra-high-energy cosmic rays has been under debate, and some authors have suggested that a light composition is a related issue. There has been another debate concerning the upper limit of the energy of gamma rays. Bottom-up approaches suggest a limit at about 10^15 eV, while some top-down approaches raise this limit to about 10^20 eV or above. The present paper provides an approach to estimating the upper limit of the energy of gamma rays using the recent paper of Claus W. Turtur. (author)

  8. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  9. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  10. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
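
    A minimal numerical sketch of the basic observability question (does the measurement Jacobian have full column rank?); the paper's algebraic row/column-transfer method goes further, also identifying critical measurements, observable islands and pseudo-measurements, so this is only a conceptual baseline with an illustrative example.

```python
import numpy as np

def is_observable(H):
    """Crude numerical observability check for a DC state-estimation model.

    H is the measurement Jacobian with one column per state variable
    (e.g. bus angles with the slack bus removed).  A full column rank
    means every state can be estimated from the measurement set.
    """
    return np.linalg.matrix_rank(H) == H.shape[1]

# Tiny 3-bus example (slack angle removed -> 2 states); two measurements
# that each directly observe one remaining bus angle (illustrative values)
H = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(is_observable(H))   # True: the network is fully observable
```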

  11. Review of hardware cost estimation methods, models and tools applied to early phases of space mission planning

    Science.gov (United States)

    Trivailo, O.; Sippel, M.; Şekercioğlu, Y. A.

    2012-08-01

    The primary purpose of this paper is to review currently existing cost estimation methods, models, tools and resources applicable to the space sector. While key space sector methods are outlined, a specific focus is placed on hardware cost estimation on a system level, particularly for early mission phases during which specifications and requirements are not yet crystallised, and information is limited. For the space industry, cost engineering within the systems engineering framework is an integral discipline. The cost of any space program now constitutes a stringent design criterion, which must be considered and carefully controlled during the entire program life cycle. A first step to any program budget is a representative cost estimate which usually hinges on a particular estimation approach, or methodology. Therefore appropriate selection of specific cost models, methods and tools is paramount, a difficult task given the highly variable nature, scope as well as scientific and technical requirements applicable to each program. Numerous methods, models and tools exist. However new ways are needed to address very early, pre-Phase 0 cost estimation during the initial program research and establishment phase when system specifications are limited, but the available research budget needs to be established and defined. Due to their specificity, for vehicles such as reusable launchers with a manned capability, a lack of historical data implies that using either the classic heuristic approach such as parametric cost estimation based on underlying CERs, or the analogy approach, is therefore, by definition, limited. This review identifies prominent cost estimation models applied to the space sector, and their underlying cost driving parameters and factors. Strengths, weaknesses, and suitability to specific mission types and classes are also highlighted. Current approaches which strategically amalgamate various cost estimation strategies both for formulation and validation

  12. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  13. Estimating RASATI scores using acoustical parameters

    International Nuclear Information System (INIS)

    Agüero, P D; Tulli, J C; Moscardi, G; Gonzalez, E L; Uriz, A J

    2011-01-01

    Acoustical analysis of speech using computers has reached an important level of development in recent years. The subjective evaluation of a clinician is complemented by an objective measure of relevant voice parameters. Praat, MDVP (Multi Dimensional Voice Program) and SAV (Software for Voice Analysis) are some examples of software for speech analysis. This paper describes an approach to estimating the subjective characteristics of the RASATI scale given objective acoustical parameters. Two approaches were used: linear regression with non-negativity constraints, and neural networks. The experiments show that such an approach gives correct evaluations within a ±1 error in 80% of the cases.
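
    A small sketch of the first of the two approaches (linear regression with non-negativity constraints) using SciPy's NNLS solver; the feature matrix and clinician scores below are synthetic placeholders rather than real RASATI data.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: voice samples; columns: objective acoustic parameters (e.g. jitter,
# shimmer, HNR).  All values below are synthetic placeholders.
rng = np.random.default_rng(1)
A = rng.random((40, 3))                    # acoustic features
true_w = np.array([2.0, 0.0, 1.5])         # non-negative mapping (illustrative)
y = A @ true_w + rng.normal(0, 0.05, 40)   # clinician scores for one RASATI item

w, residual = nnls(A, y)                   # least squares subject to w >= 0
print("estimated weights:", np.round(w, 2))
predicted = A @ w                          # estimated perceptual scores
```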

  14. A multi-method and multi-scale approach for estimating city-wide anthropogenic heat fluxes

    Science.gov (United States)

    Chow, Winston T. L.; Salamanca, Francisco; Georgescu, Matei; Mahalov, Alex; Milne, Jeffrey M.; Ruddell, Benjamin L.

    2014-12-01

    A multi-method approach estimating summer waste heat emissions from anthropogenic activities (QF) was applied for a major subtropical city (Phoenix, AZ). These included detailed, quality-controlled inventories of city-wide population density and traffic counts to estimate waste heat emissions from population and vehicular sources respectively, and also included waste heat simulations derived from urban electrical consumption generated by a coupled building energy - regional climate model (WRF-BEM + BEP). These component QF data were subsequently summed and mapped through Geographic Information Systems techniques to enable analysis over local (i.e. census-tract) and regional (i.e. metropolitan area) scales. Through this approach, local mean daily QF estimates compared reasonably versus (1.) observed daily surface energy balance residuals from an eddy covariance tower sited within a residential area and (2.) estimates from inventory methods employed in a prior study, with improved sensitivity to temperature and precipitation variations. Regional analysis indicates substantial variations in both mean and maximum daily QF, which varied with urban land use type. Average regional daily QF was ∼13 W m-2 for the summer period. Temporal analyses also indicated notable differences using this approach with previous estimates of QF in Phoenix over different land uses, with much larger peak fluxes averaging ∼50 W m-2 occurring in commercial or industrial areas during late summer afternoons. The spatio-temporal analysis of QF also suggests that it may influence the form and intensity of the Phoenix urban heat island, specifically through additional early evening heat input, and by modifying the urban boundary layer structure through increased turbulence.

  15. Estimating infertility prevalence in low-to-middle-income countries: an application of a current duration approach to Demographic and Health Survey data.

    Science.gov (United States)

    Polis, Chelsea B; Cox, Carie M; Tunçalp, Özge; McLain, Alexander C; Thoma, Marie E

    2017-05-01

    . Overall estimates for TTP >24 or >36 months dropped to 17.7% (95% CI: 15.7-20%) and 11.5% (95% CI: 10.2-13%), respectively. Subgroup analyses showed that estimates varied by age, coital frequency and fertility intentions, while being in a polygynous relationship showed minimal impact. The CD approach may be limited by assumptions on when exposure to risk of pregnancy began and methodologic assumptions required for estimation, which may be less accurate for particular subgroups or populations. Unrecognized pregnancies may have also biased our findings; however, we attempted to address this in our exclusion criteria. Limiting to married/cohabiting couples may have excluded women who are no longer in a relationship after being blamed for infertility. Although probably rare in this setting, we lack information on couples undergoing infertility treatment. Like other TTP measurement approaches, pregnancies resulting from contraceptive failure are not included, which may bias estimates. Nationally representative estimates of TTP and infertility based on a clinical definition of 12 months have been limited within developing countries. This approach represents a pragmatic advance in our ability to measure and monitor infertility in the developing world, with potentially far-reaching implications for policies and programs intended to address reproductive health. There are no competing interests and no financial support was provided for this study. Financial support for Open Access publication was provided by the World Health Organization. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.

  16. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    OpenAIRE

    Adrian Jinich; Dmitrij Rappoport; Ian Dunn; Benjamin Sanchez-Lengeling; Roberto Olivares-Amaya; Elad Noor; Arren Bar Even; Alán Aspuru-Guzik

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfe...

  17. A study of concept-based similarity approaches for recommending program examples

    Science.gov (United States)

    Hosseini, Roya; Brusilovsky, Peter

    2017-07-01

    This paper investigates a range of concept-based example recommendation approaches that we developed to provide example-based problem-solving support in the domain of programming. The goal of these approaches is to offer students a set of most relevant remedial examples when they have trouble solving a code comprehension problem where students examine a program code to determine its output or the final value of a variable. In this paper, we use the ideas of semantic-level similarity-based linking developed in the area of intelligent hypertext to generate examples for the given problem. To determine the best-performing approach, we explored two groups of similarity approaches for selecting examples: non-structural approaches focusing on examples that are similar to the problem in terms of concept coverage and structural approaches focusing on examples that are similar to the problem by the structure of the content. We also explored the value of personalized example recommendation based on student's knowledge levels and learning goal of the exercise. The paper presents concept-based similarity approaches that we developed, explains the data collection studies and reports the result of comparative analysis. The results of our analysis showed better ranking performance of the personalized structural variant of cosine similarity approach.

  18. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave...

  19. Estimating a WTP-based value of a QALY: the 'chained' approach.

    Science.gov (United States)

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality-adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in developing more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses were achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side, with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
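
    A simplified sketch of the chaining arithmetic: a health-state utility from SG/TTO fixes the QALY loss, and a stated WTP to avoid that loss is divided by it. The actual EuroVaQ protocol is more elaborate (risk-based chaining, trimming), so the function and numbers below are purely illustrative.

```python
def wtp_per_qaly(utility, wtp_to_avoid, duration_years):
    """Chained WTP-per-QALY sketch (illustrative only).

    utility        : SG/TTO utility of the health state (0-1)
    wtp_to_avoid   : stated willingness to pay to avoid the state entirely
    duration_years : duration of the health state being valued

    The QALY loss avoided is (1 - utility) * duration, so dividing the WTP
    by that loss chains the two elicitation stages together.
    """
    qaly_loss = (1.0 - utility) * duration_years
    return wtp_to_avoid / qaly_loss

# Example: a state with utility 0.8 lasting 1 year, WTP of 5,000 to avoid it
print(wtp_per_qaly(0.8, 5_000, 1.0))   # 25,000 per QALY
```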

  20. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  1. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, they must be estimated by calibrating model predictions against experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely
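
    A hedged sketch of the calibration idea in the SBM step: a global optimizer (here SciPy's dual_annealing as a stand-in for the paper's simulated annealing) minimizes the least-squares error between a toy state-based model and synthetic kill-kinetics data; the model form, rate bounds and data are illustrative assumptions, not the study's.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Hypothetical observations: fraction of fungal cells killed over time (hours)
t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
killed_obs = np.array([0.0, 0.30, 0.50, 0.75, 0.90])

def model(rates, t):
    """Toy two-parameter state-based model: killing saturates at k_max."""
    k_rate, k_max = rates
    return k_max * (1.0 - np.exp(-k_rate * t))

def sse(rates):
    """Least-squares error between model prediction and data."""
    return np.sum((model(rates, t_obs) - killed_obs) ** 2)

# Global search over assumed transition-rate bounds, as in the SBM calibration
result = dual_annealing(sse, bounds=[(0.01, 5.0), (0.1, 1.0)], seed=0)
print("estimated rates:", np.round(result.x, 3))
```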

  2. Analyses of computer programs for the probabilistic estimation of design earthquake and seismological characteristics of the Korean Peninsula

    International Nuclear Information System (INIS)

    Lee, Gi Hwa

    1997-11-01

    The purpose of the present study is to develop predictive equations from simulated ground motions that are adequate for the Korean Peninsula, and to analyze and utilize computer programs for the probabilistic estimation of design earthquakes. In Part I of the report, computer programs for the probabilistic estimation of design earthquakes are analyzed and applied to seismic hazard characterization of the Korean Peninsula. In Part II of the report, available instrumental earthquake records are analyzed to estimate earthquake source characteristics and medium properties, which are incorporated into the simulation process. Earthquake records are then simulated using the estimated parameters. Finally, predictive equations constructed from the simulations are given in terms of magnitude and hypocentral distance.

  3. Photogrammetric Resection Approach Using Straight Line Features for Estimation of Cartosat-1 Platform Parameters

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2008-08-01

    Full Text Available Classical calibration, or space resection, is the fundamental task in photogrammetry. A lack of sufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. Several other available methods using lines consider only the determination of exterior orientation parameters, with no mention of the simultaneous determination of interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known in both the image and object reference systems. The non-linearity of the model and the problems in point location in digital images are the main drawbacks of the classical approaches. The line-based approach overcomes these problems by using lines, which increases the number of observations that can be provided and significantly improves the overall system redundancy. This paper addresses a mathematical model relating both the image and object reference systems for solving the space resection problem, which is generally used for updating the exterior orientation parameters. In order to solve for the dynamic camera calibration parameters, a sequential estimator (Kalman filtering) is applied to the image in an iterative process. For the dynamic case, e.g. an image sequence of moving objects, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates can be computed from these predicted estimates using the Kalman filtering approach and a basic physical sensor model for each instant of time. The proposed approach is tested with three real data sets, and the results suggest that highly accurate space resection parameters can be obtained with or without using control points, with a progressive reduction in processing time.
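
    A generic, textbook Kalman filter predict/update sketch of the kind of sequential estimator the abstract refers to; the photogrammetric sensor model itself is not reproduced here, and the 1-D example is purely illustrative.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter (generic sketch;
    the paper plugs a photogrammetric sensor model into such an estimator
    to refine camera orientation parameters iteratively)."""
    x_pred = F @ x                          # state prediction
    P_pred = F @ P @ F.T + Q                # covariance prediction
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)   # filtered state
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D toy: sequentially refine a single constant parameter from noisy values
x, P = np.array([0.0]), np.array([[1.0]])
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[0.25]])
for z in [0.9, 1.1, 1.05, 0.95]:
    x, P = kf_step(x, P, np.array([z]), F, H, Q, R)
print(np.round(x, 3))
```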

  4. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system to assign a score to a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to the potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach involves the use of computer models to back-calculate soil contaminant levels in the vadose zone that would create a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels, since the models use the soil clean-up levels as "input" and the groundwater levels as "output." The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington.

  5. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly

  6. Closing the Education Gender Gap: Estimating the Impact of Girls' Scholarship Program in the Gambia

    Science.gov (United States)

    Gajigo, Ousman

    2016-01-01

    This paper estimates the impact of a school fee elimination program for female secondary students in The Gambia to reduce gender disparity in education. To assess the impact of the program, two nationally representative household surveys were used (1998 and 2002/2003). By 2002/2003, about half of the districts in the country had benefited from the…

  7. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    Science.gov (United States)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required mean that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date have mostly relied on static parameter fitting against observations and have made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.

  8. Improved stove programs need robust methods to estimate carbon offsets

    OpenAIRE

    Johnson, Michael; Edwards, Rufus; Masera, Omar

    2010-01-01

    Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...

  9. TEACCH and SIT Approach Program in Children with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Maryam Abshirini

    2016-10-01

    Full Text Available Objective: Sensory Integration Therapy (SIT) is one of the most commonly used treatment approaches for Autism Spectrum Disorders (ASD). Treatment and Education of Autistic and related Communication-handicapped Children (TEACCH) is another, less known approach in Iran. The aim of this study was to compare the effectiveness of the SIT and TEACCH approaches in children with ASD. Design: The study design was quasi-experimental, and the study was conducted in 2014 in the Autism center of Bushehr city, in the south of Iran. Method: Study participants were children aged 3 to 9 with normal IQ who were diagnosed with ASD. The intervention consisted of the SIT and TEACCH treatment approaches, delivered for 6 months to two groups of children (n=20). One group did not receive any intervention during the 6 months. The main outcome was the total score of the Autism Treatment Evaluation Checklist (ATEC). Results: There was no significant difference in ATEC score between the three groups at baseline. The ATEC score was significantly different among the three groups after the intervention, using a one-way ANOVA test. A Tukey test showed that the TEACCH group had more improvement in autism score than the SIT group. The results of an ANCOVA test showed that 70% of the variation in autism score was due to the interventional approaches. Conclusion: This study showed that the TEACCH program was effective in the Iranian culture as well and can be used widely in Iranian Autism centers; furthermore, the TEACCH program was more effective than the SIT program.

  10. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American, where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approximately 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approximately 79% of that of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
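
    A compact sketch of the particle batch smoother idea: prior SWE replicates are reweighted by the batch likelihood of the fSCA observations, and the posterior is a weighted average. The ensemble, observation error and data below are synthetic assumptions, not the Sierra Nevada inputs.

```python
import numpy as np

def particle_batch_smoother(prior_swe, predicted_fsca, observed_fsca, obs_std=0.1):
    """Reweight prior SWE replicates by the Gaussian likelihood of a batch
    of fSCA observations and return the posterior mean SWE.

    prior_swe      : (N,) prior ensemble of peak SWE from the model
    predicted_fsca : (N, T) fSCA predicted by each replicate at T image dates
    observed_fsca  : (T,) satellite fSCA retrievals
    obs_std        : assumed fSCA observation error (illustrative)
    """
    resid = predicted_fsca - observed_fsca
    log_w = -0.5 * np.sum((resid / obs_std) ** 2, axis=1)  # batch log-likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                           # normalized weights
    return np.sum(w * prior_swe)                           # posterior mean SWE

# Synthetic example with 100 replicates and 5 image dates (placeholders)
rng = np.random.default_rng(2)
prior = rng.normal(0.5, 0.15, 100)                         # metres of SWE
pred = np.clip(prior[:, None] + rng.normal(0, 0.05, (100, 5)), 0, 1)
obs = np.array([0.65, 0.60, 0.55, 0.50, 0.45])
print("posterior mean SWE:", round(particle_batch_smoother(prior, pred, obs), 3))
```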

  11. The estimated economic burden of genital herpes in the United States. An analysis using two costing approaches

    Directory of Open Access Journals (Sweden)

    Fisman David N

    2001-06-01

    Full Text Available Abstract Background Only limited data exist on the costs of genital herpes (GH) in the USA. We estimated the economic burden of GH in the USA using two different costing approaches. Methods The first approach was a cross-sectional survey of a sample of primary and secondary care physicians, analyzing health care resource utilization. The second approach was based on the analysis of a large administrative claims data set. Both approaches were used to generate the number of patients with symptomatic GH seeking medical treatment, the average medical expenditures and estimated national costs. Costs were valued from a societal and a third-party payer's perspective in 1996 US dollars. Results In the cross-sectional study, based on an estimated 3.1 million symptomatic episodes per year in the USA, the annual direct medical costs were estimated at a maximum of $984 million. Of these costs, 49.7% were caused by drug expenditures, 47.7% by outpatient medical care and 2.6% by hospital costs. Indirect costs accounted for a further $214 million. The analysis of 1,565 GH cases from the claims database yielded a minimum national estimate of $283 million in direct medical costs. Conclusions GH appears to be an important public health problem from the health economic point of view. The observed difference in direct medical costs may be explained by the influence of compliance with treatment and possible undersampling of subpopulations in the claims data set. The present study demonstrates the validity of using different approaches in estimating the economic burden of a specific disease to the health care system.

  12. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARFs range from 0 to 1; vary according to storm characteristics, such as recurrence interval; and are a function of watershed characteristics, such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARFs and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maximum, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARFs that decrease more rapidly than those from TP-29. Furthermore, the ARFs from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.
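
    A rough sketch of the annual-maxima centered idea: for each annual point maximum, the concurrent depths around it define an areal mean, and the ARF is the event-averaged ratio of areal to point depth. Gauge weighting, duration handling and the frequency stratification used in the paper are omitted, and all numbers are placeholders.

```python
import numpy as np

def annual_maxima_centered_arf(point_maxima, concurrent_depths):
    """Simplified annual-maxima centered ARF.

    point_maxima      : (K,) annual 1-day maxima at the centre gauges (mm)
    concurrent_depths : (K, G) same-day depths at G surrounding gauges (mm)
    """
    # Areal mean includes the centre gauge; real applications weight by area
    areal = np.column_stack([point_maxima, concurrent_depths]).mean(axis=1)
    return np.mean(areal / point_maxima)

# Three annual maxima and four surrounding gauges per event (placeholders)
point = np.array([90.0, 120.0, 75.0])
around = np.array([[80, 60, 70, 90],
                   [100, 95, 110, 120],
                   [70, 55, 60, 75]], dtype=float)
print(round(annual_maxima_centered_arf(point, around), 2))   # ~0.89
```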

  13. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
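
    The package itself is R; as a language-neutral illustration, here is a Python sketch of the CLIME column subproblem posed as a linear program, solved with SciPy's linprog rather than the parametric simplex the package implements. The toy covariance and regularization value are assumptions, and a full CLIME estimator would also symmetrize the result.

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """Solve one CLIME column: minimize ||b||_1 s.t. ||S b - e_j||_inf <= lam,
    using the split b = u - v with u, v >= 0 (conceptual LP sketch only)."""
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    c = np.ones(2 * p)                          # objective: sum(u) + sum(v)
    A_ub = np.block([[S, -S], [-S, S]])         # +/- infinity-norm constraints
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Toy precision-matrix estimate from a sample covariance (placeholders)
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
S = np.cov(X, rowvar=False)
Omega = np.column_stack([clime_column(S, j, lam=0.2) for j in range(S.shape[1])])
print(np.round(Omega, 2))
```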

  14. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for joint operating multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  15. The Elements of Language Curriculum: A Systematic Approach to Program Development.

    Science.gov (United States)

    Brown, James Dean

    A systematic approach to second language curriculum development is outlined, enumerating the phases and activities involved in developing and implementing a sound and effective language program. The first chapter describes a system whereby all language teaching activities can be classified into approaches, syllabuses, techniques, exercises, or…

  16. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that the object will have the required properties. Robustness Programming (RP) is a mathematical approach for robustness estimation and robustness optimisation. An example, in the context of designing a food product, is finding the best composition

  17. Education Demographic and Geographic Estimates Program (EDGE): Locale Boundaries User's Manual. NCES 2016-012

    Science.gov (United States)

    Geverdt, Douglas E.

    2015-01-01

    The National Center for Education Statistics (NCES) Education Demographic and Geographic Estimates (EDGE) program develops geographic data to help policymakers, program administrators, and the public understand relationships between educational institutions and the communities they serve. One of the commonly used geographic data items is the NCES…

  18. Comparison between goal programming and cointegration approaches in enhanced index tracking

    Science.gov (United States)

    Lam, Weng Siew; Jamaan, Saiful Hafizah Hj.

    2013-04-01

    Index tracking is a popular form of passive fund management in the stock market. Passive management is a buy-and-hold strategy that aims to achieve a rate of return similar to the market return. The index tracking problem is the problem of reproducing the performance of a stock market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that minimizes risk or tracking error. Improved index tracking (enhanced index tracking) is a dual-objective optimization problem, a trade-off between maximizing the mean return and minimizing the tracking error. Enhanced index tracking aims to generate excess return over the return achieved by the index. The objective of this study is to compare the portfolio compositions and performances obtained by two different approaches to the enhanced index tracking problem, goal programming and cointegration. The result of this study shows that the optimal portfolios for both approaches are able to outperform the Malaysian market index, the Kuala Lumpur Composite Index. The two approaches give different optimal portfolio compositions. Moreover, the cointegration approach outperforms the goal programming approach because it gives a higher mean return and lower risk or tracking error. Therefore, the cointegration approach is more appropriate for investors in Malaysia.

  19. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  20. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)
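
    A loose sketch of the leave-one-out validation loop described above is given below; the paper's improved genetic-algorithm feature selection and its classifier are replaced by generic stand-ins (SelectKBest and logistic regression), and the feature matrix is a random placeholder rather than real radiomic features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: rows = patients, columns = radiomic features,
# y = IDH1 mutation status (1 = mutant, 0 = wild type).
rng = np.random.default_rng(1)
X = rng.standard_normal((110, 671))
y = rng.integers(0, 2, size=110)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=110),    # keep 110 features, as in the paper
                      LogisticRegression(max_iter=1000))

# Leave-one-out cross-validation: fit on all patients but one, predict the one left out.
pred = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    pred[test_idx] = model.predict(X[test_idx])

print(f"LOOCV accuracy: {(pred == y).mean():.2f}")
```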

  1. A new benchmark for pose estimation with ground truth from virtual reality

    DEFF Research Database (Denmark)

    Schlette, Christian; Buch, Anders Glent; Aksoy, Eren Erdal

    2014-01-01

    The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation......, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi...

  2. Artificial neural network approach to spatial estimation of wind velocity data

    International Nuclear Information System (INIS)

    Oztopal, Ahmet

    2006-01-01

    In any regional wind energy assessment, equal wind velocity or energy lines provide a common basis for meaningful interpretations that furnish essential information for proper design purposes. In order to achieve regional variation descriptions, there are methods of optimum interpolation with classical weighting functions or variogram methods in Kriging methodology. Generally, the weighting functions are logically and geometrically deduced in a deterministic manner, and hence, they are imaginary first approximations for regional variability assessments, such as wind velocity. Geometrical weighting functions are necessary for regional estimation of the regional variable at a location with no measurement, which is referred to as the pivot station from the measurements of a set of surrounding stations. In this paper, weighting factors of surrounding stations necessary for the prediction of a pivot station are presented by an artificial neural network (ANN) technique. The wind speed prediction results are compared with measured values at a pivot station. Daily wind velocity measurements in the Marmara region from 1993 to 1997 are considered for application of the ANN methodology. The model is more appropriate for winter period daily wind velocities, which are significant for energy generation in the study area. Trigonometric point cumulative semivariogram (TPCSV) approach results are compared with the ANN estimations for the same set of data by considering the correlation coefficient (R). Under and over estimation problems in objective analysis can be avoided by the ANN approach

  3. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.

  4. Program Management Approach to the Territorial Development of Small Business

    Directory of Open Access Journals (Sweden)

    Natalia Aleksandrovna Knysh

    2016-06-01

    Full Text Available This article presents the results of research into the state-level application of the program management approach to the territorial development of small business. Studying the main mechanism of state policy implementation in the sphere of small business at the regional level, the authors reveal the necessity of taking territorial specificity into account when government programs of small business development are formed. The analysis of the national practice of utilizing the program management mechanism in the regional system of government support of small entrepreneurship was conducted using the example of Omsk region. The results of the analysis show the inefficiency of the current support system for small business and determine the need to create an integrated model of territorial programming, which would not only contribute to the qualitative development of small business but also ensure the efficient functioning of the program management mechanism. As a result, the authors have created a two-level model of programming the territorial development of small business, which makes it possible to satisfy the needs of entrepreneurship purposefully, taking into account the specificity of the internal and external environment of the region. The first level of the model is methodological and is based on the marketing approach (the concepts of place marketing and relationship marketing) to the operation of the program management mechanism. The second level of the model is methodical. It offers a combination of flexible methods for managing the programming procedure (benchmarking, foresight, crowdsourcing and outsourcing). The given model raises the efficiency of the management decisions of state structures in the sphere of small business. Therefore, it is of interest to the government authorities responsible for regional and municipal support programs of small business, as well

  5. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

    International Nuclear Information System (INIS)

    Bierschbach, M.C.

    1994-12-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the U.S. Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  6. Post-classification approaches to estimating change in forest area using remotely sense auxiliary data.

    Science.gov (United States)

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  7. [Efficacy of the program "Testas's (mis)adventures" to promote the deep approach to learning].

    Science.gov (United States)

    Rosário, Pedro; González-Pienda, Julio Antonio; Cerezo, Rebeca; Pinto, Ricardo; Ferreira, Pedro; Abilio, Lourenço; Paiva, Olimpia

    2010-11-01

    This paper provides information about the efficacy of a tutorial training program intended to enhance elementary fifth graders' study processes and foster their deep approaches to learning. The program "Testas's (mis)adventures" consists of a set of books in which Testas, a typical student, reveals and reflects upon his life experiences during school years. These life stories are an opportunity to present and train a wide range of learning strategies and self-regulatory processes, designed to ensure students' deeper preparation for present and future learning challenges. The program was delivered over a school year in one-hour weekly tutorial sessions. The training program had a semi-experimental design, included an experimental group (n=50) and a control group (n=50), and used pre- and posttest measures (learning strategies' declarative knowledge, learning approaches and academic achievement). Data suggest that the students enrolled in the training program, compared with students in the control group, showed a significant improvement in their declarative knowledge of learning strategies and in their deep approach to learning, consequently lowering their use of a surface approach. However, with respect to academic achievement, no statistically significant differences were found.

  8. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  9. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and better image quality with fast decoding time, but improvement in encoding time remains a challenge. This review paper presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression, different block matching motion estimation approaches for finding the motion vectors in a frame based on inter-frame coding and intra-frame coding (i.e. individual frame coding), and automata theory based coding approaches to represent an image or sequence of images. Though different review papers exist related to fractal coding, this paper is different in many respects. One can develop new shape patterns for motion estimation and modify the existing block matching motion estimation with automata coding to explore the fractal compression technique, with specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
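
    As a minimal illustration of the block matching motion estimation mentioned above, the sketch below performs an exhaustive (full-search) match using the sum of absolute differences on synthetic frames; block size, search range, and data are illustrative only.

```python
import numpy as np

def full_search_block_match(ref, cur, block=8, search=7):
    """Exhaustive block-matching motion estimation: for each block of the
    current frame, find the displacement in the reference frame (within
    +/- search pixels) that minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    motion = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block]
                    sad = np.abs(target.astype(int) - cand.astype(int)).sum()
                    if sad < best[0]:
                        best = (sad, dy, dx)
            motion[by // block, bx // block] = best[1:]
    return motion

# Toy frames: the "current" frame is the reference shifted by two pixels.
ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 2), axis=(0, 1))
print(full_search_block_match(ref, cur)[1, 1])   # expect a displacement of (-2, -2)
```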

  10. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firth's approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. As a result, the MLE estimators fail to converge and cannot be used in modeling. One effort to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in the binary probit regression model under the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. For larger sample sizes, the probability decreased and was relatively similar between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes, while for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimator.
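
    The sketch below shows plain probit MLE, the estimator that breaks down under separation; Firth's bias-reduction penalty is not implemented here, and the data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_negloglik(beta, X, y):
    """Negative log-likelihood of the binary probit model."""
    p = norm.cdf(X @ beta)
    p = np.clip(p, 1e-10, 1 - 1e-10)          # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated data without separation; with separation, this optimization diverges.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
true_beta = np.array([-0.5, 1.0])
y = (X @ true_beta + rng.standard_normal(n) > 0).astype(float)

res = minimize(probit_negloglik, x0=np.zeros(2), args=(X, y), method="BFGS")
print("MLE estimates:", res.x)
```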

  11. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  12. Lift/cruise fan V/STOL technology aircraft design definition study. Volume 3: Development program and budgetary estimates

    Science.gov (United States)

    Obrien, W. J.

    1976-01-01

    The aircraft development program, budgetary estimates in CY 1976 dollars, and cost reduction program variants are presented. Detailed cost matrices are also provided for the mechanical transmission system, turbotip transmission system, and the thrust vector hoods and yaw doors.

  13. PEDIC - A COMPUTER PROGRAM TO ESTIMATE THE EFFECT OF EVACUATION ON POPULATION EXPOSURE FOLLOWING ACUTE RADIONUCLIDE RELEASES TO THE ATMOSPHERE

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D. L.; Peloquin, R. A.

    1981-01-01

    The computer program PEDIC is described for estimation of the effect of evacuation on population exposure. The program uses joint frequency, annual average meteorological data and a simple population evacuation model to estimate exposure reduction due to movement of people away from radioactive plumes following an acute release of activity. Atmospheric dispersion is based on a sector averaged Gaussian model with consideration of plume rise and building wake effects. Appendices to the report provide details of the computer program design, a program listing, input card preparation instructions and sample problems.

  14. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve rainfall estimation from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Mission, GPM) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has a high spatial and temporal resolution, it overestimates or underestimates due to some limitations. PERSIANN-CCS estimates rainfall from information extracted from IR channels at three temperature threshold levels (220, 235, and 253 K). Because the algorithm relies only on infrared data, it misses rainfall from warm clouds and produces false estimates for non-precipitating cold clouds. In this research the effectiveness of using other channels of the GOES satellites, such as visible and water vapor, has been investigated. By using multiple sensors, precipitation can be estimated based on information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  15. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.

  16. Chance-constrained programming approach to natural-gas curtailment decisions

    Energy Technology Data Exchange (ETDEWEB)

    Guldmann, J M

    1981-10-01

    This paper presents a modeling methodology for the determination of optimal-curtailment decisions by a gas-distribution utility during a chronic gas-shortage situation. Based on the end-use priority approach, a linear-programming model is formulated, that reallocates the available gas supply among the utility's customers while minimizing fuel switching, unemployment, and utility operating costs. This model is then transformed into a chance-constrained program in order to account for the weather-related variability of the gas requirements. The methodology is applied to the East Ohio Gas Company. 16 references, 2 figures, 3 tables.

  17. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  18. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    David Cabedo, J.; Moya, Ismael

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity models forecasts. The results obtained indicate that HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification
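
    A minimal sketch of the historical simulation standard approach (without the ARMA-forecast adjustment of HSAF) on a synthetic price series follows; the scaling to longer horizons and the data are illustrative assumptions.

```python
import numpy as np

def historical_var(prices, horizon_days=1, confidence=0.95):
    """Historical-simulation VaR: the empirical quantile of past losses.
    Returns the loss threshold not exceeded with the given confidence."""
    returns = np.diff(np.log(prices))           # daily log returns
    losses = -returns * horizon_days ** 0.5     # crude square-root-of-time scaling
    return np.quantile(losses, confidence)

# Toy series standing in for daily oil prices.
rng = np.random.default_rng(42)
prices = 60 * np.exp(np.cumsum(rng.normal(0, 0.02, size=1000)))
print(f"1-day 95% VaR (log-return units): {historical_var(prices):.4f}")
```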

  19. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    Cabedo, J.D.; Moya, I.

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity models forecasts. The results obtained indicate that HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification. (author)

  20. Use of risk projection models to estimate mortality and incidence from radiation-induced breast cancer in screening programs

    International Nuclear Information System (INIS)

    Ramos, M; Ferrer, S; Villaescusa, J I; Verdu, G; Salas, M D; Cuevas, M D

    2005-01-01

    The authors report on a method to calculate radiological risks, applicable to breast screening programs and other controlled medical exposures to ionizing radiation. In particular, it has been applied to make a risk assessment in the Valencian Breast Cancer Early Detection Program (VBCEDP) in Spain. This method is based on a parametric approach, through Markov processes, of hazard functions for radio-induced breast cancer incidence and mortality, with mean glandular breast dose, attained age and age-at-exposure as covariates. Excess relative risk functions of breast cancer mortality have been obtained from two different case-control studies exposed to ionizing radiation, with different follow-up time: the Canadian Fluoroscopy Cohort Study (1950-1987) and the Life Span Study (1950-1985 and 1950-1990), whereas relative risk functions for incidence have been obtained from the Life Span Study (1958-1993), the Massachusetts tuberculosis cohorts (1926-1985 and 1970-1985), the New York post-partum mastitis patients (1930-1981) and the Swedish benign breast disease cohort (1958-1987). Relative risks from these cohorts have been transported to the target population undergoing screening in the Valencian Community, a region in Spain with about four and a half million inhabitants. The SCREENRISK software has been developed to estimate radiological detriments in breast screening. Some hypotheses corresponding to different screening conditions have been considered in order to estimate the total risk associated with a woman who takes part in all screening rounds. In the case of the VBCEDP, the total radio-induced risk probability for fatal breast cancer is in a range between [5 x 10^-6, 6 x 10^-4] versus the natural rate of dying from breast cancer in the Valencian Community which is 9.2 x 10^-3. The results show that these indicators could be included in quality control tests and could be adequate for making comparisons between several screening programs

  1. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach to enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve two different goals in enhanced index tracking, a trade-off between maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, because of its higher mean return and lower risk, without purchasing all the stocks in the market index.
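
    A minimal goal-programming sketch is given below, assuming a mean-absolute-deviation proxy for tracking error rather than the formulation used in the paper; the two goals are an excess-return target and small period-by-period deviation from the index, and the returns are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def goal_programming_portfolio(asset_returns, index_returns, target_excess=0.0):
    """Minimal goal-programming sketch for enhanced index tracking.
    Goal 1: mean portfolio return should exceed the index mean by target_excess
            (under-achievement d_minus is penalized).
    Goal 2: the period-by-period absolute deviation from the index (a linear
            proxy for tracking error) should be small.
    Decision variables: x = [w (n), d_minus (1), e_plus (T), e_minus (T)]."""
    T, n = asset_returns.shape
    mu = asset_returns.mean(axis=0)
    c = np.concatenate([np.zeros(n), [1.0], np.full(2 * T, 1.0 / T)])

    # Goal 1 as an inequality: mu'w + d_minus >= index mean + target_excess.
    A_ub = np.hstack([-mu[None, :], [[-1.0]], np.zeros((1, 2 * T))])
    b_ub = np.array([-(index_returns.mean() + target_excess)])

    # Goal 2 as equalities: R w - e_plus + e_minus = index returns; plus sum(w) = 1.
    A_eq = np.vstack([
        np.hstack([asset_returns, np.zeros((T, 1)), -np.eye(T), np.eye(T)]),
        np.hstack([np.ones((1, n)), np.zeros((1, 1 + 2 * T))]),
    ])
    b_eq = np.concatenate([index_returns, [1.0]])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1 + 2 * T), method="highs")
    return res.x[:n]

# Toy daily returns for 8 stocks and a noisy equally weighted "index".
rng = np.random.default_rng(2)
R = rng.normal(0.001, 0.02, size=(250, 8))
index = R @ np.full(8, 1 / 8) + rng.normal(0, 0.002, 250)
print(goal_programming_portfolio(R, index).round(3))
```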

  2. Practical approaches to implementing facility wide equipment strengthening programs

    International Nuclear Information System (INIS)

    Kincaid, R.H.; Smietana, E.A.

    1989-01-01

    Equipment strengthening programs typically focus on components required to ensure operability of safety related equipment or to prevent the release of toxic substances. Survival of non-safety related equipment may also be crucial to ensure rapid recovery and minimize business interruption losses. Implementing a strengthening program for non-safety related equipment can be difficult due to the large amounts of equipment involved and limited budget availability. EQE has successfully implemented comprehensive equipment strengthening programs for a number of California corporations. Many of the lessons learned from these projects are applicable to DOE facilities. These include techniques for prioritizing equipment and three general methodologies for anchoring equipment. Pros and cons of each anchorage approach are presented along with typical equipment strengthening costs

  3. A Genetic-Algorithms-Based Approach for Programming Linear and Quadratic Optimization Problems with Uncertainty

    Directory of Open Access Journals (Sweden)

    Weihua Jin

    2013-01-01

    Full Text Available This paper proposes a genetic-algorithms-based approach as an all-purpose problem-solving method for operation programming problems under uncertainty. The proposed method was applied for management of a municipal solid waste treatment system. Compared to the traditional interactive binary analysis, this approach has fewer limitations and is able to reduce the complexity in solving the inexact linear programming problems and inexact quadratic programming problems. The implementation of this approach was performed using the Genetic Algorithm Solver of MATLAB (trademark of MathWorks. The paper explains the genetic-algorithms-based method and presents details on the computation procedures for each type of inexact operation programming problems. A comparison of the results generated by the proposed method based on genetic algorithms with those produced by the traditional interactive binary analysis method is also presented.

  4. Estimating the Risk of Tropical Cyclone Characteristics Along the United States Gulf of Mexico Coastline Using Different Statistical Approaches

    Science.gov (United States)

    Trepanier, J. C.; Ellis, K.; Jagger, T.; Needham, H.; Yuan, J.

    2017-12-01

    Tropical cyclones, with their high wind speeds, high rainfall totals and deep storm surges, frequently strike the United States Gulf of Mexico coastline influencing millions of people and disrupting off shore economic activities. Events, such as Hurricane Katrina in 2005 and Hurricane Isaac in 2012, can be physically different but still provide detrimental effects due to their locations of influence. There are a wide variety of ways to estimate the risk of occurrence of extreme tropical cyclones. Here, the combined risk of tropical cyclone storm surge and nearshore wind speed using a statistical copula is provided for 22 Gulf of Mexico coastal cities. Of the cities considered, Bay St. Louis, Mississippi has the shortest return period for a tropical cyclone with at least a 50 m s-1 nearshore wind speed and a three meter surge (19.5 years, 17.1-23.5). Additionally, a multivariate regression model is provided estimating the compound effects of tropical cyclone tracks, landfall central pressure, the amount of accumulated precipitation, and storm surge for five locations around Lake Pontchartrain in Louisiana. It is shown the most intense tropical cyclones typically approach from the south and a small change in the amount of rainfall or landfall central pressure leads to a large change in the final storm surge depth. Data are used from the National Hurricane Center, U-Surge, SURGEDAT, and Cooperative Observer Program. The differences in the two statistical approaches are discussed, along with the advantages and limitations to each. The goal of combining the results of the two studies is to gain a better understanding of the most appropriate risk estimation technique for a given area.

  5. A New Approach to Commercialization of NASA's Human Research Program Technologies, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This Phase I SBIR proposal describes, "A New Approach to Commercialization of NASA's Human Research Program Technologies." NASA has a powerful research program that...

  6. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis to estimate the parameters of a nonlinear regression model. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article discusses an analytical approach to the gradient algorithm methods in a different way. The paper describes a new iterative technique, the Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
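
    As a short illustration of the Gauss-Newton iteration discussed above, the sketch below fits a two-parameter exponential model to synthetic data; the update rule and example are generic and are not taken from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, tol=1e-8, max_iter=50):
    """Gauss-Newton iteration for nonlinear least squares:
    theta_{k+1} = theta_k - (J'J)^{-1} J' r(theta_k)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r = residual(theta)
        J = jacobian(theta)
        step = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Example: fit y = a * exp(b * x) to noisy data.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * x) + rng.normal(0, 0.05, x.size)

residual = lambda th: th[0] * np.exp(th[1] * x) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * x),
                                       th[0] * x * np.exp(th[1] * x)])
print(gauss_newton(residual, jacobian, theta0=[1.0, 1.0]))   # roughly [2.0, 1.5]
```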

  7. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  8. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
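
    The chapter's examples are richer (maximum likelihood, orthogonal collocation, dynamic models), but the basic idea of estimating a rate constant from reaction data by least squares can be sketched as follows, with synthetic first-order data:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order reaction A -> B: C(t) = C0 * exp(-k * t).
def concentration(t, c0, k):
    return c0 * np.exp(-k * t)

# Synthetic "measured" concentrations with noise.
rng = np.random.default_rng(7)
t = np.linspace(0, 10, 25)
c_obs = concentration(t, c0=1.0, k=0.35) + rng.normal(0, 0.01, t.size)

# Least-squares estimation of the initial concentration and rate constant.
(c0_hat, k_hat), cov = curve_fit(concentration, t, c_obs, p0=[0.8, 0.2])
print(f"estimated C0 = {c0_hat:.3f}, rate constant k = {k_hat:.3f}")
```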

  9. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying; Chang, Xiaohui; Guan, Yongtao

    2018-01-01

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
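
    The estimating-equations framework proposed in the paper generalizes the classical building blocks sketched below: a method-of-moments empirical variogram followed by a least-squares fit of an exponential model, shown here on synthetic data rather than the hydrology dataset used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def empirical_variogram(coords, values, bins):
    """Method-of-moments (Matheron) variogram estimate: for each distance bin,
    half the average squared difference between pairs of observations."""
    d = pdist(coords)
    sq = pdist(values[:, None], metric="sqeuclidean")
    centers, gamma = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            centers.append(d[mask].mean())
            gamma.append(0.5 * sq[mask].mean())
    return np.array(centers), np.array(gamma)

def exponential_model(h, sill, a):
    return sill * (1.0 - np.exp(-h / a))

# Toy spatial field: 300 irregularly placed observations.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(300, 2))
values = np.sin(coords[:, 0]) + 0.3 * rng.standard_normal(300)

h, gamma = empirical_variogram(coords, values, bins=np.linspace(0, 5, 11))
(sill_hat, range_hat), _ = curve_fit(exponential_model, h, gamma, p0=[1.0, 1.0])
print(f"fitted sill = {sill_hat:.3f}, range = {range_hat:.3f}")
```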

  10. Flexible and efficient estimating equations for variogram estimation

    KAUST Repository

    Sun, Ying

    2018-01-11

    Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.

  11. A hybrid system approach to airspeed, angle of attack and sideslip estimation in Unmanned Aerial Vehicles

    KAUST Repository

    Shaqura, Mohammad; Claudel, Christian

    2015-01-01

    , low power autopilots in real-time. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip

  12. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    Science.gov (United States)

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
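
    The paper extends 2SLS/IV to a system of equations with equality constraints; as background, the basic single-equation 2SLS estimator can be sketched in closed form on simulated data with one endogenous regressor and one instrument.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS/IV estimator: regress X on the instruments Z, then regress y on the
    fitted values. Equivalent closed form: (X' Pz X)^{-1} X' Pz y."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto span(Z)
    XPz = X.T @ Pz
    return np.linalg.solve(XPz @ X, XPz @ y)

# Toy model: x is endogenous (correlated with the error u), z is the instrument.
rng = np.random.default_rng(11)
n = 500
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + 0.5 * u + rng.standard_normal(n)
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(y, X, Z))         # roughly [1, 2]
```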

  13. Marketing the dental hygiene program. A public relations approach.

    Science.gov (United States)

    Nielsen, C

    1989-09-01

    Since 1980 there has been a decline in dental hygiene enrollment and graduates. Marketing dental hygiene programs, a recognized component of organizational survival, is necessary to meet societal demands for dental hygiene care now and in the future. The purpose of this article is to examine theories on the marketing of education and to describe a systematic approach to marketing dental hygiene education. Upon examination of these theories, the importance of analysis, planning, implementation, and evaluation/control of a marketing program is found to be essential. Application of the four p's of marketing--product/service, price, place, and promotion--is necessary to achieve marketing's goals and objectives and ultimately the program's mission and goals. Moreover, projecting a quality image of the dental hygiene program and the profession of dental hygiene must be included in the overall marketing plan. Results of an effective marketing plan should increase the number of quality students graduating from the dental hygiene program, ultimately contributing to the quality of oral health care in the community.

  14. Nuclear waste: A look at current use of funds and cost estimates for the future

    International Nuclear Information System (INIS)

    1987-01-01

    The Department of Energy has revised its long-range cost estimates for the disposal of spent nuclear fuel and other highly radioactive waste from about $20 billion to between $21 billion and $41 billion. Delays in meeting some program milestones have added to the costs of the program and consequently DOE has proposed a 5-year delay for the first repository to come on-line. These program uncertainties will limit confidence in the estimates for the next several years. One such uncertainty is the estimated quantity of spent fuel for disposal. DOE's estimating approach overstates the amount of spent fuel that utilities will generate and the fees that they will pay into the Nuclear Waste Fund. As a result, DOE may not be collecting fees at a rate that will cover total program costs and may be overbuilding the waste system

  15. BAESNUM, a conversational computer program for the Bayesian estimation of a parameter by a numerical method

    International Nuclear Information System (INIS)

    Colombo, A.G.; Jaarsma, R.J.

    1982-01-01

    This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input the program requires the law and parameters of the prior distribution and the sample data. As output it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described
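
    BAESNUM itself is an interactive program; the same numerical idea (prior times binomial likelihood, normalized on a grid) can be sketched as follows, with a hypothetical prior and sample chosen purely for illustration.

```python
import numpy as np
from scipy.stats import beta, binom

# Grid over the parameter (here a constant failure probability).
theta = np.linspace(1e-4, 1 - 1e-4, 1000)
dtheta = theta[1] - theta[0]

# Prior: any density evaluated on the grid works; a Beta(2, 8) prior is illustrative.
prior = beta.pdf(theta, 2, 8)

# Binomial likelihood for the observed sample: 3 failures in 50 demands.
likelihood = binom.pmf(3, 50, theta)

# Bayes' theorem, normalized numerically (a histogram-style posterior).
posterior = prior * likelihood
posterior /= posterior.sum() * dtheta

print("posterior mean:", (theta * posterior).sum() * dtheta)
```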

  16. Dynamic Programming Approaches for the Traveling Salesman Problem with Drone

    OpenAIRE

    Bouman, Paul; Agatz, Niels; Schmidt, Marie

    2017-01-01

    A promising new delivery model involves the use of a delivery truck that collaborates with a drone to make deliveries. Effectively combining a drone and a truck gives rise to a new planning problem that is known as the Traveling Salesman Problem with Drone (TSP-D). This paper presents an exact solution approach for the TSP-D based on dynamic programming and presents experimental results of different dynamic programming based heuristics. Our numerical experiments show that our a...
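
    The TSP-D recursion itself is more involved; as background, the classic Held-Karp dynamic program for the plain TSP, which such dynamic programming approaches build on, can be sketched as follows.

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for the plain TSP (no drone): dp[(S, j)] is the
    length of the shortest path that starts at city 0, visits exactly the cities
    in bitmask S, and ends at city j. Exponential in n, so small instances only."""
    n = len(dist)
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 2          # all cities except the depot (bit 0)
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))           # shortest closed tour length for this toy instance
```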

  17. Approaches in estimation of external cost for fuel cycles in the ExternE project

    International Nuclear Information System (INIS)

    Afanas'ev, A.A.; Maksimenko, B.N.

    1998-01-01

    The purposes, content and main results of studies realized within the frameworks of the International Project ExternE which is the first comprehensive attempt to develop general approach to estimation of external cost for different fuel cycles based on utilization of nuclear and fossil fuels, as well as on renewable power sources are discussed. The external cost of a fuel cycle is treated as social and environmental expenditures which are not taken into account by energy producers and consumers, i.e. these are expenditures not included into commercial cost nowadays. The conclusion on applicability of the approach suggested for estimation of population health hazards and environmental impacts connected with electric power generation growth (expressed in money or some other form) is made

  18. Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Douglas B.; Anderson, Dave M.; Belzer, David B.; Cort, Katherine A.; Dirks, James A.; Hostick, Donna J.

    2004-06-18

    The requirements of the Government Performance and Results Act (GPRA) of 1993 mandate the reporting of outcomes expected to result from programs of the Federal government. The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official metrics for its 11 major programs using its Office of Planning, Budget Formulation, and Analysis (OPBFA). OPBFA conducts an annual integrated modeling analysis to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. Two of EERE’s major programs include the Building Technologies Program (BT) and Office of Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports the OPBFA effort by developing the program characterizations and other market information affecting these programs that is necessary to provide input to the EERE integrated modeling analysis. Throughout the report we refer to these programs as “buildings-related” programs, because the approach is not limited in application to BT or WIP. To adequately support OPBFA in the development of official GPRA metrics, PNNL communicates with the various activities and projects in BT and WIP to determine how best to characterize their activities planned for the upcoming budget request. PNNL then analyzes these projects to determine what the results of the characterizations would imply for energy markets, technology markets, and consumer behavior. This is accomplished by developing nonintegrated estimates of energy, environmental, and financial benefits (i.e., outcomes) of the technologies and practices expected to result from the budget request. These characterizations and nonintegrated modeling results are provided to OPBFA as inputs to the official benefits estimates developed for the Federal Budget. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits

  19. Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach

    Directory of Open Access Journals (Sweden)

    C. Nabert

    2017-05-01

    Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using a magnetohydrodynamic (MHD simulation approach. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms magnetosheath data to estimate Earth's dipole moment.

  20. Using GIS to evaluate a fire safety program in North Carolina.

    Science.gov (United States)

    Dudley, Thomas; Creppage, Kathleen; Shanahan, Meghan; Proescholdbell, Scott

    2013-10-01

    Evaluating program impact is a critical aspect of public health. Utilizing Geographic Information Systems (GIS) is a novel way to evaluate programs which try to reduce residential fire injuries and deaths. The purpose of this study is to demonstrate the application of GIS within the evaluation of a smoke alarm installation program in North Carolina. This approach incorporates national fire incident data which, when linked with program data, provides a clear depiction of the 10 years impact of the Get Alarmed, NC! program and estimates the number of potential lives saved. We overlapped Get Alarmed, NC! program installation data with national information on fires using GIS to identify homes that experienced a fire after an alarm was installed and calculated potential lives saved based on program documentation and average housing occupancy. We found that using GIS was an efficient and quick way to match addresses from two distinct sources. From this approach we estimated that between 221 and 384 residents were potentially saved due to alarms installed in their homes by Get Alarmed, NC!. Compared with other program evaluations that require intensive and costly participant telephone surveys and/or in-person interviews, the GIS approach is inexpensive, quick, and can easily analyze large disparate datasets. In addition, it can be used to help target the areas most at risk from the onset. These benefits suggest that by incorporating previously unutilized data, the GIS approach has the potential for broader applications within public health program evaluation.

  1. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C^-1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach

  2. Big Numbers about Small Children: Estimating the Economic Benefits of Addressing Undernutrition.

    Science.gov (United States)

    Alderman, Harold; Behrman, Jere R; Puett, Chloe

    2017-02-01

    Different approaches have been used to estimate the economic benefits of reducing undernutrition and to estimate the costs of investing in such programs on a global scale. While many of these studies are ultimately based on evidence from well-designed efficacy trials, all require a number of assumptions to project the impact of such trials to larger populations and to translate the value of the expected improvement in nutritional status into economic terms. This paper provides a short critique of some approaches to estimating the benefits of investments in child nutrition and then presents an alternative set of estimates based on different core data. These new estimates reinforce the basic conclusions of the existing literature: the economic value from reducing undernutrition in undernourished populations is likely to be substantial.

  3. My Family-Study, Early-Onset Substance use Prevention Program: An Application of Intervention Mapping Approach

    Directory of Open Access Journals (Sweden)

    Mehdi Mirzaei-Alavijeh

    2017-03-01

    Full Text Available Background and Objectives: Based on different studies, substance use is one of the health problems in Iranian society. The prevalence of substance use is on a growing trend; moreover, the age of onset of substance use has declined to early adolescence and even lower. Regarding this, the present study aimed to develop a family-based early-onset substance use prevention program for children (My Family-Study) by using the intervention mapping approach. Materials and Methods: This study describes the research protocol in which the intervention mapping approach was used as a framework to develop My Family-Study. In this study, six steps of intervention mapping were completed. Interviews with experts and a literature review fulfilled the needs assessment. In the second step, the change objectives were written based on the intersection of the performance objectives and the associated determinants in the matrices. After designing the program and planning the implementation of the intervention, the evaluation plan of the program was completed. Results: The use of the intervention mapping approach facilitated the development of a systematic as well as theory- and evidence-based program. Moreover, this approach was helpful in determining the outcomes, performance and change objectives, determinants, theoretical methods, practical applications, intervention, dissemination, and evaluation program. Conclusions: Intervention mapping provided a systematic as well as theory- and evidence-based approach to developing a quality continuing health promotion program.

  4. Controller design approach based on linear programming.

    Science.gov (United States)

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.
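
    The abstract above mentions that the feedback gains are chosen as a linear quadratic regulator (LQR). As a hedged illustration only, the sketch below computes an LQR state-feedback gain for an assumed second-order plant by solving the continuous algebraic Riccati equation; the plant matrices and weights are placeholders, not values from the paper, and the observer/LP part of the method is not reproduced here.

```python
# Minimal LQR gain computation for an assumed (illustrative) second-order plant.
# A, B, Q, R are placeholders chosen for demonstration, not taken from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # assumed open-loop dynamics
B = np.array([[0.0],
              [1.0]])          # assumed input matrix
Q = np.diag([10.0, 1.0])       # state weighting (tuning choice)
R = np.array([[1.0]])          # input weighting (tuning choice)

P = solve_continuous_are(A, B, Q, R)     # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # LQR feedback gain, u = -K x
print("LQR gain K:", K)
```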

  5. COMPETENCE-BASED APPROACH TO MODELLING STRUCTURES OF THE MAIN EDUCATIONAL PROGRAM

    Directory of Open Access Journals (Sweden)

    V. A. Gerasimova

    2015-01-01

    Full Text Available Based on an analysis of scientific works on the competence-based approach in education, the authors demonstrate the need for computer support at the planning and development stage of the main educational program. They develop a graph-based model for automatically forming the structure of the main educational program, propose an integrated criterion for discipline assessment, and develop a strategic map for the complex assessment of a discipline. This theoretical work provides the basis for an automated system to support the planning and development of the main educational program.

  6. Empirical estimates in stochastic programs with probability and second order stochastic dominance constraints

    Czech Academy of Sciences Publication Activity Database

    Omelchenko, Vadym; Kaňková, Vlasta

    2015-01-01

    Roč. 84, č. 2 (2015), s. 267-281 ISSN 0862-9544 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords : Stochastic programming problems * empirical estimates * light and heavy tailed distributions * quantiles Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/E/omelchenko-0454495.pdf

  7. Multilevel approach to mentoring in the Research Experiences for Undergraduates programs

    Science.gov (United States)

    Bonine, K. E.; Dontsova, K.; Pavao-Zuckerman, M.; Paavo, B.; Hogan, D.; Oberg, E.; Gay, J.

    2015-12-01

    This presentation focuses on the different types of mentoring available to students participating in Research Experiences for Undergraduates (REU) programs, with examples, including some new approaches, from The Environmental and Earth Systems Research Experiences for Undergraduates Program at Biosphere 2. While traditional faculty mentors play an essential role in students' development as researchers and professionals, other formal and informal mentoring can be an important component of an REU program and of student experiences. Students receive mentoring from program directors, coordinators, and on-site undergraduate advisors. While working on their research projects, REU students receive essential support and mentoring from undergraduate and graduate students and postdoctoral scientists in the research groups of their primary mentors. Cohort living and group activities provide multiple opportunities for peer mentoring, where each student brings their own strengths and experiences to the group. The Biosphere 2 REU program puts strong emphasis on teaching students to communicate their research effectively to the public. To help REU students learn the needed skills, the outreach personnel at Biosphere 2 mentor and advise students both in groups and individually, in lecture format and by personal example, on best outreach approaches in general and on the individual outreach projects students develop. To further enhance and strengthen outreach mentoring, we used a novel approach of blending the cohort of REU students with the Cal Poly STAR (STEM Teacher And Researcher) Program fellows, future K-12 STEM teachers who are gaining research experience at Biosphere 2. STAR fellows live together with the REU students and participate with them in professional development activities, as well as perform research side by side. Their educational background and experiences give these students a different view and better preparation and tools to effectively communicate and adapt science to lay audiences, a challenge commonly facing

  8. Estimating data from figures with a Web-based program: Considerations for a systematic review.

    Science.gov (United States)

    Burda, Brittany U; O'Connor, Elizabeth A; Webber, Elizabeth M; Redmond, Nadia; Perdue, Leslie A

    2017-09-01

    Systematic reviewers often encounter incomplete or missing data, and the information desired may be difficult to obtain from a study author. Thus, systematic reviewers may have to resort to estimating data from figures with little or no raw data in a study's corresponding text or tables. We discuss a case study in which participants used a publicly available Web-based program, called webplotdigitizer, to estimate data from 2 figures. We evaluated the intraclass correlation coefficient and the accuracy of the estimates relative to the true data to inform considerations when using estimated data from figures in systematic reviews. The estimates for both figures were consistent, although the distribution of estimates in the figure of a continuous outcome was slightly higher. For the continuous outcome, the percent difference ranged from 0.23% to 30.35%, while that for the event rate ranged from 0.22% to 8.92%. For both figures, the intraclass correlation coefficient was excellent (>0.95). Systematic reviewers should take these considerations into account and be transparent when estimating data from figures when the information cannot be obtained from study authors, and should perform sensitivity analyses of pooled results to reduce bias. Copyright © 2017 John Wiley & Sons, Ltd.
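
    As a hedged illustration of the accuracy check described above, the sketch below computes per-point percent differences between figure-derived estimates and the reported "true" values. The formula (absolute difference divided by the true value) and all numbers are assumptions for demonstration, not the study's data.

```python
import numpy as np

# Hypothetical values: data reported by the authors vs. values digitized from a figure.
true_values = np.array([12.0, 18.5, 25.0, 31.2])
estimates   = np.array([11.8, 19.1, 24.6, 32.0])

percent_diff = np.abs(estimates - true_values) / true_values * 100.0
print("Per-point percent difference:", np.round(percent_diff, 2))
print("Range: %.2f%% to %.2f%%" % (percent_diff.min(), percent_diff.max()))
```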

  9. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.

  10. Economic Evaluation of a Comprehensive Teenage Pregnancy Prevention Program: Pilot Program

    Science.gov (United States)

    Rosenthal, Marjorie S.; Ross, Joseph S.; Bilodeau, RoseAnne; Richter, Rosemary S.; Palley, Jane E.; Bradley, Elizabeth H.

    2011-01-01

    Background Previous research has suggested that comprehensive teenage pregnancy prevention programs that address sexual education and life skills development and provide academic support are effective in reducing births among enrolled teenagers. However, there have been limited data on costs and cost-effectiveness of such programs. Objectives To use a community-based participatory research approach to develop estimates of the cost-benefit of the Pathways/Senderos Center, a comprehensive neighborhood-based program to prevent unintended pregnancies and promote positive development for adolescents. Methods Using data from 1997-2003, we conducted an in-time intervention analysis to determine program cost-benefit while teenagers were enrolled and then used an extrapolation analysis to estimate accrued economic benefits and cost-benefit up to age 30. Results The program operating costs totaled $3,228,152.59 and reduced the teenage childbearing rate from 94.10 to 40.00 per 1000 teenage females, averting $52,297.84 in total societal costs, with an economic benefit to society from program participation of $2,673,153.11. Therefore, total costs to society exceeded economic benefits by $559,677.05, or $1,599.08 per adolescent per year. In an extrapolation analysis, benefits to society exceed costs by $10,474.77 per adolescent per year by age 30 on average, with social benefits outweighing total social costs by age 20.1. Conclusions We estimate that this comprehensive teenage pregnancy prevention program would provide societal economic benefits once participants are young adults, suggesting the need to expand beyond pilot demonstrations and evaluate the long-range cost-effectiveness of similarly comprehensive programs when implemented more widely in high-risk neighborhoods. PMID:19896030

  11. Economic evaluation of a comprehensive teenage pregnancy prevention program: pilot program.

    Science.gov (United States)

    Rosenthal, Marjorie S; Ross, Joseph S; Bilodeau, Roseanne; Richter, Rosemary S; Palley, Jane E; Bradley, Elizabeth H

    2009-12-01

    Previous research has suggested that comprehensive teenage pregnancy prevention programs that address sexual education and life skills development and provide academic support are effective in reducing births among enrolled teenagers. However, there have been limited data on the costs and cost effectiveness of such programs. The study used a community-based participatory research approach to develop estimates of the cost-benefit of the Pathways/Senderos Center, a comprehensive neighborhood-based program to prevent unintended pregnancies and promote positive development for adolescents. Using data from 1997-2003, an in-time intervention analysis was conducted to determine program cost-benefit while teenagers were enrolled; an extrapolation analysis was then used to estimate accrued economic benefits and cost-benefit up to age 30 years. The program operating costs totaled $3,228,152.59 and reduced the teenage childbearing rate from 94.10 to 40.00 per 1000 teenage girls, averting $52,297.84 in total societal costs, with an economic benefit to society from program participation of $2,673,153.11. Therefore, total costs to society exceeded economic benefits by $559,677.05, or $1599.08 per adolescent per year. In an extrapolation analysis, benefits to society exceed costs by $10,474.77 per adolescent per year by age 30 years on average, with social benefits outweighing total social costs by age 20.1 years. This comprehensive teenage pregnancy prevention program is estimated to provide societal economic benefits once participants are young adults, suggesting the need to expand beyond pilot demonstrations and evaluate the long-range cost effectiveness of similarly comprehensive programs when they are implemented more widely in high-risk neighborhoods.

  12. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    Science.gov (United States)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation has relied on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. The approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood records respectively, are used to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect the increase in cost required to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore a single design flood estimate, the proposed approach generates intervals of design flood estimates, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.

  13. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of the CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive corrections of the residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential

  14. Cost Estimation and Control for Flight Systems

    Science.gov (United States)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  15. Uncertainty analysis for effluent trading planning using a Bayesian estimation-based simulation-optimization modeling approach.

    Science.gov (United States)

    Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J

    2017-06-01

    In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, through incorporating the concepts of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives for the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision

  16. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    Science.gov (United States)

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), the Kaiser-Guttman eigenvalue-greater-than-one rule, the minimum average partial procedure (MAP), the maximum-likelihood approaches that use fit indices such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, BIC, EBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.
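
    A rough, hedged sketch of the EGA idea follows: estimate a sparse partial-correlation network from item data and count the walktrap communities as the number of dimensions. It uses scikit-learn's cross-validated graphical lasso (a simplification; EGA proper tunes the penalty with EBIC) and python-igraph's walktrap; the simulated two-factor data and all settings are illustrative, not the study's.

```python
# Rough sketch of the EGA workflow on simulated two-factor data.
# Note: GraphicalLassoCV tunes the penalty by cross-validation, not EBIC as in EGA proper.
import numpy as np
from sklearn.covariance import GraphicalLassoCV
import igraph as ig

rng = np.random.default_rng(0)
loadings = np.zeros((10, 2))
loadings[:5, 0] = 0.7
loadings[5:, 1] = 0.7
factors = rng.normal(size=(500, 2))
X = factors @ loadings.T + 0.5 * rng.normal(size=(500, 10))   # 500 persons x 10 items

prec = GraphicalLassoCV().fit(X).precision_
d = np.sqrt(np.diag(prec))
pcor = -prec / np.outer(d, d)          # partial correlations from the precision matrix
np.fill_diagonal(pcor, 0.0)
weights = np.abs(pcor)                 # absolute values as edge weights

g = ig.Graph.Weighted_Adjacency(weights.tolist(), mode="upper", attr="weight")
communities = g.community_walktrap(weights="weight").as_clustering()
print("Estimated number of dimensions:", len(communities))
```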

  17. Estimating Typhoon Rainfall over Sea from SSM/I Satellite Data Using an Improved Genetic Programming

    Science.gov (United States)

    Yeh, K.; Wei, H.; Chen, L.; Liu, G.

    2010-12-01

    Abstract: This paper proposes an improved multi-run genetic programming (GP) and applies it to predict rainfall using meteorological satellite data. GP is a well-known evolutionary programming and data mining method, used to automatically discover the complex relationships among nonlinear systems. The main advantage of GP is that it optimizes the appropriate types of functions and their associated coefficients simultaneously. This study makes an improvement to enhance the ability to escape from local optima during the optimization procedure. The GP is run several times in succession, replacing the terminal nodes at the next run with the best solution from the current run. The improved GP obtains a highly nonlinear mathematical equation to estimate the rainfall. In the case study, this improved GP, combined with SSM/I satellite data, is employed to establish a suitable method for estimating rainfall at the sea surface during typhoon periods. These estimated rainfalls are then verified against data from four rainfall stations located at Peng-Jia-Yu, Don-Gji-Dao, Lan-Yu, and Green Island, four small islands around Taiwan. From the results, the improved GP can generate a sophisticated and accurate nonlinear mathematical equation through a two-run learning procedure, which outperforms the traditional multiple linear regression, empirical equations and back-propagated network

  18. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    International Nuclear Information System (INIS)

    Attanasi, Emil D.; Coburn, Timothy C.

    2004-01-01

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis
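
    As a hedged illustration of the bootstrap idea used above, the sketch below computes a 90% percentile bootstrap confidence interval. The resampled quantity here is simply the mean of made-up per-field values, not the USGS inferred-reserve calculation, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-field inferred-reserve additions (arbitrary units); not USGS data.
reserves = rng.lognormal(mean=0.0, sigma=1.0, size=200)

def bootstrap_ci(data, stat=np.mean, n_boot=10_000, level=0.90, rng=rng):
    """Percentile bootstrap confidence interval for a statistic of the data."""
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

print("90% bootstrap CI for the mean:", bootstrap_ci(reserves))
```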

  19. A simplified approach to estimating the distribution of occasionally-consumed dietary components, applied to alcohol intake

    Directory of Open Access Journals (Sweden)

    Julia Chernova

    2016-07-01

    Full Text Available Abstract Background Within-person variation in dietary records can lead to biased estimates of the distribution of food intake. Quantile estimation is especially relevant in the case of skewed distributions and in the estimation of under- or over-consumption. The analysis of the intake distributions of occasionally-consumed foods presents further challenges due to the high frequency of zero records. Two-part mixed-effects models account for excess zeros, daily variation and correlation arising from repeated individual dietary records. In practice, the application of the two-part model with random effects involves Monte Carlo (MC) simulations. However, these can be time-consuming, and the precision of MC estimates depends on the size of the simulated data, which can hinder reproducibility of results. Methods We propose a new approach based on numerical integration as an alternative to MC simulations to estimate the distribution of occasionally-consumed foods in sub-populations. The proposed approach and MC methods are compared by analysing the alcohol intake distribution in a sub-population of individuals at risk of developing metabolic syndrome. Results The rate of convergence of the results of MC simulations to the results of our proposed method is model-specific, depends on the number of draws from the target distribution, and is relatively slower at the tails of the distribution. Our data analyses also show that model misspecification can lead to incorrect model parameter estimates. For example, under the wrong model assumption of zero correlation between the components, one of the predictors turned out to be non-significant at the 5% significance level (p-value 0.062), but it was estimated as significant in the correctly specified model (p-value 0.016). Conclusions The proposed approach for the analysis of the intake distributions of occasionally-consumed foods provides a quicker and more precise alternative to MC simulation methods, particularly in the
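
    The contrast between Monte Carlo simulation and numerical integration over a random effect can be illustrated with a toy example. The sketch below computes E[exp(mu + b)] for b ~ N(0, sigma^2) by both routes using Gauss-Hermite quadrature; the one-part lognormal "usual amount" model and all parameter values are assumptions for illustration, not the paper's two-part model.

```python
import numpy as np

sigma = 0.8                     # assumed SD of the person-level random effect
mu = 1.2                        # assumed fixed effect (log scale)
g = lambda b: np.exp(mu + b)    # toy "usual amount" as a function of the random effect

# Monte Carlo estimate of E[g(b)], b ~ N(0, sigma^2)
rng = np.random.default_rng(11)
mc = g(rng.normal(0.0, sigma, size=100_000)).mean()

# Gauss-Hermite quadrature estimate of the same expectation
nodes, weights = np.polynomial.hermite.hermgauss(30)
gh = np.sum(weights * g(np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)

print("Monte Carlo:", round(mc, 4),
      "Gauss-Hermite:", round(gh, 4),
      "exact:", round(np.exp(mu + sigma**2 / 2), 4))
```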

  20. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  1. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    Science.gov (United States)

    Hayes, Daniel J.; Turner, David P.; Stinson, Graham; McGuire, A. David; Wei, Yaxing; West, Tristram O.; Heath, Linda S.; de Jong, Bernardus; McConkey, Brian G.; Birdsey, Richard A.; Kurz, Werner A.; Jacobson, Andrew R.; Huntzinger, Deborah N.; Pan, Yude; Post, W. Mac; Cook, Robert B.

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000–2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2, while accounting for lateral transfers of forest and crop products as well as their eventual emissions. The total NEE estimate of a -327 ± 252 TgC yr-1 sink for NA was driven primarily by CO2 uptake in the Forest Lands sector (-248 TgC yr-1), largely in the Northwest and Southeast regions of the US, and in the Crop Lands sector (-297 TgC yr-1), predominantly in the Midwest US states. These sinks are counteracted by the carbon source estimated for the Other Lands sector (+218 TgC yr-1), where much of the forest and crop products are assumed to be returned to the atmosphere (through livestock and human consumption). The ecosystems of Mexico are estimated to be a small net source (+18 TgC yr-1) due to land use change between 1993 and 2002. We compare these inventory-based estimates with results from a suite of terrestrial biosphere and atmospheric inversion models, where the mean continental-scale NEE estimate for each ensemble is -511 TgC yr-1 and -931 TgC yr-1, respectively. In the modeling approaches, all sectors, including Other Lands, were generally estimated to be a carbon sink, driven in part by assumed CO2 fertilization and/or lack of consideration of carbon sources from disturbances and product emissions. Additional fluxes not measured by the inventories, although highly uncertain, could add an additional -239 TgC yr-1 to the inventory-based NA sink estimate, thus suggesting some convergence with the modeling approaches.

  2. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    Science.gov (United States)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
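
    A minimal, hedged sketch of the non-stationary Monte Carlo idea follows: each year of a design lifespan gets its own GEV parameters (here a simple linear trend in the location parameter, purely an assumption for illustration), one annual maximum is drawn per year for many realizations of the lifespan, and the lifespan maximum is summarized. The parameter values are illustrative, not from the study.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
design_life = 50          # years in the design lifespan
n_real = 10_000           # Monte Carlo realizations of the lifespan

# Illustrative non-stationary GEV: location increases linearly over the lifespan.
shape, scale = -0.1, 20.0                     # assumed shape/scale (scipy sign convention)
loc = 100.0 + 0.5 * np.arange(design_life)    # assumed trend in the location parameter

# Draw one annual maximum per year, for every realization of the lifespan.
annual_max = np.column_stack([
    genextreme.rvs(shape, loc=loc[t], scale=scale, size=n_real, random_state=rng)
    for t in range(design_life)
])
lifespan_max = annual_max.max(axis=1)

print("99th percentile of the lifespan maximum:",
      round(float(np.quantile(lifespan_max, 0.99)), 1))
```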

  3. Free-Roaming Dog Population Estimation and Status of the Dog Population Management and Rabies Control Program in Dhaka City, Bangladesh

    Science.gov (United States)

    Tenzin, Tenzin; Ahmed, Rubaiya; Debnath, Nitish C.; Ahmed, Garba; Yamage, Mat

    2015-01-01

    Beginning January 2012, a humane method of dog population management using a Catch-Neuter-Vaccinate-Release (CNVR) program was implemented in Dhaka City, Bangladesh as part of the national rabies control program. To enable this program, the size and distribution of the free-roaming dog population needed to be estimated. We present the results of a dog population survey and a pilot assessment of the CNVR program coverage in Dhaka City. Free-roaming dog population surveys were undertaken in 18 wards of Dhaka City on consecutive days using mark-resight methods. Data was analyzed using Lincoln-Petersen index-Chapman correction methods. The CNVR program was assessed over the two years (2012–2013) whilst the coverage of the CNVR program was assessed by estimating the proportion of dogs that were ear-notched (processed dogs) via dog population surveys. The free-roaming dog population was estimated to be 1,242 (95 % CI: 1205–1278) in the 18 sampled wards and 18,585 dogs in Dhaka City (52 dogs/km2) with an estimated human-to-free-roaming dog ratio of 828:1. During the two year CNVR program, a total of 6,665 dogs (3,357 male and 3,308 female) were neutered and vaccinated against rabies in 29 of the 92 city wards. A pilot population survey indicated a mean CNVR coverage of 60.6% (range 19.2–79.3%) with only eight wards achieving > 70% coverage. Given that the coverage in many neighborhoods was below the WHO-recommended threshold level of 70% for rabies eradications and since the CNVR program takes considerable time to implement throughout the entire Dhaka City area, a mass dog vaccination program in the non-CNVR coverage area is recommended to create herd immunity. The findings from this study are expected to guide dog population management and the rabies control program in Dhaka City and elsewhere in Bangladesh. PMID:25978406
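
    The mark-resight analysis above uses the Lincoln-Petersen index with the Chapman correction. As a hedged illustration only, the sketch below applies that standard formula to made-up ward-level counts, not the Dhaka survey data.

```python
def chapman_estimate(marked, resighted, recaptured):
    """Chapman-corrected Lincoln-Petersen estimate of population size.

    marked      -- dogs marked on the first survey day (M)
    resighted   -- dogs counted on the second survey day (C)
    recaptured  -- marked dogs among those counted on the second day (R)
    """
    return (marked + 1) * (resighted + 1) / (recaptured + 1) - 1

# Hypothetical counts for a single ward (illustration only).
print(round(chapman_estimate(marked=60, resighted=75, recaptured=40)))
```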

  4. The Expected Loss in the Discretization of Multistage Stochastic Programming Problems - Estimation and Convergence Rate

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2009-01-01

    Roč. 165, č. 1 (2009), s. 29-45 ISSN 0254-5330 R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : multistage stochastic programming problems * approximation * discretization * Monte Carlo Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.961, year: 2009 http://library.utia.cas.cz/separaty/2008/E/smid-the expected loss in the discretization of multistage stochastic programming problems - estimation and convergence rate.pdf

  5. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystems Concepts. Volume 3; Program Cost Estimates

    Science.gov (United States)

    Levack, Daniel J. H.

    2000-01-01

    The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data to describe performance, weight, dimensions, etc. was provided along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and the program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.

  6. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as, discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations through MATLAB. The result shows that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fitness result than the state space model with linear and Gaussian assumption.

  7. 76 FR 55673 - Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...

    Science.gov (United States)

    2011-09-08

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9460-8; Docket ID No. EPA-HQ-ORD-2011-0485] Vulnerability... titled, Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...) and Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...

  8. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any NCA parameter or derivation based on AUC(0-∞). In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
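
    The abstract compares its posterior medians to classic Bailer-type estimates. As a hedged illustration of that non-Bayesian baseline only, the sketch below computes AUC(0-tlast) by the linear trapezoidal rule from mean concentrations at each sacrifice time; the concentration values and units are made up, and the Bayesian machinery itself is not reproduced.

```python
import numpy as np

# Hypothetical serial-sacrifice data: mean tissue concentration at each sacrifice time.
times = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])       # h
conc  = np.array([12.0, 18.0, 15.0, 9.0, 4.0, 1.5])     # ng/mL, mean over animals per time

auc_0_tlast = np.trapz(conc, times)   # linear trapezoidal AUC up to the last sample
print("AUC(0-tlast):", round(float(auc_0_tlast), 2), "ng*h/mL")
# Extrapolation to infinity would add C_last / lambda_z, with lambda_z from the terminal slope.
```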

  9. Lord's Wald Test for Detecting DIF in Multidimensional IRT Models: A Comparison of Two Estimation Approaches

    Science.gov (United States)

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  10. Structured-Exercise-Program (SEP): An Effective Training Approach to Key Healthcare Professionals

    Science.gov (United States)

    Miazi, Mosharaf H.; Hossain, Taleb; Tiroyakgosi, C.

    2014-01-01

    A recently conducted data analysis revealed that a structured exercise program is an effective professional training approach in technology-dependent, resource-limited healthcare settings. The aim of the study is to assess the effectiveness of the applied approach, which was designed to observe the level of adherence to newly adopted…

  11. USING OF TASK APPROACH METHOD WHILE TEACHING PROGRAMMING TO THE FUTURE INFORMATICS TEACHERS

    Directory of Open Access Journals (Sweden)

    Oleksandr M. Kryvonos

    2014-04-01

    Full Text Available This article addresses the problem of teaching programming to future informatics teachers from the standpoint of the competence approach to teaching. It defines the role and place of the task approach in teaching the module on "Procedure programming", which is part of the programming course, and examines the systematization of task levels proposed by D. Toleengerov. The article describes the levels of task complexity (reproductive, partially searching, and research/creative) used in developing the methodological materials for the programming course. It also presents examples of tasks on specific topics whose solution requires skills that are crucial for information and communication technology competence.

  12. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper in which a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.

  13. Estimating intervention effects of prevention programs: accounting for noncompliance.

    Science.gov (United States)

    Stuart, Elizabeth A; Perry, Deborah F; Le, Huynh-Nhu; Ialongo, Nicholas S

    2008-12-01

    Individuals not fully complying with their assigned treatments is a common problem encountered in randomized evaluations of behavioral interventions. Treatment group members rarely attend all sessions or do all "required" activities; control group members sometimes find ways to participate in aspects of the intervention. As a result, there is often interest in estimating both the effect of being assigned to participate in the intervention, as well as the impact of actually participating and doing all of the required activities. Methods known broadly as "complier average causal effects" (CACE) or "instrumental variables" (IV) methods have been developed to estimate this latter effect, but they are more commonly applied in medical and treatment research. Since the use of these statistical techniques in prevention trials has been less widespread, many prevention scientists may not be familiar with the underlying assumptions and limitations of CACE and IV approaches. This paper provides an introduction to these methods, described in the context of randomized controlled trials of two preventive interventions: one for perinatal depression among at-risk women and the other for aggressive disruptive behavior in children. Through these case studies, the underlying assumptions and limitations of these methods are highlighted.
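
    The CACE/IV idea described above reduces, in the simplest case, to the Wald estimator: the intention-to-treat effect divided by the difference in participation rates between arms. The sketch below is a hedged illustration on simulated trial data; the compliance rates, effect size, and variable names are assumptions, not drawn from either trial discussed in the paper.

```python
import numpy as np

# Simulated trial: assignment z (0/1), actual participation d (0/1), outcome y.
rng = np.random.default_rng(7)
n = 1000
z = rng.integers(0, 2, size=n)
d = np.where(z == 1, rng.random(n) < 0.7, rng.random(n) < 0.1).astype(int)  # imperfect compliance
y = 0.5 * d + rng.normal(size=n)                                            # true participation effect = 0.5

itt    = y[z == 1].mean() - y[z == 0].mean()     # intention-to-treat effect
uptake = d[z == 1].mean() - d[z == 0].mean()     # difference in participation rates
cace   = itt / uptake                            # Wald / instrumental-variables estimate
print("ITT:", round(itt, 3), "CACE:", round(cace, 3))
```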

  14. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    Science.gov (United States)

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The

  15. Reinventing the Wheel: One Program's Approach to Redesign of Didactic Courses.

    Science.gov (United States)

    Hudak, Nicholas M; Scott, Victoria; Spear, Sherrie B; Hills, Karen J

    2015-12-01

    Curriculum and course redesign are expected and intentional efforts in health professions education. For physician assistant (PA) education, ongoing program self-assessment is a required accreditation standard and may guide deliberate changes within curriculum. The purpose of this article is to describe one PA program’s approach to the redesign of 4 courses into 3 courses that span the entire didactic phase. Significant lessons learned include the importance of planning ahead, identifying key players, documenting the process as part of ongoing self-assessment, competency mapping, and being prepared to make real-time modifications and changes based on course evaluations and faculty feedback. Our approach and guiding principles to the successful redesign of the didactic courses may provide both established and new PA educational programs with useful methods to apply in their own unique curricula.

  16. Parameter estimation of an ARMA model for river flow forecasting using goal programming

    Science.gov (United States)

    Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene

    2006-11-01

    Summary: River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the best-known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed streamflow data, was selected to evaluate the performance of the proposed method. When compared with the usual method of maximum likelihood estimation, the results favored the newly proposed algorithm.
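
    In the same mathematical-programming spirit as the paper (though simplified), the sketch below fits a pure AR(2) model by linear programming, minimizing the sum of absolute residuals with the deviations split into positive and negative parts. The moving-average term and the seasonal goal of the original method are omitted, and the series is synthetic; this is an assumption-laden illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
y = np.zeros(200)                              # synthetic stand-in for streamflow
for t in range(2, 200):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

p = 2
T = len(y) - p
yt = y[p:]
X = np.column_stack([np.ones(T), y[1:-1], y[0:-2]])   # intercept, y[t-1], y[t-2]

n_par = p + 1
c_obj  = np.concatenate([np.zeros(n_par), np.ones(2 * T)])   # minimize sum of |residuals|
A_eq   = np.hstack([X, np.eye(T), -np.eye(T)])               # X*beta + e_plus - e_minus = y_t
bounds = [(None, None)] * n_par + [(0, None)] * (2 * T)

res = linprog(c_obj, A_eq=A_eq, b_eq=yt, bounds=bounds, method="highs")
intercept, phi1, phi2 = res.x[:n_par]
print("Estimated AR coefficients:", round(phi1, 3), round(phi2, 3))
```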

  17. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

    Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the models can be estimated by least-squares, maximum-likelihood, or Yule-Walker method. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior, in most cases
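
    A crude sketch related in spirit to the order-selection problem above: fit AR(p) by least squares for several candidate orders, compute BIC, and convert BIC differences into approximate posterior model probabilities under equal priors. This rough approximation is not the paper's full Bayesian treatment with priors on the partial autocorrelations; the simulated series and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(2, 300):                      # simulate an AR(2) process
    y[t] = 0.75 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_bic(y, p):
    """BIC of a least-squares AR(p) fit, conditional on the first p values."""
    T = len(y) - p
    X = np.column_stack([np.ones(T)] + [y[p - j: len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    rss = np.sum((y[p:] - X @ beta) ** 2)
    k = p + 2                                # AR coefficients + intercept + noise variance
    return T * np.log(rss / T) + k * np.log(T)

orders = range(1, 6)
bic = np.array([ar_bic(y, p) for p in orders])
post = np.exp(-0.5 * (bic - bic.min()))
post /= post.sum()                           # approximate posterior probabilities (equal priors)
for p, pr in zip(orders, post):
    print(f"p = {p}: approx. posterior probability {pr:.3f}")
```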

  18. Real-time approaches to the estimation of local wind velocity for a fixed-wing unmanned air vehicle

    International Nuclear Information System (INIS)

    Chan, W L; Lee, C S; Hsiao, F B

    2011-01-01

    Three real-time approaches to estimating local wind velocity for a fixed-wing unmanned air vehicle are presented in this study. All three methods work around the navigation equations with added wind components. The first approach calculates the local wind speed by substituting the ground speed and ascent rate data given by the Global Positioning System (GPS) into the navigation equations. The second and third approaches utilize the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), respectively. The results show that, despite the nonlinearity of the navigation equations, the EKF performance is proven to be on a par with the UKF. A time-varying noise estimation method based on the Wiener filter is also discussed. Results are compared with the average wind speed measured on the ground. All three approaches are proven to be reliable with stated advantages and disadvantages
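
    The simplest of the ideas above amounts to differencing GPS-derived ground velocity and the air-relative velocity implied by airspeed and heading. The sketch below is a hedged, horizontal-plane illustration with made-up instantaneous readings; it is not the paper's GPS/EKF/UKF implementation and ignores the ascent-rate component.

```python
import numpy as np

# Hypothetical instantaneous readings (not from the paper's flight tests).
ground_speed = 22.0                 # m/s, from GPS
ground_track = np.deg2rad(95.0)     # course over ground
airspeed     = 18.0                 # m/s, from pitot
heading      = np.deg2rad(80.0)     # from AHRS / magnetometer

# East/North components of ground and air-relative velocity.
v_ground = ground_speed * np.array([np.sin(ground_track), np.cos(ground_track)])
v_air    = airspeed     * np.array([np.sin(heading),      np.cos(heading)])

wind = v_ground - v_air             # wind velocity = ground velocity - air-relative velocity
wind_speed = np.linalg.norm(wind)
wind_dir_to = np.rad2deg(np.arctan2(wind[0], wind[1])) % 360   # direction the wind blows toward
print(f"Wind speed {wind_speed:.1f} m/s, blowing toward {wind_dir_to:.0f} deg")
```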

  19. SPATIAL SEARCH IN COMMERCIAL FISHING: A DISCRETE CHOICE DYNAMIC PROGRAMMING APPROACH

    OpenAIRE

    Smith, Martin D.; Provencher, Bill

    2003-01-01

    We specify a discrete choice dynamic programming model of commercial fishing participation and location choices. This approach allows us to examine how fishermen collect information about resource abundance and whether their behavior is forward-looking.

  20. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  1. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maia

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -

  2. Discussion of Regulatory Guide 7.10, emphasizing the graded approach for establishing QA programs

    International Nuclear Information System (INIS)

    Gordon, L.; Lake, W.H.

    1983-01-01

    To assist applicants in establishing an acceptable QA program to meet the programmatic elements of Appendix E to 10 CFR Part 71, Regulatory Guide 7.10 was developed. Regulatory Guide 7.10 is organized in three self-contained ANNEXES. Guidance applicable to designer/fabricators, to users, and users of radiographic devices are in separate annexes. QA programs for packaging to transport radioactive material are similar in regard to the various operations a licensee may be involved in. However, the appropriate QA/QC effort to verify the program elements may vary significantly. This is referred to as the graded approach. Appendix A in the guide addresses the graded approach

  3. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

    The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 μg m-2 s-1 over this period, with a maximum value of 6.21 mg m-2 s-1, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods
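
    As a toy illustration of turning a downwind concentration enhancement into a source strength, the sketch below forward-models a Gaussian plume per unit emission rate and scales by the measured enhancement. The Briggs-style dispersion-coefficient power laws, stability class, measurement value, and units are all assumptions for illustration; the study itself used its own atmospheric dispersion model, not this one.

```python
import numpy as np

def plume_concentration(q, x, y, z, u, h=0.0):
    """Ground-source Gaussian plume concentration (g/m^3) for emission rate q (g/s).

    Uses illustrative rural neutral-stability dispersion coefficients (an assumption),
    with downwind distance x, crosswind offset y, receptor height z, wind speed u (m/s).
    """
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral  = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = np.exp(-(z - h)**2 / (2 * sigma_z**2)) + np.exp(-(z + h)**2 / (2 * sigma_z**2))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical measurement: enhancement over background at 700 m downwind, 2 m height.
measured_enhancement = 2.0e-3   # g/m^3 above background (illustrative value)
wind_speed = 3.0                # m/s

per_unit = plume_concentration(q=1.0, x=700.0, y=0.0, z=2.0, u=wind_speed)
estimated_q = measured_enhancement / per_unit    # back out the emission rate in g/s
print("Estimated emission rate: %.1f g/s" % estimated_q)
```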

  4. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of temporal interaction parameters is considered as an optimization problem, whose goal is to find the transition matrix that maximizes the CRF performance upon a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out upon a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates related to joint or conditional class transition probabilities, which rely on training samples.
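
    The objective evaluated by such an optimizer can be sketched as follows: given per-epoch class scores for each image site and a candidate transition matrix, decode each temporal chain (here with a standard Viterbi pass) and score the overall accuracy against labelled data. This is an illustrative sketch only; the paper's CRF formulation, matching costs and Pattern Search implementation are not reproduced here.

      import numpy as np

      def viterbi_decode(unary_log, trans_log):
          """MAP label sequence for one image site.
          unary_log: (T, K) per-epoch log-scores; trans_log: (K, K) log transition matrix."""
          T, K = unary_log.shape
          score = unary_log[0].copy()
          back = np.zeros((T, K), dtype=int)
          for t in range(1, T):
              cand = score[:, None] + trans_log        # rows: previous label, cols: current
              back[t] = cand.argmax(axis=0)
              score = cand.max(axis=0) + unary_log[t]
          path = [int(score.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      def overall_accuracy(trans, unaries, labels):
          """Objective for a candidate transition matrix: overall accuracy on labelled sites."""
          trans_log = np.log(np.clip(trans, 1e-12, None))
          hits = total = 0
          for u, y in zip(unaries, labels):            # one (T, K) score block per site
              pred = viterbi_decode(np.log(np.clip(u, 1e-12, None)), trans_log)
              hits += sum(int(p == t) for p, t in zip(pred, y))
              total += len(y)
          return hits / total

    A derivative-free optimizer (Pattern Search in the paper, or any comparable routine) would then search the entries of the transition matrix to maximize this objective.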

  5. Cost of employee assistance programs: comparison of national estimates from 1993 and 1995.

    Science.gov (United States)

    French, M T; Zarkin, G A; Bray, J W; Hartwell, T D

    1999-02-01

    The cost and financing of mental health services is gaining increasing importance with the spread of managed care and cost-cutting measures throughout the health care system. The delivery of mental health services through structured employee assistance programs (EAPs) could be undermined by revised health insurance contracts and cutbacks in employer-provided benefits at the workplace. This study uses two recently completed national surveys of EAPs to estimate the costs of providing EAP services during 1993 and 1995. EAP costs are determined by program type, worksite size, industry, and region. In addition, information on program services is reported to determine the most common types and categories of services and whether service delivery changes have occurred between 1993 and 1995. The results of this study will be useful to EAP managers, mental health administrators, and mental health services researchers who are interested in the delivery and costs of EAP services.

  6. Holistic Approach to Learning and Teaching Introductory Object-Oriented Programming

    Science.gov (United States)

    Thota, Neena; Whitfield, Richard

    2010-01-01

    This article describes a holistic approach to designing an introductory, object-oriented programming course. The design is grounded in constructivism and pedagogy of phenomenography. We use constructive alignment as the framework to align assessments, learning, and teaching with planned learning outcomes. We plan learning and teaching activities,…

  7. Intercomparisons of Prognostic, Diagnostic, and Inversion Modeling Approaches for Estimation of Net Ecosystem Exchange over the Pacific Northwest Region

    Science.gov (United States)

    Turner, D. P.; Jacobson, A. R.; Nemani, R. R.

    2013-12-01

    The recent development of large spatially-explicit datasets for multiple variables relevant to monitoring terrestrial carbon flux offers the opportunity to estimate the terrestrial land flux using several alternative, potentially complementary, approaches. Here we developed and compared regional estimates of net ecosystem exchange (NEE) over the Pacific Northwest region of the U.S. using three approaches. In the prognostic modeling approach, the process-based Biome-BGC model was driven by distributed meteorological station data and was informed by Landsat-based coverages of forest stand age and disturbance regime. In the diagnostic modeling approach, the quasi-mechanistic CFLUX model estimated net ecosystem production (NEP) by upscaling eddy covariance flux tower observations. The model was driven by distributed climate data and MODIS FPAR (the fraction of incident PAR that is absorbed by the vegetation canopy). It was informed by coarse resolution (1 km) data about forest stand age. In both the prognostic and diagnostic modeling approaches, emissions estimates for biomass burning, harvested products, and river/stream evasion were added to model-based NEP to get NEE. The inversion model (CarbonTracker) relied on observations of atmospheric CO2 concentration to optimize prior surface carbon flux estimates. The Pacific Northwest is heterogeneous with respect to land cover and forest management, and repeated surveys of forest inventory plots support the presence of a strong regional carbon sink. The diagnostic model suggested a stronger carbon sink than the prognostic model, and a much larger sink than the inversion model. The introduction of Landsat data on disturbance history served to reduce uncertainty with respect to regional NEE in the diagnostic and prognostic modeling approaches. The FPAR data was particularly helpful in capturing the seasonality of the carbon flux using the diagnostic modeling approach. The inversion approach took advantage of a global

  8. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
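
    The simplest of the compared fusion rules, Majority Voting, reduces to a per-voxel vote once the atlas CT skull masks have been registered to the target MRI. The sketch below assumes that registration step has already been done and uses toy arrays in place of real volumes.

      import numpy as np

      def majority_vote(registered_masks):
          """Fuse binary skull masks (one per registered CT atlas) by majority voting.
          registered_masks: list of equally-shaped 0/1 arrays in target MRI space."""
          stack = np.stack(registered_masks, axis=0).astype(np.uint8)
          votes = stack.sum(axis=0)
          return (votes * 2 > len(registered_masks)).astype(np.uint8)  # strict majority

      # hypothetical toy example with three 2x2 "volumes"
      masks = [np.array([[1, 0], [1, 1]]),
               np.array([[1, 0], [0, 1]]),
               np.array([[0, 0], [1, 1]])]
      print(majority_vote(masks))   # -> [[1 0] [1 1]]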

  9. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in the alpha factor model for common cause analysis. • A derivation of plant-specific alpha factors is demonstrated. • We examine the sensitivity of the common cause contribution to total system failure. • We compare beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail mainly on account of common cause failures (CCF). Various models such as Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor exist for estimating the risk from common cause failures. Amongst all, the alpha factor model is considered most suitable for highly redundant systems as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied for the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients (viz., the alpha factors in the model), the importance of deriving plant-specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance for a statistical approach to derive plant-specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment
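
    As a minimal illustration of what plant-specific alpha factors are, the sketch below computes the basic point estimates from counts of CCF events grouped by the number of components failed. It does not reproduce the NUREG/CR-5500-based procedure or the mapping to basic-event probabilities discussed in the paper, and the counts are hypothetical.

      def alpha_factors(event_counts):
          """Point estimates of alpha factors for a common cause group.
          event_counts[k-1] = number of observed events failing exactly k components."""
          total = sum(event_counts)
          return [n_k / total for n_k in event_counts]

      # hypothetical counts for a 4-train system: 120 single, 8 double, 3 triple, 1 quadruple
      alphas = alpha_factors([120, 8, 3, 1])
      print([round(a, 3) for a in alphas])   # alpha_1 ... alpha_4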

  10. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    Science.gov (United States)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  11. Monte Carlo next-event point flux estimation for RCP01

    International Nuclear Information System (INIS)

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-01-01

    Two next event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty

  12. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    Science.gov (United States)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach based on N-PROSAIL model parameter settings at different growth stages was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with the measured CND, with a determination coefficient (R²) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m⁻². The time required for one sample estimation was only 6 ms on a test machine with an Intel(R) Core(TM) i5-2430 quad-core CPU at 2.40 GHz. These results confirmed the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
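
    The retrieval step of a lookup-table approach can be sketched generically: forward-model runs (N-PROSAIL in the paper) populate a table of candidate CND values and simulated reflectances for a growth stage, and retrieval returns the entry whose spectrum is closest to the measurement. The table contents and band values below are hypothetical.

      import numpy as np

      def lut_retrieve(measured_refl, lut_refl, lut_cnd):
          """Nearest-spectrum lookup: return the canopy N density of the LUT entry
          minimising the RMSE to the measured reflectance vector.
          lut_refl: (n_entries, n_bands); lut_cnd: (n_entries,)."""
          rmse = np.sqrt(((lut_refl - measured_refl) ** 2).mean(axis=1))
          return lut_cnd[int(rmse.argmin())]

      # hypothetical stage-specific table with 3 bands
      lut_refl = np.array([[0.05, 0.30, 0.45], [0.04, 0.35, 0.50], [0.03, 0.40, 0.55]])
      lut_cnd = np.array([1.5, 3.0, 4.5])                      # g m-2
      print(lut_retrieve(np.array([0.045, 0.33, 0.48]), lut_refl, lut_cnd))   # -> 3.0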

  13. A new approach to Ozone Depletion Potential (ODP) estimation

    Science.gov (United States)

    Portmann, R. W.; Daniel, J. S.; Yu, P.

    2017-12-01

    The Ozone Depletion Potential (ODP) is given by the time integrated global ozone loss of an ozone depleting substance (ODS) relative to a reference ODS (usually CFC-11). The ODP is used by the Montreal Protocol (and subsequent amendments) to inform policy decisions on the production of ODSs. Since the early 1990s, ODPs have usually been estimated using an approximate formalism that utilizes the lifetime and the fractional release factor of the ODS. This has the advantage that it can utilize measured concentrations of the ODSs to estimate their fractional release factors. However, there is a strong correlation between stratospheric lifetimes and fractional release factors of ODSs, and this can introduce uncertainties into ODP calculations when the terms are estimated independently. Instead, we show that the ODP is proportional to the average global ozone loss per equivalent chlorine molecule released in the stratosphere by the ODS loss process (which we call the Γ factor) and, importantly, this ratio varies only over a relatively small range (roughly 0.3-1.5) for ODSs with stratospheric lifetimes of 20 to more than 1,000 years. The Γ factor varies smoothly with stratospheric lifetime for ODSs with loss processes dominated by photolysis and is larger for long-lived species, while stratospheric OH loss processes produce relatively small Γs that are nearly independent of stratospheric lifetime. The fractional release approach does not accurately capture these relationships. We propose a new formulation that takes advantage of this smooth variation by parameterizing the Γ factor using ozone changes computed using the chemical climate model CESM-WACCM and the NOCAR two-dimensional model. We show that while the absolute Γs vary between WACCM and NOCAR models, much of the difference is removed for the Γ/ΓCFC-11 ratio that is used in the ODP formula. This parameterized method simplifies the computation of ODPs while providing enhanced accuracy compared to the
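
    For context, the fractional-release formulation that the abstract contrasts against is commonly written in a semi-empirical form like the sketch below; the reference constants are typical literature values and only illustrative, not taken from this paper.

      def odp_fractional_release(n_cl, n_br, f_rel, tau, molar_mass,
                                 f_ref=0.47, tau_ref=52.0, m_ref=137.37, alpha=60.0):
          """Semi-empirical ODP relative to CFC-11 (3 Cl atoms, molar mass 137.37 g/mol).
          f_rel/tau are the species' fractional release factor and stratospheric lifetime;
          alpha is the bromine 'effectiveness' factor. Reference values are typical
          literature numbers and only illustrative here."""
          return ((n_cl + alpha * n_br) / 3.0) * (f_rel / f_ref) \
                 * (tau / tau_ref) * (m_ref / molar_mass)

      # hypothetical example for a 2-chlorine compound
      print(odp_fractional_release(n_cl=2, n_br=0, f_rel=0.3, tau=30.0, molar_mass=120.0))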

  14. A novel approach for estimating sugar and alcohol concentrations in wines using refractometer and hydrometer.

    Science.gov (United States)

    Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H

    2009-03-01

    To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
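
    In the same spirit as the paper's empirically derived planar equations, a plane relating alcohol content to the two apparent Brix readings can be fitted by ordinary least squares; the calibration data and resulting coefficients below are hypothetical, not the equations reported in the study.

      import numpy as np

      # hypothetical calibration set: apparent refractometer Brix (A.R.), apparent
      # hydrometer Brix (A.H.), and alcohol content (%, v/v) measured after distillation
      a_r = np.array([20.1, 15.3, 10.2, 6.5, 4.0])
      a_h = np.array([18.0, 11.5, 5.8, 1.2, -1.5])
      alc = np.array([1.0, 4.5, 8.0, 10.5, 12.0])

      # fit the plane  alcohol = c0 + c1*A.R. + c2*A.H.
      X = np.column_stack([np.ones_like(a_r), a_r, a_h])
      coef, *_ = np.linalg.lstsq(X, alc, rcond=None)
      print(coef)                                  # c0, c1, c2
      print(X @ coef)                              # fitted alcohol values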

  15. DBS Programming: An Evolving Approach for Patients with Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Aparna Wagle Shukla

    2017-01-01

    Full Text Available Deep brain stimulation (DBS) surgery is a well-established therapy for control of motor symptoms in Parkinson's disease. Despite appropriate targeting and accurate placement of the DBS lead, thorough and efficient programming is critical for a successful clinical outcome. DBS programming is a time-consuming and laborious manual process. The current approach involves the use of general guidelines covering determination of the lead type, electrode configuration, impedance check, and battery check. However, there are no validated and well-established programming protocols. In this review, we will discuss the current practice and the recent advances in DBS programming, including the use of interleaving, fractionated current, directional steering of current, and the use of novel DBS pulses. These technological improvements are focused on achieving a more efficient control of clinical symptoms with the least possible side effects. Other promising advances include the introduction of computer-guided programming, which will likely improve the efficiency of programming for clinicians, and the possibility of remote Internet-based programming, which will improve access to DBS care for patients.

  16. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, namely Bounded Relative Error. Some examples illustrate the results.

  17. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    Directory of Open Access Journals (Sweden)

    Jan Horbowy

    Full Text Available Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. Methods for the backward extension of the analytical assessment of biomass, for years for which only total catch volumes are available, were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics is governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison, a method which employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others.
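
    The Schaefer-type backward estimation can be sketched as solving the production equation for the earlier biomass, year by year, given only the catch series; the growth rate, carrying capacity and catches below are hypothetical, and this is a generic sketch rather than the paper's modified model.

      from scipy.optimize import brentq

      def backcast_biomass(b_next, catch, r, K):
          """Solve  b_next = b + r*b*(1 - b/K) - catch  for the earlier biomass b."""
          f = lambda b: b + r * b * (1.0 - b / K) - catch - b_next
          return brentq(f, 1e-6, K)

      # hypothetical: walk biomass back from the first assessed year given only catches
      r, K = 0.5, 4.0e6            # intrinsic growth rate, carrying capacity (tonnes)
      biomass = [2.5e6]            # analytical estimate in the first assessed year
      for c in [3.0e5, 2.5e5, 2.0e5]:    # catches for the preceding years, latest first
          biomass.append(backcast_biomass(biomass[-1], c, r, K))
      print([round(b) for b in biomass])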

  18. Conditional random slope: A new approach for estimating individual child growth velocity in epidemiological research.

    Science.gov (United States)

    Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E

    2017-09-10

    Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
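
    A CRS-type specification can be sketched with an off-the-shelf linear mixed model containing a baseline-by-time interaction and a random slope for time; the column names, file and group key below are hypothetical, and this is not the authors' exact model code.

      import pandas as pd
      import statsmodels.formula.api as smf

      # df: one row per measurement with columns
      #   child_id, age_months, weight_kg, baseline_weight (weight at age 0)
      df = pd.read_csv("growth_long.csv")          # hypothetical file

      # random intercept and random slope for age, plus a baseline-by-age interaction
      model = smf.mixedlm("weight_kg ~ age_months * baseline_weight",
                          data=df, groups=df["child_id"], re_formula="~age_months")
      fit = model.fit()
      print(fit.summary())

      # a child's velocity estimate combines the fixed age effect (at its baseline)
      # with its predicted random slope component
      print(fit.random_effects["some_child_id"])   # hypothetical group key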

  19. Improving credibility and transparency of conservation impact evaluations through the partial identification approach.

    Science.gov (United States)

    McConnachie, Matthew M; Romero, Claudia; Ferraro, Paul J; van Wilgen, Brian W

    2016-04-01

    The fundamental challenge of evaluating the impact of conservation interventions is that researchers must estimate the difference between the outcome after an intervention occurred and what the outcome would have been without it (counterfactual). Because the counterfactual is unobservable, researchers must make an untestable assumption that some units (e.g., organisms or sites) that were not exposed to the intervention can be used as a surrogate for the counterfactual (control). The conventional approach is to make a point estimate (i.e., single number along with a confidence interval) of impact, using, for example, regression. Point estimates provide powerful conclusions, but in nonexperimental contexts they depend on strong assumptions about the counterfactual that often lack transparency and credibility. An alternative approach, called partial identification (PI), is to first estimate what the counterfactual bounds would be if the weakest possible assumptions were made. Then, one narrows the bounds by using stronger but credible assumptions based on an understanding of why units were selected for the intervention and how they might respond to it. We applied this approach and compared it with conventional approaches by estimating the impact of a conservation program that removed invasive trees in part of the Cape Floristic Region. Even when we used our largest PI impact estimate, the program's control costs were 1.4 times higher than previously estimated. PI holds promise for applications in conservation science because it encourages researchers to better understand and account for treatment selection biases; can offer insights into the plausibility of conventional point-estimate approaches; could reduce the problem of advocacy in science; might be easier for stakeholders to agree on a bounded estimate than a point estimate where impacts are contentious; and requires only basic arithmetic skills. © 2015 Society for Conservation Biology.
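
    The weakest-assumption starting point of partial identification can be illustrated with Manski-style no-assumption bounds for a bounded outcome; this generic sketch (with hypothetical numbers) is not the specific analysis used for the invasive-tree clearing program.

      import numpy as np

      def no_assumption_bounds(y, treated, y_min, y_max):
          """Worst-case bounds on the average treatment effect when the outcome is
          known to lie in [y_min, y_max] and nothing is assumed about selection."""
          y, treated = np.asarray(y, float), np.asarray(treated, bool)
          p = treated.mean()
          m1, m0 = y[treated].mean(), y[~treated].mean()
          lower = (m1 * p + y_min * (1 - p)) - (m0 * (1 - p) + y_max * p)
          upper = (m1 * p + y_max * (1 - p)) - (m0 * (1 - p) + y_min * p)
          return lower, upper

      # hypothetical: fraction of a site cleared of invasive trees (outcome in [0, 1])
      y       = [0.9, 0.8, 0.95, 0.2, 0.4, 0.3]
      treated = [1,   1,   1,    0,   0,   0  ]
      print(no_assumption_bounds(y, treated, 0.0, 1.0))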

  20. 78 FR 18932 - Public Meeting: Unmanned Aircraft Systems Test Site Program; Privacy Approach

    Science.gov (United States)

    2013-03-28

    ... discussion about which privacy issues are raised by UAS operations and how law, public policy, and the...-0061] Public Meeting: Unmanned Aircraft Systems Test Site Program; Privacy Approach AGENCY: Federal... a public engagement session on Wednesday, April 3, 2013, on the proposed privacy policy approach for...

  1. Effects of Maternal Obesity on Fetal Programming: Molecular Approaches

    Science.gov (United States)

    Neri, Caterina; Edlow, Andrea G.

    2016-01-01

    Maternal obesity has become a worldwide epidemic. Obesity and a high-fat diet have been shown to have deleterious effects on fetal programming, predisposing offspring to adverse cardiometabolic and neurodevelopmental outcomes. Although large epidemiological studies have shown an association between maternal obesity and adverse outcomes for offspring, the underlying mechanisms remain unclear. Molecular approaches have played a key role in elucidating the mechanistic underpinnings of fetal malprogramming in the setting of maternal obesity. These approaches include, among others, characterization of epigenetic modifications, microRNA expression, the gut microbiome, the transcriptome, and evaluation of specific mRNA expression via quantitative reverse transcription polymerase chain reaction (RT-qPCR) in fetuses and offspring of obese females. This work will review the data from animal models and human fluids/cells regarding the effects of maternal obesity on fetal and offspring neurodevelopment and cardiometabolic outcomes, with a particular focus on molecular approaches. PMID:26337113

  2. An index approach to performance-based payments for water quality.

    Science.gov (United States)

    Maille, Peter; Collins, Alan R

    2012-05-30

    In this paper we describe elements of a field research project that presented farmers with economic incentives to control nitrate runoff. The approach used is novel in that payments are based on ambient water quality and water quantity produced by a watershed rather than proxies for water quality conservation. Also, payments are made based on water quality relative to a control watershed, and therefore, account for stochastic fluctuations in background nitrate levels. Finally, the program pays farmers as a group to elicit team behavior. We present our approach to modeling that allowed us to estimate prices for water and resulting payment levels. We then compare these preliminary estimates to the actual values recorded over 33 months of fieldwork. We find that our actual payments were 29% less than our preliminary estimates, due in part to the failure of our ecological model to estimate discharge accurately. Despite this shortfall, the program attracted the participation of 53% of the farmers in the watershed, and resulted in substantial nitrate abatement activity. Given this favorable response, we propose that research efforts focus on implementing field trials of group-level performance-based payments. Ideally these programs would be low risk and control for naturally occurring contamination. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    Science.gov (United States)

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. Edge-based disparity estimation is an important branch of sparse stereo matching research and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and an advanced edge-based method, respectively. Experiments show that our method can provide superior performance in the above comparisons.
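
    Stripped of the semantic-edge costs and the adaptive parameter search described in the paper, the dynamic-programming core can be sketched as a classic scanline optimisation over disparities; the intensities, penalty weight and disparity range below are illustrative.

      import numpy as np

      def scanline_dp(left_row, right_row, max_disp, smooth=0.1):
          """Disparity for one rectified image row by dynamic programming.
          Data cost: |I_left(x) - I_right(x - d)|; smoothness: penalty on |d - d'|."""
          n = len(left_row)
          data = np.full((n, max_disp + 1), np.inf)
          for x in range(n):
              for d in range(min(max_disp, x) + 1):
                  data[x, d] = abs(float(left_row[x]) - float(right_row[x - d]))

          acc = data.copy()
          back = np.zeros((n, max_disp + 1), dtype=int)
          disp_pen = smooth * np.abs(np.arange(max_disp + 1)[:, None]
                                     - np.arange(max_disp + 1)[None, :])
          for x in range(1, n):
              cand = acc[x - 1][:, None] + disp_pen      # rows: previous d, cols: current d
              back[x] = cand.argmin(axis=0)
              acc[x] = data[x] + cand.min(axis=0)

          d = [int(acc[-1].argmin())]
          for x in range(n - 1, 0, -1):
              d.append(int(back[x, d[-1]]))
          return d[::-1]

      left  = np.array([10, 10, 50, 80, 80, 10], float)
      right = np.array([10, 50, 80, 80, 10, 10], float)   # scene shifted by ~1 pixel
      print(scanline_dp(left, right, max_disp=2))          # -> [0, 1, 1, 1, 1, 1]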

  4. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    Science.gov (United States)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters of planned domains (provinces and districts). The estimation of unplanned domains (sub-districts and villages) has its limitation to obtain reliable direct estimates. One of the possible solutions to overcome this problem is employing small area estimation techniques. The popular choice of small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easy allow for outlier-robust estimation. As an alternative approach for this purpose, M-quantile regression approach to small area estimation based on modeling specific M-quantile coefficients of conditional distribution of study variable given auxiliary covariates. It obtained outlier-robust estimation from influence function of M-estimator type and also no need strong distributional assumptions. In this paper, the aim of study is to estimate the poverty indicator at sub-district level in Bogor District-West Java using M-quantile models for small area estimation. Using data taken from National Socioeconomic Survey and Villages Potential Statistics, the results provide a detailed description of pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results showed the framework may be preferable when direct estimate having no incidence of poverty at all in the small area.

  5. An Implementation Research Approach to Evaluating Health Insurance Programs: Insights from India

    Directory of Open Access Journals (Sweden)

    Krishna D. Rao

    2016-05-01

    Full Text Available One of the distinguishing features of implementation research is the importance given to involving implementers in all aspects of research, and as users of research. We report on a recent implementation research effort in India, in which researchers worked together with program implementers from one of the longest-serving government-funded insurance schemes in India, the Rajiv Aarogyasri Scheme (RAS) in the state of undivided Andhra Pradesh, which covers around 70 million people. This paper aims both to report on the process of the collaborative research and to show how the nature of the questions that emerged from the collaborative exercise differed in scope from those typically asked in insurance program evaluations. Starting in 2012, and over the course of a year, staff from the Aarogyasri Health Care Trust (AHCT) and researchers held a series of meetings to identify research questions that could serve as a guide for an evaluation of the RAS. The research questions were derived from the application of a Logical Framework Approach ("log frame") to the RAS. The types of questions that emerged from this collaborative effort were compared with those seen in the published literature on evaluations of insurance programs in low- and middle-income countries (LMICs). In the published literature, 60% of the questions pertained to outputs/outcomes of the program and the remaining 40% related to processes and inputs. In contrast, questions generated from the RAS participatory research process between implementers and researchers had a remarkably different distribution – 81% of the questions addressed program inputs/processes, and 19% outputs and outcomes. An implementation research approach can lead to a substantively different emphasis of research questions. While there are several challenges in collaborative research between implementers and researchers, an implementation research approach can lead to incorporating tacit knowledge of program implementers

  6. Gas contract portfolio management: a stochastic programming approach

    International Nuclear Information System (INIS)

    Haurie, A.; Smeers, Y.; Zaccour, G.

    1991-01-01

    This paper deals with a stochastic programming model which complements long-range market simulation models generating scenarios concerning the evolution of demand and prices for gas in different market segments. A gas company has to negotiate contracts with lengths ranging from one to twenty years. This stochastic model is designed to assess the risk associated with committing the gas production capacity of the company to these market segments. Different approaches are presented to overcome the difficulties associated with the very large size of the resulting optimization problem

  7. A Proposal for the Common Safety Approach of Space Programs

    Science.gov (United States)

    Grimard, Max

    2002-01-01

    For all applications, business and systems related to Space programs, Quality is mandatory and is a key factor for technical as well as economic performance. Up to now, the differences in applications (launchers, manned space-flight, sciences, telecommunications, Earth observation, planetary exploration, etc.) and the differences in technical culture and background of the leading countries (USA, Russia, Europe) have generally led to different approaches in terms of standards and processes for Quality. At a time when international cooperation is quite usual for institutional programs and globalization is the key word for commercial business, it is considered of prime importance to aim at common standards and approaches for Quality in Space Programs. For that reason, the International Academy of Astronautics has set up a Study Group whose mandate is to "Make recommendations to improve the Quality, Reliability, Efficiency, and Safety of space programmes, taking into account the overall environment in which they operate: economical constraints, harsh environments, space weather, long life, no maintenance, autonomy, international co-operation, norms and standards, certification." The paper will introduce the activities of this Study Group, describing a first list of topics which should be addressed. Through this paper, the authors expect to open the discussion to update/enlarge this list of topics and to call for contributors to this Study Group.

  8. A method for the estimation of the residual error in the SALP approach for fault tree analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Contini, S.

    1980-01-01

    The aim of this report is the illustration of the algorithms implemented in the SALP-MP code for the estimation of the residual error. These algorithms are of more general use, and it would be possible to implement them in all previously developed codes of the SALP series, as well as, with minor modifications, in analysis procedures based on 'top-down' approaches. At present, combined 'top-down'-'bottom-up' procedures are being studied in order to take advantage of both approaches for a further reduction of computer time and a better estimation of the residual error, for which the developed algorithms are still applicable

  9. Estimation of genetic parameters for growth traits in a breeding program for rainbow trout (Oncorhynchus mykiss) in China.

    Science.gov (United States)

    Hu, G; Gu, W; Bai, Q L; Wang, B Q

    2013-04-26

    Genetic parameters and breeding values for growth traits were estimated in the first and, currently, the only family selective breeding program for rainbow trout (Oncorhynchus mykiss) in China. Genetic and phenotypic data were collected for growth traits from 75 full-sibling families with a 2-generation pedigree. Genetic parameters and breeding values for growth traits of rainbow trout were estimated using the derivative-free restricted maximum likelihood method. The goodness-of-fit of the models was tested using Akaike and Bayesian information criteria. Genetic parameters and breeding values were estimated using the best-fit model for each trait. Heritability estimates for body weight and body length ranged from 0.20 to 0.45 and from 0.27 to 0.60, respectively, and the heritability of condition factor was 0.34. Our results showed a moderate degree of heritability for growth traits in this breeding program and suggested that the genetic and phenotypic tendencies of body length, body weight, and condition factor were similar. Therefore, the selection of phenotypic values based on pedigree information was also suitable in this research population.

  10. Best estimate approach for the evaluation of critical heat flux phenomenon in the boiling water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kaliatka, Tadas; Kaliatka, Algirdas; Uspuras, Eudenijus; Vaisnoras, Mindaugas [Lithuanian Energy Institute, Kaunas (Lithuania); Mochizuki, Hiroyasu; Rooijen, W.F.G. van [Fukui Univ. (Japan). Research Inst. of Nuclear Engineering

    2017-05-15

    Because of the uncertainties associated with the definition of Critical Heat Flux (CHF), the best estimate approach should be used. In this paper, the application of the best-estimate approach for the analysis of the CHF phenomenon in boiling water reactors is presented. First, nodalizations of the RBMK-1500, BWR-5 and ABWR fuel assemblies were developed using the RELAP5 code. Using the developed models, the CHF and Critical Heat Flux Ratio (CHFR) for the different reactor types were evaluated. The CHF calculation results were compared with well-known experimental data for light water reactors. An uncertainty and sensitivity analysis of the ABWR 8 x 8 fuel assembly CHFR calculation was performed using the GRS (Germany) methodology with the SUSA tool. Finally, values of the Minimum Critical Power Ratio (MCPR) were calculated for the RBMK-1500, BWR-5 and ABWR fuel assemblies. The paper demonstrates how, using the results of the sensitivity analysis, to obtain MCPR values that cover all uncertainties while remaining best estimate.

  11. FGP Approach for Solving Multi-level Multi-objective Quadratic Fractional Programming Problem with Fuzzy parameters

    Directory of Open Access Journals (Sweden)

    m. s. osman

    2017-09-01

    Full Text Available In this paper, we consider a fuzzy goal programming (FGP) approach for solving the multi-level multi-objective quadratic fractional programming (ML-MOQFP) problem with fuzzy parameters in the constraints. Firstly, the concept of the α-cut approach is applied to transform the set of fuzzy constraints into a common deterministic one. Then, the quadratic fractional objective functions in each level are transformed into quadratic objective functions based on a proposed transformation. Secondly, the FGP approach is utilized to obtain a compromise solution for the ML-MOQFP problem by minimizing the sum of the negative deviational variables. Finally, an illustrative numerical example is given to demonstrate the applicability and performance of the proposed approach.

  12. The Building Block Simulation Approach to Program Assessment: The Case of Agriculture Canada's Meat Hygiene Program, 1970-1984

    OpenAIRE

    Brinkman, George L.

    2003-01-01

    For many decades a major emphasis in public policy has been the assurance of food safety and security. Measurement of the economic returns to these programs is often difficult and challenging. In many cases the difficulty in obtaining data and the sheer complexity of the issues make the use of traditional econometric and programming approaches impractical for assessing these activities. This paper presents a summary of an innovative method for measuring benefits and costs of hard-to-assess pr...

  13. The INEL approach: Environmental Restoration Program management and implementation methodology

    International Nuclear Information System (INIS)

    1996-01-01

    The overall objectives of the INEL Environmental Restoration (ER) Program management approach are to facilitate meeting mission needs through the successful implementation of a sound and effective project management philosophy. This paper outlines the steps taken to develop the ER program, and further explains the implementing tools and processes used to achieve what can be viewed as fundamental to a successful program. The various examples provided will demonstrate how the strategies for implementing these operating philosophies are actually present and at work throughout the program, in spite of budget drills and organizational changes within DOE and the implementing contractor. A few of the challenges and successes of the INEL Environmental Restoration Program have included: a) completion of all enforceable milestones to date, b) acceleration of enforceable milestones, c) managing funds to reduce uncosted obligations at year end by utilizing greater than 99% of the FY-95 budget, d) an exemplary safety record, e) developing a strategy for partial delisting of the INEL by the year 2000, f) actively dealing with Natural Resource Damages Assessment issues, g) the achievement of significant project cost reductions, and h) implementation of a partnering charter and application of front-end quality principles

  14. Estimating Return on Investment in Translational Research: Methods and Protocols

    Science.gov (United States)

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  15. Estimating return on investment in translational research: methods and protocols.

    Science.gov (United States)

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  16. Evaluating a physician leadership development program - a mixed methods approach.

    Science.gov (United States)

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study shows how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact for individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study.

  17. A Robust Approach for Clock Offset Estimation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Kim Jang-Sub

    2010-01-01

    Full Text Available The maximum likelihood estimators (MLEs) for the clock phase offset assuming a two-way message exchange mechanism between the nodes of a wireless sensor network were recently derived assuming Gaussian and exponential network delays. However, the MLE performs poorly in the presence of non-Gaussian or nonexponential network delay distributions. Currently, there is a need to develop clock synchronization algorithms that are robust to the distribution of network delays. This paper proposes a clock offset estimator based on the composite particle filter (CPF) to cope with the possible asymmetries and non-Gaussianity of the network delay distributions. Also, a variant of the CPF approach based on bootstrap sampling (BS) is shown to exhibit good performance in the presence of a reduced number of observations. Computer simulations illustrate that the basic CPF and its BS-based variant present superior performance to the MLE under general random network delay distributions such as asymmetric Gaussian, exponential, Gamma, and Weibull, as well as various mixtures.

  18. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the one most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for their estimation. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when
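
    The estimation idea can be sketched with a single-chain random-walk Metropolis sampler standing in for the DREAM scheme actually used, fitting the van Genuchten retention curve to observed water content-tension pairs; the data, prior bounds and step sizes below are hypothetical.

      import numpy as np

      def vg_theta(h, theta_r, theta_s, alpha, n):
          """van Genuchten soil water retention: theta(h) for tension h > 0."""
          m = 1.0 - 1.0 / n
          return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

      def log_post(p, h_obs, th_obs, sigma=0.02):
          theta_r, theta_s, alpha, n = p
          if not (0 <= theta_r < theta_s <= 0.7 and alpha > 0 and n > 1):
              return -np.inf                                  # flat prior with bounds
          resid = th_obs - vg_theta(h_obs, *p)
          return -0.5 * np.sum((resid / sigma) ** 2)

      rng = np.random.default_rng(0)
      h_obs = np.array([10., 30., 100., 300., 1000.])         # hypothetical tensions (cm)
      th_obs = np.array([0.40, 0.36, 0.28, 0.22, 0.18])       # observed water contents

      p = np.array([0.05, 0.45, 0.02, 1.5])                   # start: theta_r, theta_s, alpha, n
      lp = log_post(p, h_obs, th_obs)
      samples = []
      for it in range(20000):
          prop = p + rng.normal(scale=[0.01, 0.01, 0.002, 0.05])
          lp_prop = log_post(prop, h_obs, th_obs)
          if np.log(rng.random()) < lp_prop - lp:
              p, lp = prop, lp_prop
          if it >= 15000:                                     # keep the tail as posterior draws
              samples.append(p.copy())
      print(np.mean(samples, axis=0))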

  19. Calculation of Complexity Costs – An Approach for Rationalizing a Product Program

    DEFF Research Database (Denmark)

    Hansen, Christian Lindschou; Mortensen, Niels Henrik; Hvam, Lars

    2012-01-01

    This paper proposes an operational method for rationalizing a product program based on the calculation of complexity costs. The method takes its starting point in the calculation of complexity costs on a product program level. This is done throughout the value chain, ranging from component inventories at the factory sites all the way to the distribution of finished goods from distribution centers to the customers. The method proposes a step-wise approach including the analysis, quantification and allocation of product program complexity costs by means of identifying a number ... of a product program. These findings represent an improved decision basis for the planning of reactive and proactive initiatives for rationalizing a product program.

  20. Mindfulness-Based Cognitive Approach for Seniors (MBCAS): Program Development and Implementation.

    Science.gov (United States)

    Zellner Keller, Brigitte; Singh, Nirbhay N; Winton, Alan S W

    2014-01-01

    A number of cognitive interventions have been developed to enhance cognitive functioning in the growing population of the elderly. We describe the Mindfulness-Based Cognitive Approach for Seniors (MBCAS), a new training program designed especially for seniors. It was conceived in the context of self-development for seniors who wish to enhance their relationship with their inner and outer selves in order to navigate their aging process more easily and fluently. Physical and psychosocial problems related to aging, as well as some temporal issues, were taken into account in developing this program. Unlike clinically oriented mindfulness-based programs, which are generally delivered during an 8-week period, the MBCAS training program is presented over a period of 8 months. The main objectives of this program are to teach seniors to observe current experiences with nonjudgmental awareness, to identify automatic behaviors or reactions to current experiences that are potentially nonadaptive, and to enhance and reinforce positive coping with typical difficulties that they face in their daily lives. Details of the program development and initial implementation are presented, with suggestions for evaluating the program's effectiveness.

  1. Estimating impacts of plantation forestry on plant biodiversity in southern Chile-a spatially explicit modelling approach.

    Science.gov (United States)

    Braun, Andreas Christian; Koch, Barbara

    2016-10-01

    Monitoring the impacts of land-use practices is of particular importance with regard to biodiversity hotspots in developing countries. Here, conserving the high level of unique biodiversity is challenged by limited possibilities for data collection on site. Especially for such scenarios, assisting biodiversity assessments by remote sensing has proven useful. Remote sensing techniques can be applied to interpolate between biodiversity assessments taken in situ. Through this approach, estimates of biodiversity for entire landscapes can be produced, relating land-use intensity to biodiversity conditions. Such maps are a valuable basis for developing biodiversity conservation plans. Several approaches have been published so far to interpolate local biodiversity assessments in remote sensing data. In the following, a new approach is proposed. Instead of inferring biodiversity using environmental variables or the variability of spectral values, a hypothesis-based approach is applied. Empirical knowledge about biodiversity in relation to land-use is formalized and applied as ascription rules for image data. The method is exemplified for a large study site (over 67,000 km²) in central Chile, where forest industry heavily impacts plant diversity. The proposed approach yields a coefficient of correlation of 0.73 and produces a convincing estimate of regional biodiversity. The framework is broad enough to be applied to other study sites.

  2. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    Science.gov (United States)

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that in an evaluation, the over-emphasis on internal validity reduces that evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive, alternative model, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and becomes therefore a preferable alternative for evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, facilitate in advancing external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence, and offers an alternative perspective for funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  3. Hankin and Reeves' Approach to Estimating Fish Abundance in Small Streams : Limitations and Potential Options.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2000-11-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62% - 86% less than nominal when they did not, with the exception where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20% - 41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, the average coefficient of variation was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (mean CV = 0.27 [SE = 0.0004]) than when they did (mean CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark to evaluate bias and precision of population estimates.
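
    The within-unit removal estimate that Hankin and Reeves' approach builds on can be illustrated with the standard two-pass removal estimator; the electrofishing counts below are hypothetical.

      def two_pass_removal(c1, c2):
          """Two-pass removal estimate of abundance and capture probability.
          Requires c1 > c2; otherwise the estimator is undefined."""
          if c1 <= c2:
              raise ValueError("first-pass catch must exceed second-pass catch")
          n_hat = c1 ** 2 / (c1 - c2)
          p_hat = 1.0 - c2 / c1
          return n_hat, p_hat

      # hypothetical electrofishing counts for one habitat unit
      print(two_pass_removal(c1=38, c2=12))   # -> (about 55.5, 0.68)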

  4. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and potential options; TOPICAL

    International Nuclear Information System (INIS)

    Thompson, William L.

    2000-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62% - 86% less than nominal when they did not, with the exception where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20% - 41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, average coefficient of variation was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (mean CV = 0.27 [SE = 0.0004]) than when they did (mean CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark to evaluate bias and precision of population estimates.

  5. The Air Force Mobile Forward Surgical Team (MFST): Using the Estimating Supplies Program to Validate Clinical Requirement

    National Research Council Canada - National Science Library

    Nix, Ralph E; Onofrio, Kathleen; Konoske, Paula J; Galarneau, Mike R; Hill, Martin

    2004-01-01

    .... The primary objective of the study was to provide the Air Force with the ability to validate clinical requirements of the MFST assemblage, with the goal of using NHRC's Estimating Supplies Program (ESP...

  6. Multi-directional program efficiency

    DEFF Research Database (Denmark)

    Asmild, Mette; Balezentis, Tomas; Hougaard, Jens Leth

    2016-01-01

    The present paper analyses both managerial and program efficiencies of Lithuanian family farms, in the tradition of Charnes et al. (Manag Sci 27(6):668–697, 1981) but with the important difference that multi-directional efficiency analysis rather than the traditional data envelopment analysis approach is used to estimate efficiency. This enables a consideration of input-specific efficiencies. The study shows clear differences between the efficiency scores on the different inputs as well as between the farm types of crop, livestock and mixed farms respectively. We furthermore find that crop farms have the highest program efficiency, but the lowest managerial efficiency, and that the mixed farms have the lowest program efficiency (yet not the highest managerial efficiency).

  7. Towards breaking the spatial resolution barriers: An optical flow and super-resolution approach for sea ice motion estimation

    Science.gov (United States)

    Petrou, Zisis I.; Xian, Yang; Tian, YingLi

    2018-04-01

    Estimation of sea ice motion at fine scales is important for a number of regional and local level applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied on the original images to generate higher resolution versions. Then, an optical flow approach is applied on the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracies. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach results in accurate motion vectors with unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.

  8. A Comparison of Student Academic Performance with Traditional, Online, And Flipped Instructional Approaches in a C# Programming Course

    Directory of Open Access Journals (Sweden)

    Jason H. Sharp

    2017-08-01

    Full Text Available Aim/Purpose: Compared student academic performance on specific course requirements in a C# programming course across three instructional approaches: traditional, online, and flipped. Background: Addressed the following research question: When compared to the online and traditional instructional approaches, does the flipped instructional approach have a greater impact on student academic performance with specific course requirements in a C# programming course? Methodology: Quantitative research design conducted over eight 16-week semesters among a total of 271 participants who were undergraduate students enrolled in a C# programming course. Data collected were grades earned from specific course requirements and were analyzed with the nonparametric Kruskal-Wallis H-Test using IBM SPSS Statistics, Version 23. Contribution: Provides empirical findings related to the impact that different instructional approaches have on student academic performance in a C# programming course. Also describes implications and recommendations for instructors of programming courses regarding instructional approaches that facilitate active learning, student engagement, and self-regulation. Findings: Resulted in four statistically significant findings, indicating that the online and flipped instructional approaches had a greater impact on student academic performance than the traditional approach. Recommendations for Practitioners: Implement instructional approaches such as online, flipped, or blended which foster active learning, student engagement, and self-regulation to increase student academic performance. Recommendation for Researchers: Build upon this study and others similar to it to include factors such as gender, age, ethnicity, and previous academic history. Impact on Society: Acknowledge the growing influence of technology on society as a whole. Higher education coursework and programs are evolving to encompass more digitally-based learning contexts, thus

  9. The SR Approach: a new Estimation Method for Non-Linear and Non-Gaussian Dynamic Term Structure Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller; Christensen, Bent Jesper

    This paper suggests a new and easy approach to estimate linear and non-linear dynamic term structure models with latent factors. We impose no distributional assumptions on the factors and they may therefore be non-Gaussian. The novelty of our approach is to use many observables (yields or bonds p...

  10. Global Kalman filter approaches to estimate absolute angles of lower limb segments.

    Science.gov (United States)

    Nogueira, Samuel L; Lambrecht, Stefan; Inoue, Roberto S; Bortole, Magdo; Montagnoli, Arlindo N; Moreno, Juan C; Rocon, Eduardo; Terra, Marco H; Siqueira, Adriano A G; Pons, Jose L

    2017-05-16

    In this paper we propose the use of global Kalman filters (KFs) to estimate absolute angles of lower limb segments. Standard approaches adopt KFs to improve the performance of inertial sensors based on individual link configurations. In consequence, for a multi-body system like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link angle estimations (e.g., foot). Global KF approaches, on the other hand, correlate the collective contribution of all signals from lower limb segments observed in the state-space model through the filtering process. We present a novel global KF (matricial global KF) relying only on inertial sensor data, and validate both this KF and a previously presented global KF (Markov Jump Linear Systems, MJLS-based KF), which fuses data from inertial sensors and encoders from an exoskeleton. We furthermore compare both methods to the commonly used local KF. The results indicate that the global KFs performed significantly better than the local KF, with an average root mean square error (RMSE) of respectively 0.942° for the MJLS-based KF, 1.167° for the matricial global KF, and 1.202° for the local KFs. Including the data from the exoskeleton encoders also resulted in a significant increase in performance. The results indicate that the current practice of using KFs based on local models is suboptimal. Both the presented KF based on inertial sensor data, as well as our previously presented global approach fusing inertial sensor data with data from exoskeleton encoders, were superior to local KFs. We therefore recommend using global KFs for gait analysis and exoskeleton control.
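
    As a loose illustration of the difference between a local and a global filter, the sketch below stacks the absolute angles of two linked segments into a single state vector so that one correlated measurement update informs both estimates at once. The sampling rate, noise covariances, and measurement model are invented for the example and are not taken from the study.

      # Minimal sketch of a "global" (stacked) Kalman filter for two segment angles.
      # All matrices and noise levels below are assumptions made for illustration.
      import numpy as np

      dt = 0.01                      # 100 Hz sampling (assumed)
      A = np.eye(2)                  # random-walk model for the two absolute angles
      Q = np.eye(2) * 1e-4           # process noise
      H = np.eye(2)                  # both angles observed (e.g., accelerometer inclination)
      R = np.array([[4.0, 1.5],      # measurement noise (deg^2); the off-diagonal term
                    [1.5, 4.0]])     # couples the two segments in the update

      def kf_step(x, P, gyro_rates, z):
          """One predict/update cycle of the stacked (global) filter."""
          x_pred = x + gyro_rates * dt           # predict: integrate both gyro rates
          P_pred = A @ P @ A.T + Q
          S = H @ P_pred @ H.T + R               # update: fuse the stacked measurement z
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros(2), np.eye(2) * 10.0       # [shank angle, foot angle]
      x, P = kf_step(x, P, gyro_rates=np.array([10.0, -5.0]), z=np.array([0.2, -0.1]))
      print(np.round(x, 3))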

  11. RELAP5 simulation of surge line break accident using combined and best estimate plus uncertainty approaches

    International Nuclear Information System (INIS)

    Kristof, Marian; Kliment, Tomas; Petruzzi, Alessandro; Lipka, Jozef

    2009-01-01

    Licensing calculations in a majority of countries worldwide still rely on the combined approach, which uses a best estimate computer code without evaluation of the code model uncertainty together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach to licensing calculations, but some questions remain open, notably from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, on the example of the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factor, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for performing the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.

  12. Program-oriented approach to resource saving issues in construction materials industry

    Directory of Open Access Journals (Sweden)

    Novikova Galina

    2017-01-01

    Full Text Available Construction as a sector of the economy is one of the largest consumers of energy resources, and the building materials industry is today one of its most energy-intensive parts. Enterprises in the building materials industry use different approaches and methods to solve resource and energy problems. Energy saving is treated not as an integrated part of enterprise activity, but as the implementation of specific energy-saving projects, which are limited in time and in resources. The authors suggest using a program-oriented approach to solving the problems of resource and energy saving. For the practical application of the program-oriented approach, they propose a decision-structuring method not previously used for resource and energy saving problems.

  13. An e-Learning Collaborative Filtering Approach to Suggest Problems to Solve in Programming Online Judges

    Science.gov (United States)

    Toledo, Raciel Yera; Mota, Yailé Caballero

    2014-01-01

    The paper proposes a recommender system approach to cover online judge domains. Online judges are e-learning tools that support the automatic evaluation of programming tasks done by individual users, and for this reason they are usually used for training students for programming contests and for supporting basic programming teaching. The…

  14. Optimal Design of Measurement Programs for the Parameter Identification of Dynamic Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Brincker, Rune

    The design of measurement programs devoted to parameter identification of structural dynamic systems is considered. The design problem is formulated as an optimization problem to minimize the total expected cost, that is, the cost of failure and the cost of the measurement program. All the calculations are based on a priori knowledge and engineering judgement. One of the contributions of the approach is that the optimal number of sensors can be estimated. This is shown in a numerical example where the proposed approach is demonstrated. The example is concerned with design of a measurement program...

  15. Improvements in Spectrum's fit to program data tool.

    Science.gov (United States)

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections, people living with HIV, and AIDS-related deaths. This article describes development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
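
    To make the curve-fitting step concrete, the sketch below fits a simple and a double logistic incidence curve to synthetic counts of newly reported cases by ordinary least squares. The functional forms, starting values, and data are assumptions for illustration only; the FPD tool's exact parameterization, likelihood, and adjustments for undiagnosed cases are not reproduced.

      # Hedged sketch: fit simple vs. double logistic trends to yearly case counts.
      import numpy as np
      from scipy.optimize import curve_fit

      def simple_logistic(t, a, r, t0):
          return a / (1.0 + np.exp(-r * (t - t0)))

      def double_logistic(t, a, r1, t1, r2, t2):
          # one assumed parameterization: a rising logistic damped by a falling one
          return a / (1.0 + np.exp(-r1 * (t - t1))) / (1.0 + np.exp(r2 * (t - t2)))

      years = np.arange(1990, 2016, dtype=float)
      rng = np.random.default_rng(0)
      cases = rng.poisson(double_logistic(years, 5000, 0.6, 1995, 0.25, 2008)).astype(float)

      p_s, _ = curve_fit(simple_logistic, years, cases, p0=[4000, 0.5, 2000], maxfev=20000)
      p_d, _ = curve_fit(double_logistic, years, cases,
                         p0=[6000, 0.5, 1996, 0.2, 2007], maxfev=20000)

      rss = lambda f, p: np.sum((cases - f(years, *p)) ** 2)   # crude goodness-of-fit
      print("RSS simple :", round(rss(simple_logistic, p_s)))
      print("RSS double :", round(rss(double_logistic, p_d)))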

  16. The role of efficiency estimates in regulatory price reviews: Ofgem's approach to benchmarking electricity networks

    International Nuclear Information System (INIS)

    Pollitt, Michael

    2005-01-01

    Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting. (author)

  17. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    Science.gov (United States)

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    Science.gov (United States)

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  19. Indirect approach for estimation of forest degradation in non-intact dry forest

    DEFF Research Database (Denmark)

    Dons, Klaus; Bhattarai, Sushma; Meilby, Henrik

    2016-01-01

    Background Implementation of REDD+ requires measurement and monitoring of carbon emissions from forest degradation in developing countries. Dry forests cover about 40 % of the total tropical forest area, are home to large populations, and hence often display high disturbance levels. They are susceptible to gradual but persistent degradation, and monitoring needs to be low cost due to the low potential benefit from carbon accumulation per unit area. Indirect remote sensing approaches may provide estimates of subsistence wood extraction, but sampling of biomass loss produces zero-inflated continuous data that challenge conventional statistical approaches. We introduce the use of Tweedie Compound Poisson distributions from the exponential dispersion family with Generalized Linear Models (CPGLM) to predict biomass loss as a function of distance to the nearest settlement in two forest areas in Tanzania.
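
    A minimal sketch of the kind of model described, assuming simulated data: a Tweedie compound Poisson GLM (variance power between 1 and 2) relating zero-inflated, continuous biomass loss to distance from the nearest settlement. Variable names, the variance power, and all numbers are illustrative assumptions, not values from the study.

      # Hedged sketch of a Tweedie compound-Poisson GLM (CPGLM) on simulated data;
      # variable names, the variance power (1.5) and all numbers are assumptions.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 300
      distance_km = rng.uniform(0.1, 15.0, n)

      # simulate zero-inflated, positive biomass loss that decays with distance
      occurs = rng.random(n) < np.exp(-0.25 * distance_km)         # any extraction at all?
      amount = rng.gamma(2.0, np.exp(1.5 - 0.2 * distance_km))     # size of the loss if it occurs
      biomass_loss = np.where(occurs, amount, 0.0)                 # exact zeros are kept

      X = sm.add_constant(distance_km)
      cpglm = sm.GLM(biomass_loss, X,
                     family=sm.families.Tweedie(var_power=1.5))    # 1 < p < 2: compound Poisson (log link assumed by default)
      res = cpglm.fit()
      print(res.params)                                            # intercept and distance effect

      # predicted biomass loss 1 km and 10 km from the nearest settlement
      print(res.predict(sm.add_constant(np.array([1.0, 10.0]))))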

  20. The integrated approach to teaching programming in secondary school

    Directory of Open Access Journals (Sweden)

    Martynyuk A.A.

    2018-02-01

    Full Text Available The article considers an integrated approach to teaching programming with the use of computer modeling and 3D-graphics technologies, which makes it possible to improve the quality of education. It is shown that this method allows students to systematize knowledge, improves motivation through the inclusion of relevant technologies, develops project-activity skills, strengthens interdisciplinary connections, and promotes the professional and personal self-determination of secondary school students.

  1. An Evaluation System for Training Programs: A Case Study Using a Four-Phase Approach

    Science.gov (United States)

    Lingham, Tony; Richley, Bonnie; Rezania, Davar

    2006-01-01

    Purpose: With the increased importance of training in organizations, creating important and meaningful programs is critical to an organization and its members. The purpose of this paper is to suggest a four-phase systematic approach to designing and evaluating training programs that promotes collaboration between organizational leaders, trainers,…

  2. Estimating the occurrence of foreign material in Advanced Gas-cooled Reactors: A Bayesian Monte Carlo approach

    International Nuclear Information System (INIS)

    Mason, Paolo

    2014-01-01

    Highlights: • The amount of a specific type of foreign material found in UK AGRs has been estimated. • The estimate is based on very few instances of detection in numerous inspections. • A Bayesian Monte Carlo approach was used. • The study supports safety case claims on coolant flow impairment. • The methodology is applicable to any inspection campaign on any plant system. - Abstract: The current occurrence of a particular sort of foreign material in eight UK Advanced Gas-cooled Reactors has been estimated by means of a parametric approach. The study includes both variability, treated in analytic fashion via the combination of standard probability distributions, and the uncertainty in the parameters of the model of choice, whose posterior distribution was inferred in Bayesian fashion by means of a Monte Carlo route consisting of the conditional acceptance of sets of model parameters drawn from a prior distribution based on engineering judgement. The model underlying the present study specifically refers to the re-loading and inspection routines of UK Advanced Gas-cooled Reactors. The approach to inference here presented, however, is of general validity and can be applied to the outcome of any inspection campaign on any plant system, and indeed to any situation in which the outcome of a stochastic process is more easily simulated than described by a probability density or mass function.
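
    The conditional-acceptance Monte Carlo route described above can be illustrated with a toy accept/reject scheme: draw a candidate occurrence rate from an engineering-judgement prior, simulate the inspection outcome it would produce, and keep the draw only if the simulated outcome matches the observed one. All numbers, the prior, and the detection model below are invented.

      # Toy conditional-acceptance ("accept/reject") inference: keep only prior draws
      # whose simulated inspection outcome matches the observed one. Numbers invented.
      import numpy as np

      rng = np.random.default_rng(42)
      n_inspections = 200          # assumed number of inspections performed
      observed_detections = 2      # assumed number of foreign-material items found
      p_detect = 0.6               # assumed per-item probability of detection

      accepted = []
      for _ in range(20_000):
          lam = rng.gamma(1.0, 0.05)                        # prior: items per inspection
          items = rng.poisson(lam, n_inspections)           # items actually present
          found = rng.binomial(items, p_detect).sum()       # items the inspectors would find
          if found == observed_detections:                  # conditional acceptance
              accepted.append(lam)

      accepted = np.array(accepted)
      print(f"accepted draws: {accepted.size}")
      print(f"posterior mean rate: {accepted.mean():.4f} items/inspection")
      print("95% credible interval:", np.round(np.percentile(accepted, [2.5, 97.5]), 4))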

  3. A Multifaceted Approach to Teamwork Assessment in an Undergraduate Business Program

    Science.gov (United States)

    Kemery, Edward R.; Stickney, Lisa T.

    2014-01-01

    We describe a multifaceted, multilevel approach to teamwork learning and assessment. It includes teamwork knowledge, peer and self-appraisal of teamwork behavior, and individual and team performance on objective tests for teaching and assessing teamwork in an undergraduate business program. At the beginning of this semester-long process, students…

  4. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    Science.gov (United States)

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes a multiple criteria decision making (MCDM)-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
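
    The sketch below illustrates the core idea under simplifying assumptions: candidate cluster numbers are the alternatives, three common validity indices are the criteria, and a plain equal-weight ranking stands in for the MCDM methods evaluated in the paper (which are not reproduced here). Data are synthetic.

      # Sketch: candidate cluster numbers as MCDM alternatives, three validity
      # indices as criteria, equal-weight ranking as a stand-in for the MCDM methods.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                                   davies_bouldin_score)

      X, _ = make_blobs(n_samples=500, centers=4, random_state=0)    # synthetic data

      ks = list(range(2, 9))
      rows = []
      for k in ks:
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          rows.append([silhouette_score(X, labels),          # higher is better
                       calinski_harabasz_score(X, labels),   # higher is better
                       -davies_bouldin_score(X, labels)])    # lower is better, so negated

      M = np.array(rows)
      M = (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))      # min-max normalise criteria
      scores = M.mean(axis=1)                                        # equal-weight aggregation
      print("estimated number of clusters:", ks[int(np.argmax(scores))])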

  5. 7 CFR 1435.301 - Annual estimates and quarterly re-estimates.

    Science.gov (United States)

    2010-01-01

    ... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing..., estimates, and re-estimates in this subpart will use available USDA statistics and estimates of production, consumption, and stocks, taking into account, where appropriate, data supplied in reports submitted pursuant...

  6. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    Science.gov (United States)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
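
    As a rough illustration of the filtering cycle only, the sketch below runs a bootstrap particle filter on a deliberately simple coulomb-counting model with an invented open-circuit-voltage curve. It shows the generic predict/weight/resample steps; the paper's P2D-based model, the independent sweeps in time and space, and the 'tether' particle are not reproduced.

      # Bootstrap particle filter sketch on a deliberately simple SoC model
      # (coulomb counting + an invented linear OCV curve), NOT the P2D model.
      import numpy as np

      rng = np.random.default_rng(0)
      capacity_As = 2.3 * 3600                 # assumed 2.3 Ah cell
      dt, current, r0 = 1.0, 2.3, 0.05         # s, A, ohm (all invented)
      ocv = lambda soc: 3.0 + 1.2 * soc        # invented open-circuit voltage (V)

      # simulate a constant-current discharge and the measured terminal voltage
      true_soc, v_meas, s = [], [], 0.9
      for _ in range(1800):
          s -= current * dt / capacity_As
          true_soc.append(s)
          v_meas.append(ocv(s) - current * r0 + rng.normal(0, 0.01))

      # particle filter: unknown initial SoC, generic predict / weight / resample cycle
      N = 2000
      particles = rng.uniform(0.5, 1.0, N)
      for v in v_meas:
          particles -= current * dt / capacity_As + rng.normal(0, 1e-4, N)   # predict
          w = np.exp(-0.5 * ((v - (ocv(particles) - current * r0)) / 0.01) ** 2)
          w /= w.sum()                                                        # weight
          particles = particles[rng.choice(N, N, p=w)]                        # resample

      print(f"final SoC estimate {particles.mean():.3f} vs true {true_soc[-1]:.3f}")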

  7. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.; Claudel, Christian G.

    2012-01-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.
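
    The toy program below is not the Hamilton-Jacobi-based formulation, but it shows the same structural ingredients in miniature: a linear objective (here an L1 data-fitting error), linear constraints, and an unknown integer (a binary mode variable activated through big-M constraints), solved as a mixed integer program. All numbers are invented.

      # Toy MILP sketch (not the Hamilton-Jacobi formulation): L1 fit of two cell
      # densities to measurements, with a binary mode variable switched via big-M.
      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      m1, m2 = 55.0, 30.0        # noisy density measurements (veh/km), invented
      rho_c, M = 40.0, 200.0     # critical density and big-M constant, invented

      # decision vector: [rho1, rho2, e1, e2, b]; e_i are L1 error slacks, b is binary
      c = np.array([0, 0, 1, 1, 0], dtype=float)        # minimise total absolute error
      A = np.array([
          [ 1,  0, -1,  0,  0],    #  rho1 - e1 <= m1
          [-1,  0, -1,  0,  0],    # -rho1 - e1 <= -m1
          [ 0,  1,  0, -1,  0],    #  rho2 - e2 <= m2
          [ 0, -1,  0, -1,  0],    # -rho2 - e2 <= -m2
          [ 1,  0,  0,  0, -M],    #  b = 0  =>  rho1 <= rho_c (free flow)
          [-1,  0,  0,  0,  M],    #  b = 1  =>  rho1 >= rho_c (congested)
      ], dtype=float)
      ub = np.array([m1, -m1, m2, -m2, rho_c, M - rho_c])

      res = milp(c,
                 constraints=LinearConstraint(A, -np.inf, ub),
                 integrality=np.array([0, 0, 0, 0, 1]),        # only b is integer
                 bounds=Bounds([0, 0, 0, 0, 0], [120, 120, np.inf, np.inf, 1]))
      rho1, rho2, _, _, b = res.x
      print(f"estimated densities {rho1:.1f} and {rho2:.1f} veh/km, congested = {int(round(b))}")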

  8. Exact solutions to traffic density estimation problems involving the Lighthill-Whitham-Richards traffic flow model using mixed integer programming

    KAUST Repository

    Canepa, Edward S.

    2012-09-01

    This article presents a new mixed integer programming formulation of the traffic density estimation problem in highways modeled by the Lighthill Whitham Richards equation. We first present an equivalent formulation of the problem using a Hamilton-Jacobi equation. Then, using a semi-analytic formula, we show that the model constraints resulting from the Hamilton-Jacobi equation result in linear constraints, albeit with unknown integers. We then pose the problem of estimating the density at the initial time given incomplete and inaccurate traffic data as a Mixed Integer Program. We then present a numerical implementation of the method using experimental flow and probe data obtained during the Mobile Century experiment. © 2012 IEEE.

  9. From SCIS to PELE: Approaches to Effective Dissemination Implementation and Adaptation of Instructional Programs.

    Science.gov (United States)

    Thier, Herbert D.

    1981-01-01

    Discusses in general terms the approaches necessary for effective dissemination and implementation of an educational program in a country and then relates these approaches to the cooperative relationship between the University of California at Berkeley and the Israel Science Teaching Center's MATAL and PELE Projects. (CS)

  10. Estimation of the neuronal activation using fMRI data: An observer-based approach

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2013-06-01

    This paper deals with the estimation of the neuronal activation and some unmeasured physiological information using the Blood Oxygenation Level Dependent (BOLD) signal measured using functional Magnetic Resonance Imaging (fMRI). We propose to use an observer-based approach applied to the balloon hemodynamic model. The latter describes the relation between the neural activity and the BOLD signal. The balloon model can be expressed in a nonlinear state-space representation where the states, the parameters, and the input (neuronal activation) are unknown. This study focuses only on the estimation of the hidden states and the neuronal activation. The model is first linearized around the equilibrium and an observer is applied to this linearized version. Numerical results performed on synthetic data are presented.
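
    A generic sketch of the observer idea, not of the balloon model itself: a small linearized state-space system is driven by an unmeasured input, the input is appended to the state as a near-constant extra component, and a Luenberger observer reconstructs both the hidden states and that input from the measured output. All matrices and noise levels are invented.

      # Generic observer sketch (NOT the balloon model): the unmeasured input u is
      # appended to the state as a near-constant component and reconstructed from y.
      import numpy as np
      from scipy.signal import place_poles

      dt = 0.1
      # augmented state x = [x1, x2, u]; matrices are invented for illustration
      A = np.array([[1.0,       dt,            0.0],
                    [-0.2 * dt, 1.0 - 0.3 * dt, dt],
                    [0.0,       0.0,           1.0]])
      C = np.array([[1.0, 0.0, 0.0]])        # only a "BOLD-like" output x1 is measured

      # observer gain by pole placement on the dual (A^T, C^T) system
      L = place_poles(A.T, C.T, [0.5, 0.6, 0.7]).gain_matrix.T

      rng = np.random.default_rng(0)
      x = np.array([0.0, 0.0, 1.0])          # true state; true unmeasured input u = 1
      xhat = np.zeros(3)                     # observer starts with no knowledge
      for _ in range(200):
          y = C @ x + rng.normal(0, 0.01)                # noisy scalar measurement
          xhat = A @ xhat + L @ (y - C @ xhat)           # Luenberger observer update
          x = A @ x                                      # true system evolves

      print("estimated [x1, x2, u]:", np.round(xhat, 3))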

  11. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Science.gov (United States)

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees/acre by species and 2-inch diameter at breast height class, delivered product prices by species and product...

  12. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research

    NARCIS (Netherlands)

    Golino, H.F.; Epskamp, S.

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman’s eigenvalue-greater-than-one rule, multiple average partial procedure (MAP), the maximum-likelihood approaches that use

  13. Organizational Wellness Program Implementation and Evaluation: A Holistic Approach to Improve the Wellbeing of Middle Managers.

    Science.gov (United States)

    Medina, Maria Del Consuelo; Calderon, Angelica; Blunk, Dan I; Mills, Brandy W; Leiner, Marie

    2018-06-01

    Employee wellness programs can provide benefits to institutions as well as employees and their families. Despite the attempts of some organizations to implement programs that take a holistic approach to improve physical, mental, and social wellness, the most common programs are exclusively comprised of physical and nutritional components. In this study, we implemented a wellness program intervention, including training using a holistic approach to improve the wellbeing of middle managers in several multinational organizations. We included control and experimental groups to measure wellness and teamwork with two repeated measures. Our results indicated that employees receiving the intervention had improved measures of wellness and teamwork. A positive relationship was found between wellness and teamwork in the experimental group when compared with the control group. Taken together, the data suggest that implementation of these programs would provide valuable outcomes for both employees and organizations.

  14. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chao [FuZhou University, FuZhou (China); Vu, Quoc Dong; Li, Pu [Ilmenau University of Technology, Ilmenau (Germany)

    2013-02-15

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, the proposed method will be efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  15. A quasi-sequential parameter estimation for nonlinear dynamic systems based on multiple data profiles

    International Nuclear Information System (INIS)

    Zhao, Chao; Vu, Quoc Dong; Li, Pu

    2013-01-01

    A three-stage computation framework for solving parameter estimation problems for dynamic systems with multiple data profiles is developed. The dynamic parameter estimation problem is transformed into a nonlinear programming (NLP) problem by using collocation on finite elements. The model parameters to be estimated are treated in the upper stage by solving an NLP problem. The middle stage consists of multiple NLP problems nested in the upper stage, representing the data reconciliation step for each data profile. We use the quasi-sequential dynamic optimization approach to solve these problems. In the lower stage, the state variables and their gradients are evaluated through integrating the model equations. Since the second-order derivatives are not required in the computation framework, the proposed method will be efficient for solving nonlinear dynamic parameter estimation problems. The computational results obtained on a parameter estimation problem for two CSTR models demonstrate the effectiveness of the proposed approach.

  16. Dynamic Programming Approach for Construction of Association Rule Systems

    KAUST Repository

    Alsolami, Fawaz

    2016-11-18

    In the paper, an application of the dynamic programming approach to optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages, first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the association rule set constructed for each information system, a lower bound on the minimum possible cardinality of the rule set based on information obtained during the algorithm's work, and the corresponding results for rule length.

  17. Dynamic Programming Approach for Construction of Association Rule Systems

    KAUST Repository

    Alsolami, Fawaz; Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2016-01-01

    In the paper, an application of the dynamic programming approach to optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages, first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the association rule set constructed for each information system, a lower bound on the minimum possible cardinality of the rule set based on information obtained during the algorithm's work, and the corresponding results for rule length.

  18. Estimate of the area occupied by reforestation programs in Rio de Janeiro state

    Directory of Open Access Journals (Sweden)

    Hugo Barbosa Amorim

    2012-03-01

    Full Text Available This study was based on a preliminary survey and inventory of existing reforestation programs in Rio de Janeiro state, through geoprocessing techniques and collection of field data. The reforested area was found to occupy 18,426.96 ha, which amounts to 0.42% of the territory of the state. Most of the reforested area consists of eucalyptus (98%), followed by pine plantations (0.8%), and the remainder is distributed among 10 other species. The Médio Paraíba region was found to contribute the most to the reforested area of the state (46.6%). The estimated volume of eucalyptus timber was nearly two million cubic meters. This study helped crystallize the ongoing perception among those working in the forestry sector of Rio de Janeiro state that the planted area and stock of reforestation timber are still incipient in the state.

  19. Estimating Origin-Destination Matrices Using AN Efficient Moth Flame-Based Spatial Clustering Approach

    Science.gov (United States)

    Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali

    2017-09-01

    Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.
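
    A hedged sketch of the aggregation step only, using synthetic stops and trips: stops are grouped spatially (plain k-means stands in here for the moth-flame optimizer) and individual smart-card trips are then rolled up into a zone-to-zone O-D matrix.

      # Sketch of stop clustering plus O-D aggregation on synthetic data; k-means is
      # used here purely as a stand-in for the metaheuristic clustering in the paper.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      n_stops, n_trips, n_zones = 120, 5000, 8

      stops = rng.uniform([0, 0], [20, 20], size=(n_stops, 2))     # stop x/y in km
      zone_of_stop = KMeans(n_clusters=n_zones, n_init=10,
                            random_state=0).fit_predict(stops)

      # each AFC record: (boarding stop, alighting stop)
      origins = rng.integers(0, n_stops, n_trips)
      destinations = rng.integers(0, n_stops, n_trips)

      od = np.zeros((n_zones, n_zones), dtype=int)
      np.add.at(od, (zone_of_stop[origins], zone_of_stop[destinations]), 1)

      print("zone-to-zone O-D matrix:")
      print(od)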

  20. ESTIMATING ORIGIN-DESTINATION MATRICES USING AN EFFICIENT MOTH FLAME-BASED SPATIAL CLUSTERING APPROACH

    Directory of Open Access Journals (Sweden)

    A. A. Heidari

    2017-09-01

    Full Text Available Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, gray wolf optimization algorithm (GWO) and genetic algorithm (GA). The sum of the intra-cluster distances and computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating the transit O-D matrices.

  1. BATEMANATER: a computer program to estimate and bootstrap mating system variables based on Bateman's principles.

    Science.gov (United States)

    Jones, Adam G

    2015-11-01

    Bateman's principles continue to play a major role in the characterization of genetic mating systems in natural populations. The modern manifestations of Bateman's ideas include the opportunity for sexual selection (i.e. I(s) - the variance in relative mating success), the opportunity for selection (i.e. I - the variance in relative reproductive success) and the Bateman gradient (i.e. β(ss) - the slope of the least-squares regression of reproductive success on mating success). These variables serve as the foundation for one convenient approach for the quantification of mating systems. However, their estimation presents at least two challenges, which I address here with a new Windows-based computer software package called BATEMANATER. The first challenge is that confidence intervals for these variables are not easy to calculate. BATEMANATER solves this problem using a bootstrapping approach. The second, more serious, problem is that direct estimates of mating system variables from open populations will typically be biased if some potential progeny or adults are missing from the analysed sample. BATEMANATER addresses this problem using a maximum-likelihood approach to estimate mating system variables from incompletely sampled breeding populations. The current version of BATEMANATER addresses the problem for systems in which progeny can be collected in groups of half- or full-siblings, as would occur when eggs are laid in discrete masses or offspring occur in pregnant females. BATEMANATER has a user-friendly graphical interface and thus represents a new, convenient tool for the characterization and comparison of genetic mating systems. © 2015 John Wiley & Sons Ltd.
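
    The sketch below computes the three quantities named above directly from fully observed, simulated data and bootstraps their confidence intervals. It is only an illustration of the definitions; BATEMANATER's maximum-likelihood handling of incompletely sampled populations is not reproduced.

      # Sketch of the Bateman quantities on fully observed, simulated data with a
      # naive bootstrap; the program's missing-data maximum likelihood is omitted.
      import numpy as np

      rng = np.random.default_rng(7)
      mates = rng.poisson(1.5, 100)                    # mating success per male
      offspring = rng.poisson(3 * mates)               # reproductive success per male

      def bateman_stats(ms, rs):
          I_s = np.var(ms) / np.mean(ms) ** 2          # opportunity for sexual selection
          I = np.var(rs) / np.mean(rs) ** 2            # opportunity for selection
          beta_ss = np.polyfit(ms / np.mean(ms), rs / np.mean(rs), 1)[0]  # Bateman gradient
          return I_s, I, beta_ss

      point = bateman_stats(mates, offspring)

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, len(mates), len(mates))        # resample individuals
          boot.append(bateman_stats(mates[idx], offspring[idx]))
      lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)

      for name, p, l, h in zip(("I_s", "I", "beta_ss"), point, lo, hi):
          print(f"{name} = {p:.3f} (95% CI {l:.3f} to {h:.3f})")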

  2. Scheduling of head-dependent cascaded hydro systems: Mixed-integer quadratic programming approach

    International Nuclear Information System (INIS)

    Catalao, J.P.S.; Pousinho, H.M.I.; Mendes, V.M.F.

    2010-01-01

    This paper is on the problem of short-term hydro scheduling, particularly concerning head-dependent cascaded hydro systems. We propose a novel mixed-integer quadratic programming approach, considering not only head-dependency, but also discontinuous operating regions and discharge ramping constraints. Thus, an enhanced short-term hydro scheduling is provided due to the more realistic modeling presented in this paper. Numerical results from two case studies, based on Portuguese cascaded hydro systems, illustrate the proficiency of the proposed approach.

  3. ONE OF APPROACHES TO THE ESTIMATION OF FIRMNESS OF TRAFFIC CONTROL SYSTEMS OF MOTOR TRANSPORT

    Directory of Open Access Journals (Sweden)

    D. Labenko

    2009-01-01

    Full Text Available The control system of locomotive objects is considered and described. An approach to estimating the basic index of such control systems, namely the probability that the system functions with the specified quality under various influences on its elements, is offered.

  4. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. What is more, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is considered the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both LiFePO4 and LiMn2O4 battery datasets.
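
    As an illustration of the model-selection step only, the sketch below fits open-circuit-voltage models built from one to four Gaussian terms to synthetic data and uses the least-squares form of the AIC, n*ln(RSS/n) + 2k, to pick the order. The model structure, data, and starting values are assumptions and do not reproduce the paper's combined battery model or its genetic-algorithm identification.

      # Sketch of AIC-based order selection only: OCV(SoC) models built from 1..4
      # Gaussian terms are fitted to synthetic data; AIC = n*ln(RSS/n) + 2k picks one.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_sum(soc, *p):
          """Offset plus a sum of Gaussian terms; p = [c, a1, m1, s1, a2, m2, s2, ...]."""
          out = np.full_like(soc, p[0])
          for a, m, s in zip(p[1::3], p[2::3], p[3::3]):
              out += a * np.exp(-((soc - m) / s) ** 2)
          return out

      rng = np.random.default_rng(5)
      soc = np.linspace(0.05, 1.0, 80)
      ocv = 3.2 + 0.7 * soc + 0.1 * np.exp(-((soc - 0.2) / 0.1) ** 2) + rng.normal(0, 0.005, soc.size)

      best = None
      for n_terms in range(1, 5):
          p0 = [3.5]
          for j in range(n_terms):                       # spread the initial centres
              p0 += [0.2, 0.2 + 0.6 * j / max(n_terms - 1, 1), 0.15]
          try:
              p, _ = curve_fit(gaussian_sum, soc, ocv, p0=p0, maxfev=50000)
          except RuntimeError:
              continue                                   # skip orders that fail to converge
          rss = np.sum((ocv - gaussian_sum(soc, *p)) ** 2)
          aic = soc.size * np.log(rss / soc.size) + 2 * len(p)
          print(f"{n_terms} Gaussian term(s): AIC = {aic:.1f}")
          if best is None or aic < best[0]:
              best = (aic, n_terms)
      print("selected number of Gaussian terms:", best[1] if best else "none converged")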

  5. Using Multiple and Logistic Regression to Estimate the Median Will-Cost and Probability of Cost and Schedule Overrun for Program Managers

    Science.gov (United States)

    2017-03-23

    Logistic Regression to Estimate the Median Will-Cost and Probability of Cost and Schedule Overrun for Program Managers Ryan C. Trudelle, B.S...not the other. We are able to give logistic regression models to program managers that identify several program characteristics for either...considered acceptable. We recommend the use of our logistic models as a tool to manage a portfolio of programs in order to gain potential elusive
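
    A hedged sketch of the general technique named in the title: a logistic regression relating program characteristics to the probability of an overrun, fitted to invented data. The predictors, coefficients, and data below are illustrative assumptions and are not the study's variables or results.

      # Hedged sketch: logistic regression for the probability of a schedule overrun
      # as a function of invented program characteristics (not the study's variables).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n = 300
      baseline_cost = rng.lognormal(4, 1, n)            # $M, invented
      duration_yrs = rng.uniform(2, 12, n)
      new_technology = rng.integers(0, 2, n)            # 1 = relies on unproven technology

      # simulate overruns that get more likely for long, technology-heavy programs
      logit = -3 + 0.25 * duration_yrs + 1.2 * new_technology
      overrun = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      X = sm.add_constant(np.column_stack([baseline_cost, duration_yrs, new_technology]))
      fit = sm.Logit(overrun, X).fit(disp=0)
      print(fit.params)                                 # [const, cost, duration, technology]

      # predicted overrun probability for a hypothetical 10-year, new-technology program
      x_new = sm.add_constant(np.array([[60.0, 10.0, 1.0]]), has_constant='add')
      print("P(overrun) =", float(fit.predict(x_new)[0]))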

  6. Sensitivity of Hurst parameter estimation to periodic signals in time series and filtering approaches

    Science.gov (United States)

    Marković, D.; Koch, M.

    2005-09-01

    The influence of periodic signals in time series on the Hurst parameter estimate is investigated with temporal, spectral and time-scale methods. The Hurst parameter estimates of the simulated periodic time series with a white noise background show a high sensitivity to the signal-to-noise ratio and, for some methods, also to the data length used. The analysis is then carried over to the investigation of extreme monthly river flows of the Elbe River (Dresden) and of the Rhine River (Kaub). Effects of removing the periodic components employing different filtering approaches are discussed and it is shown that such procedures are a prerequisite for an unbiased estimation of H. In summary, our results imply that the first step in a time series long-range correlation study should be the separation of the deterministic components from the stochastic ones. Otherwise wrong conclusions concerning possible memory effects may be drawn.
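
    The sketch below illustrates the central point with the aggregated-variance estimator on synthetic monthly data: a white-noise series (true H = 0.5) plus an annual cycle is analysed before and after subtracting monthly means, so the effect of the periodic component on the H estimate can be seen directly. The paper's temporal, spectral, and wavelet estimators and the river-flow data are not reproduced.

      # Aggregated-variance Hurst estimate on synthetic monthly data, before and
      # after removing an annual cycle; the paper's estimators/data are not reproduced.
      import numpy as np

      def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64, 128)):
          """Slope of log Var(block means) vs log(block size) equals 2H - 2."""
          logm, logv = [], []
          for m in block_sizes:
              nb = len(x) // m
              means = x[:nb * m].reshape(nb, m).mean(axis=1)
              logm.append(np.log(m))
              logv.append(np.log(means.var()))
          return 1 + np.polyfit(logm, logv, 1)[0] / 2

      rng = np.random.default_rng(2)
      months = np.arange(1200)                               # 100 years of monthly values
      flow = np.sin(2 * np.pi * months / 12) + rng.normal(0, 1.0, months.size)   # cycle + white noise

      monthly_mean = np.array([flow[months % 12 == m].mean() for m in range(12)])
      deseasonalised = flow - monthly_mean[months % 12]

      print("H with the periodic signal left in :", round(hurst_aggvar(flow), 2))
      print("H after subtracting monthly means  :", round(hurst_aggvar(deseasonalised), 2))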

  7. Toward a global space exploration program: A stepping stone approach

    Science.gov (United States)

    Ehrenfreund, Pascale; McKay, Chris; Rummel, John D.; Foing, Bernard H.; Neal, Clive R.; Masson-Zwaan, Tanja; Ansdell, Megan; Peter, Nicolas; Zarnecki, John; Mackwell, Steve; Perino, Maria Antionetta; Billings, Linda; Mankins, John; Race, Margaret

    2012-01-01

    In response to the growing importance of space exploration in future planning, the Committee on Space Research (COSPAR) Panel on Exploration (PEX) was chartered to provide independent scientific advice to support the development of exploration programs and to safeguard the potential scientific assets of solar system objects. In this report, PEX elaborates a stepwise approach to achieve a new level of space cooperation that can help develop world-wide capabilities in space science and exploration and support a transition that will lead to a global space exploration program. The proposed stepping stones are intended to transcend cross-cultural barriers, leading to the development of technical interfaces and shared legal frameworks and fostering coordination and cooperation on a broad front. Input for this report was drawn from expertise provided by COSPAR Associates within the international community and via the contacts they maintain in various scientific entities. The report provides a summary and synthesis of science roadmaps and recommendations for planetary exploration produced by many national and international working groups, aiming to encourage and exploit synergies among similar programs. While science and technology represent the core and, often, the drivers for space exploration, several other disciplines and their stakeholders (Earth science, space law, and others) should be more robustly interlinked and involved than they have been to date. The report argues that a shared vision is crucial to this linkage, and to providing a direction that enables new countries and stakeholders to join and engage in the overall space exploration effort. Building a basic space technology capacity within a wider range of countries, ensuring new actors in space act responsibly, and increasing public awareness and engagement are concrete steps that can provide a broader interest in space exploration, worldwide, and build a solid basis for program sustainability. By engaging

  8. A Qualitative Approach to Examining Knowledge Sharing in Iran Tax Administration Reform Program

    Directory of Open Access Journals (Sweden)

    Mehdi Shami Zanjanie

    2012-02-01

    Full Text Available The paper aims to examine the knowledge sharing infrastructure of the "Iran Tax Administration Reform Program". A qualitative approach using the case study method was applied in this research. In order to meet the research goal, four infrastructural dimensions of knowledge sharing were studied: leadership & strategy, culture, structure, and information technology. To the authors' knowledge, this may be the first paper to examine knowledge sharing infrastructure in a program environment.

  9. Chemical Exposure Assessment Program at Los Alamos National Laboratory: A risk based approach

    International Nuclear Information System (INIS)

    Stephenson, D.J.

    1996-01-01

    The University of California Contract and DOE Order 5480.10 require that Los Alamos National Laboratory (LANL) perform health hazard assessments/inventories of all employee workplaces. In response, LANL has developed the Chemical Exposure Assessment Program. This program provides a systematic risk-based approach to anticipation, recognition, evaluation and control of chemical workplace exposures. Program implementation focuses resources on exposures with the highest risks for causing adverse health effects. Implementation guidance includes procedures for basic characterization, qualitative risk assessment, quantitative validation, and recommendations and reevaluation. Each component of the program is described. It is shown how a systematic method of assessment improves documentation, retrieval, and use of generated exposure information.

  10. Estimating the financial resources needed for local public health departments in Minnesota: a multimethod approach.

    Science.gov (United States)

    Riley, William; Briggs, Jill; McCullough, Mac

    2011-01-01

    This study presents a model for determining total funding needed for individual local health departments. The aim is to determine the financial resources needed to provide services for statewide local public health departments in Minnesota based on a gaps analysis done to estimate the funding needs. We used a multimethod analysis consisting of 3 approaches to estimate gaps in local public health funding consisting of (1) interviews of selected local public health leaders, (2) a Delphi panel, and (3) a Nominal Group Technique. On the basis of these 3 approaches, a consensus estimate of funding gaps was generated for statewide projections. The study includes an analysis of cost, performance, and outcomes from 2005 to 2007 for all 87 local governmental health departments in Minnesota. For each of the methods, we selected a panel to represent a profile of Minnesota health departments. The 2 main outcome measures were local-level gaps in financial resources and total resources needed to provide public health services at the local level. The total public health expenditure in Minnesota for local governmental public health departments was $302 million in 2007 ($58.92 per person). The consensus estimate of the financial gaps in local public health departments indicates that an additional $32.5 million (a 10.7% increase or $6.32 per person) is needed to adequately serve public health needs in the local communities. It is possible to make informed estimates of funding gaps for public health activities on the basis of a combination of quantitative methods. There is a wide variation in public health expenditure at the local levels, and methods are needed to establish minimum baseline expenditure levels to adequately treat a population. The gaps analysis can be used by stakeholders to inform policy makers of the need for improved funding of the public health system.

  11. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    Science.gov (United States)

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake of 14 April 2010 have highlighted the importance of rapidly estimating casualties after an event for humanitarian response. Both events resulted in surprisingly high numbers of deaths, injuries, and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished, with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths, a further 11,000 people with serious or moderate injuries, and 100,000 people left homeless in this mountainous region of China. In such events, relief efforts can benefit significantly from the rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the empirical damage and casualty data assembled in the Cambridge Earthquake Impacts Database (CEQID) and explores the data within and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level, and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for the different classes of buildings present in the local building stock and then relates fatality rates to the damage rates of each building class. This approach accounts for the effect on casualties of the very different types of buildings (by climatic zone, urban or rural location, culture, income level, etc.). The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.
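
    The semi-empirical logic described above (damage rates per building class at a given intensity, then fatality rates conditional on damage) can be sketched as follows. All class names, rates, and occupancy values are illustrative assumptions, not parameters of the CEQID-based model.

```python
# Minimal sketch of a semi-empirical casualty estimate: expected deaths are
# summed over building classes as
#   buildings x P(collapse | class, intensity) x occupants x P(death | collapse).
# All numbers below are illustrative assumptions.

building_stock = {                  # buildings of each class exposed to shaking
    "adobe": 10_000,
    "unreinforced_masonry": 25_000,
    "reinforced_concrete": 5_000,
}
collapse_rate = {                   # assumed P(collapse) at the given intensity
    "adobe": 0.30,
    "unreinforced_masonry": 0.15,
    "reinforced_concrete": 0.03,
}
fatality_rate_given_collapse = {    # assumed P(death) per occupant of a collapsed building
    "adobe": 0.10,
    "unreinforced_masonry": 0.15,
    "reinforced_concrete": 0.25,
}
occupants_per_building = 4.0        # assumed average occupancy at the time of shaking

expected_deaths = sum(
    n * collapse_rate[c] * occupants_per_building * fatality_rate_given_collapse[c]
    for c, n in building_stock.items()
)
print(f"expected fatalities ~ {expected_deaths:,.0f}")
```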

  12. Estimation of macroscopic elastic characteristics for hierarchical anisotropic solids based on probabilistic approach

    Science.gov (United States)

    Smolina, Irina Yu.

    2015-10-01

    The mechanical properties of a cable are of great importance in the design and strength calculation of flexible cables. The problem of determining the elastic properties and rigidity characteristics of a cable modeled as an anisotropic helical elastic rod is considered. These characteristics are calculated indirectly from parameters obtained by statistical processing of experimental data, with the parameters treated as random quantities. Taking the probabilistic nature of these parameters into account, formulas for estimating the macroscopic elastic moduli of a cable are obtained. Expressions for calculating the macroscopic flexural, shear, and torsional rigidities from the macroscopic elastic characteristics obtained earlier are presented. Statistical estimates of the rigidity characteristics of several cable grades are given, together with a comparison against the characteristics obtained using a deterministic approach.
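
    The probabilistic idea (propagating experimentally identified random parameters into a rigidity characteristic and comparing with the deterministic value) can be sketched with a Monte Carlo computation. The flexural rigidity EI of a solid circular section used here is a textbook stand-in for the paper's anisotropic helical-rod model, and the parameter statistics are assumed.

```python
# Monte Carlo sketch: propagate random stiffness parameters into a rigidity
# estimate. EI of a solid circular section is a stand-in model; the means and
# standard deviations are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

E = rng.normal(loc=110e9, scale=8e9, size=n)     # effective Young's modulus, Pa
d = rng.normal(loc=12e-3, scale=0.2e-3, size=n)  # effective diameter, m

I = np.pi * d**4 / 64                            # second moment of area, m^4
EI = E * I                                       # flexural rigidity, N*m^2

print(f"mean flexural rigidity  : {EI.mean():.1f} N*m^2")
print(f"std of flexural rigidity: {EI.std():.1f} N*m^2")
print(f"deterministic (at means): {110e9 * np.pi * (12e-3)**4 / 64:.1f} N*m^2")
```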

  13. The Impact of Different Teaching Approaches and Languages on Student Learning of Introductory Programming Concepts

    Science.gov (United States)

    Kunkle, Wanda M.

    2010-01-01

    Many students experience difficulties learning to program. They find learning to program in the object-oriented paradigm particularly challenging. As a result, computing educators have tried a variety of instructional methods to assist beginning programmers. These include developing approaches geared specifically toward novices and experimenting…

  14. A Bayesian approach to estimating variance components within a multivariate generalizability theory framework.

    Science.gov (United States)

    Jiang, Zhehan; Skorupski, William

    2017-12-12

    In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation, which relies on frequentist approaches, has limits that prevent researchers from taking full advantage of the information mG theory can offer about the reliability of measurements. Bayesian methods, by contrast, provide more information than frequentist approaches can. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.
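
    The article supplies BUGS code; as a rough analogue, the same kind of single-facet (persons crossed with items) variance-components model can be sketched in a general-purpose probabilistic programming library. The sketch below uses PyMC with simulated data and weakly informative priors, all of which are assumptions rather than the article's own specification.

```python
# Bayesian variance-components sketch for a single-facet (p x i) G-study,
# written with PyMC on simulated data; priors and data are illustrative.
import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_p, n_i = 100, 8                                    # persons, items
person_true = rng.normal(0.0, 1.0, n_p)              # universe scores (SD = 1.0)
item_true = rng.normal(0.0, 0.3, n_i)                # item effects (SD = 0.3)
y = 5.0 + person_true[:, None] + item_true[None, :] + rng.normal(0.0, 0.7, (n_p, n_i))

p_idx = np.repeat(np.arange(n_p), n_i)
i_idx = np.tile(np.arange(n_i), n_p)
obs = y.ravel()

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma_p = pm.HalfNormal("sigma_p", 2.0)          # person (universe-score) SD
    sigma_i = pm.HalfNormal("sigma_i", 2.0)          # item SD
    sigma_e = pm.HalfNormal("sigma_e", 2.0)          # residual (p x i, e) SD
    person = pm.Normal("person", 0.0, sigma=sigma_p, shape=n_p)
    item = pm.Normal("item", 0.0, sigma=sigma_i, shape=n_i)
    pm.Normal("y", mu=mu + person[p_idx] + item[i_idx], sigma=sigma_e, observed=obs)

    # Relative generalizability coefficient for a mean score over n_i items
    pm.Deterministic("E_rho2", sigma_p**2 / (sigma_p**2 + sigma_e**2 / n_i))

    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(float(idata.posterior["E_rho2"].mean()))       # posterior mean reliability
```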

  15. Spatial Programming for Industrial Robots through Task Demonstration

    Directory of Open Access Journals (Sweden)

    Jens Lambrecht

    2013-05-01

    Full Text Available We present an intuitive system for programming industrial robots using markerless gesture recognition and mobile augmented reality, in terms of programming by demonstration. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects using the handheld's 2D camera. The programmer is thus able to define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
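
    One way to picture the hand-off from demonstration to execution is as a list of object poses captured from the handheld's pose estimation, serialized for the robot controller. The data structure and JSON payload below are hypothetical illustrations; the cited system's unified interface is not specified in the record.

```python
# Hypothetical representation of a demonstrated pick-and-place sequence;
# the Pose/Task layout and the JSON payload are illustrative only.
import json
from dataclasses import dataclass, asdict

@dataclass
class Pose:
    x: float          # position, metres
    y: float
    z: float
    rx: float         # orientation, axis-angle components (rad)
    ry: float
    rz: float

@dataclass
class PickPlaceTask:
    object_id: str
    pick: Pose
    place: Pose

# Poses as they might come from object pose estimation on the handheld camera
program = [
    PickPlaceTask("gear_small",
                  Pose(0.40, 0.10, 0.02, 0.0, 0.0, 1.57),
                  Pose(0.55, -0.05, 0.04, 0.0, 0.0, 0.0)),
    PickPlaceTask("gear_large",
                  Pose(0.42, 0.22, 0.03, 0.0, 0.0, 0.78),
                  Pose(0.55, -0.05, 0.08, 0.0, 0.0, 0.0)),
]

payload = json.dumps([asdict(t) for t in program], indent=2)
print(payload)    # this payload would be transmitted to the robot controller
```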

  16. Estimating the cost of improving quality in electricity distribution: A parametric distance function approach

    International Nuclear Information System (INIS)

    Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana

    2013-01-01

    The quality of electricity distribution is being increasingly scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is therefore of prime importance to know the cost, for a distribution system operator, of improving quality. In this paper, we focus on one dimension of quality, the continuity of supply, and estimate the cost of preventing power outages. To do so, we use the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is 10.7€ for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to become more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.
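
    As a generic illustration of how an implicit outage cost can be read off an estimated distance function (not the authors' exact specification), the shadow-price ratio between the outage "input" and a conventional input with an observed price can be written as follows.

```latex
% Generic sketch: translog input distance function with interruptions x_N as an
% input, and the shadow cost of one interruption relative to an input k with
% observed price w_k. Notation is illustrative, not the paper's specification.
\begin{align*}
  \ln D_I &= \alpha_0 + \sum_k \alpha_k \ln x_k
             + \tfrac{1}{2}\sum_k \sum_l \alpha_{kl}\,\ln x_k \ln x_l
             + \text{(output and cross terms)},\\
  \frac{w_N}{w_k} &= \frac{\partial D_I/\partial x_N}{\partial D_I/\partial x_k}
                   = \frac{\varepsilon_N / x_N}{\varepsilon_k / x_k},
  \qquad \varepsilon_j \equiv \frac{\partial \ln D_I}{\partial \ln x_j},
\end{align*}
% so the implicit cost of one additional interruption follows from the
% estimated elasticities and the observed price of the conventional input.
```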

  17. Scheduling of head-dependent cascaded hydro systems: Mixed-integer quadratic programming approach

    Energy Technology Data Exchange (ETDEWEB)

    Catalao, J.P.S.; Pousinho, H.M.I. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Mendes, V.M.F. [Department of Electrical Engineering and Automation, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emidio Navarro, 1950-062 Lisbon (Portugal)

    2010-03-15

    This paper addresses the problem of short-term hydro scheduling, particularly for head-dependent cascaded hydro systems. We propose a novel mixed-integer quadratic programming approach that considers not only head-dependency, but also discontinuous operating regions and discharge ramping constraints. The more realistic modeling presented in this paper thus provides enhanced short-term hydro scheduling. Numerical results from two case studies, based on Portuguese cascaded hydro systems, illustrate the proficiency of the proposed approach. (author)
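
    A much-simplified single-reservoir version of such a formulation (notation and simplifications are assumptions, not the authors' model) shows where the quadratic terms and the binaries come from: power is the product of discharge and a head term that is affine in storage, and a binary variable switches the discontinuous operating region on or off.

```latex
% Illustrative, simplified head-dependent hydro scheduling model (one
% reservoir); all notation and simplifications are assumptions.
\begin{align*}
  \max_{q,\,v,\,u}\;& \sum_{t=1}^{T} \pi_t\,\eta\,(h_0 + \beta v_t)\,q_t
    && \text{(quadratic: head depends on storage)}\\
  \text{s.t.}\;& v_t = v_{t-1} + a_t - q_t - s_t,
    && \text{(water balance)}\\
  & \underline{q}\,u_t \le q_t \le \overline{q}\,u_t,\quad u_t \in \{0,1\},
    && \text{(discontinuous operating region)}\\
  & |q_t - q_{t-1}| \le \Delta q,\qquad \underline{v} \le v_t \le \overline{v},
    && \text{(discharge ramping and storage bounds)}
\end{align*}
% \pi_t: energy price; a_t: inflow; s_t: spillage; \eta: efficiency factor.
```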

  18. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    Science.gov (United States)

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign in industry, the first step is to systematically estimate the industrial accident rate and the zero accident time. This paper considers the social and technical changes in the business environment after the start of the zero accident campaign through quantitative time-series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time, and the achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software from Visual Studio 2008 was used to develop the zero accident program. The results of this paper provide key information for industrial accident prevention and are an important part of stimulating the zero accident campaign within all industrial environments.
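
    As an illustration of one of the listed methods, the sketch below applies double exponential smoothing (Holt's method) to an annual accident-rate series and naively extrapolates the trend to a near-zero target rate. The rates, smoothing constants, and target are assumed values, not data from the study.

```python
# Double exponential smoothing (Holt's method) of an accident-rate series and
# a naive extrapolation to a near-zero target; all numbers are illustrative.
rates = [0.95, 0.90, 0.88, 0.83, 0.80, 0.76, 0.72]   # accidents per 100 workers, by year
alpha, beta = 0.5, 0.3                               # assumed smoothing constants

level, trend = rates[0], rates[1] - rates[0]
for y in rates[1:]:
    prev_level = level
    level = alpha * y + (1 - alpha) * (level + trend)
    trend = beta * (level - prev_level) + (1 - beta) * trend

target = 0.05                                        # assumed "near zero" accident rate
years_ahead, forecast = 0, level
while forecast > target and trend < 0 and years_ahead < 100:
    years_ahead += 1
    forecast = level + years_ahead * trend

print(f"smoothed rate: {level:.2f}, trend: {trend:+.3f} per year")
if forecast <= target:
    print(f"target of {target} reached in about {years_ahead} years")
else:
    print("target not reached within 100 years on the current trend")
```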

  19. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    Science.gov (United States)

    Daniel J. Hayes; David P. Turner; Graham Stinson; A. David Mcguire; Yaxing Wei; Tristram O. West; Linda S. Heath; Bernardus Dejong; Brian G. McConkey; Richard A. Birdsey; Werner A. Kurz; Andrew R. Jacobson; Deborah N. Huntzinger; Yude Pan; W. Mac Post; Robert B. Cook

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000-2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2,...

  20. A new approach on seismic mortality estimations based on average population density

    Science.gov (United States)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

    This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established an association between mortality estimation and seismic intensity without considering population density. In China, however, such data are not always available, especially in the very urgent relief situation following a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze earthquake death tolls. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is then employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and it opens the door for conducting final death-toll forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.
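
    The case-based reasoning step can be pictured as retrieving the historical events most similar to the new one (here, by magnitude and average population density) and combining their death tolls. The historical cases, feature scaling, and weights below are illustrative placeholders, not the 18-case dataset or the similarity measure used in the paper.

```python
# Minimal case-based reasoning sketch: similarity-weighted average of the k
# most similar historical cases; all data and weights are illustrative.
import math

# (magnitude, average population density in people/km^2, deaths)
historical_cases = [
    (7.1, 120.0, 2500),
    (6.8,  60.0,  400),
    (7.6, 300.0, 20000),
    (6.5,  25.0,   90),
    (7.0, 180.0, 4200),
]

def estimate_deaths(magnitude, density, cases, k=3):
    """Similarity-weighted average of the k most similar historical cases."""
    def distance(case):
        m, d, _ = case
        # scale the two features to comparable ranges (assumed scaling)
        return math.hypot(magnitude - m, (density - d) / 100.0)

    nearest = sorted(cases, key=distance)[:k]
    weights = [1.0 / (distance(c) + 1e-6) for c in nearest]
    return sum(w * c[2] for w, c in zip(weights, nearest)) / sum(weights)

print(f"estimated deaths: {estimate_deaths(7.0, 150.0, historical_cases):,.0f}")
```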