WorldWideScience

Sample records for programs estimation approach

  1. Value drivers: an approach for estimating health and disease management program savings.

    Science.gov (United States)

    Phillips, V L; Becker, Edmund R; Howard, David H

    2013-12-01

    Health and disease management (HDM) programs have faced challenges in documenting savings related to their implementation. The objective of this study was to describe OptumHealth's (Optum) methods for estimating anticipated savings from HDM programs using Value Drivers. Optum's general methodology was reviewed, along with details of 5 high-use Value Drivers. The results showed that the Value Driver approach offers an innovative method for estimating savings associated with HDM programs. The authors demonstrated how real-time savings can be estimated for 5 Value Drivers commonly used in HDM programs: (1) use of beta-blockers in treatment of heart disease, (2) discharge planning for high-risk patients, (3) decision support related to chronic low back pain, (4) obesity management, and (5) securing transportation for primary care. The validity of savings estimates is dependent on the type of evidence used to gauge the intervention effect, generating changes in utilization and, ultimately, costs. The savings estimates derived from the Value Driver method are generally reasonable to conservative and provide a valuable framework for estimating financial impacts from evidence-based interventions.

  2. A dynamic programming approach to missing data estimation using neural networks

    CSIR Research Space (South Africa)

    Nelwamondo, FV

    2013-01-01

    This paper presents a dynamic programming approach to missing data estimation using neural networks and compares it with a method where dynamic programming is not used. The paper also suggests a different way of formulating the missing data problem such that dynamic programming becomes applicable to estimating the missing data....

  3. Algorithms and programs of dynamic mixture estimation unified approach to different types of components

    CERN Document Server

    Nagy, Ivan

    2017-01-01

    This book provides a general theoretical background for constructing the recursive Bayesian estimation algorithms for mixture models. It collects the recursive algorithms for estimating dynamic mixtures of various distributions and brings them in the unified form, providing a scheme for constructing the estimation algorithm for a mixture of components modeled by distributions with reproducible statistics. It offers the recursive estimation of dynamic mixtures, which are free of iterative processes and close to analytical solutions as much as possible. In addition, these methods can be used online and simultaneously perform learning, which improves their efficiency during estimation. The book includes detailed program codes for solving the presented theoretical tasks. Codes are implemented in the open source platform for engineering computations. The program codes given serve to illustrate the theory and demonstrate the work of the included algorithms.

  4. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they offer higher spatial resolution than existing methods based on the ML criterion.
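
    For the single-source case, the deterministic ML criterion reduces to maximizing the beamformer output a(theta)^H R a(theta) over the candidate angle, which is the quantity the SOS/SDP formulation above reformulates. The sketch below is a plain grid-search baseline for that criterion (NumPy only, not the authors' SOS/SDP solver); the array size, element spacing, SNR, and snapshot count are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      # Illustrative setup (array size, spacing, SNR are assumptions, not from the paper)
      M, N = 8, 200            # sensors, snapshots
      d = 0.5                  # element spacing in wavelengths
      theta_true = np.deg2rad(12.0)
      snr_db = 0.0

      def steering(theta):
          # ULA steering vector for direction theta (radians from broadside)
          return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

      # Simulate one narrowband source in white noise
      s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
      noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
      X = np.outer(steering(theta_true), s) * 10 ** (snr_db / 20) + noise

      R = X @ X.conj().T / N   # sample covariance

      # Single-source ML: maximize a(theta)^H R a(theta) over a grid of angles
      grid = np.deg2rad(np.linspace(-90, 90, 3601))
      cost = [np.real(steering(t).conj() @ R @ steering(t)) for t in grid]
      theta_hat = grid[int(np.argmax(cost))]
      print(f"true DOA: {np.rad2deg(theta_true):.2f} deg, estimate: {np.rad2deg(theta_hat):.2f} deg")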

  5. A dynamic programming approach for quickly estimating large network-based MEV models

    DEFF Research Database (Denmark)

    Mai, Tien; Frejinger, Emma; Fosgerau, Mogens

    2017-01-01

    We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach allows us to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....

  6. A conceptual approach to the estimation of societal willingness-to-pay for nuclear safety programs

    International Nuclear Information System (INIS)

    Pandey, M.D.; Nathwani, J.S.

    2003-01-01

    The design, refurbishment and future decommissioning of nuclear reactors are crucially concerned with reducing the risk of radiation exposure that can result in adverse health effects and potential loss of life. To address this concern, large financial investments have been made to ensure safety of operating nuclear power plants worldwide. The efficacy of the expenditures incurred to provide safety must be judged against the safety benefit to be gained from such investments. We have developed an approach that provides a defendable basis for making that judgement. If the costs of risk reduction are disproportionate to the safety benefits derived, then the expenditures are not optimal; in essence the societal resources are being diverted away from other critical areas such as health care, education and social services that also enhance the quality of life. Thus, the allocation of society's resources devoted to nuclear safety must be continually appraised in light of competing needs, because there is a limit on the resources that any society can devote to extend life. The purpose of the paper is to present a simple and methodical approach to assessing the benefits of nuclear safety programs and regulations. The paper presents the Life-Quality Index (LQI) as a tool for the assessment of risk reduction initiatives that would support the public interest and enhance both safety and the quality of life. The LQI is formulated as a utility function consistent with the principles of rational decision analysis. The LQI is applied to quantify the societal willingness-to-pay (SWTP) for safety measures enacted to reduce of the risk of potential exposures to ionising radiation. The proposed approach provides essential support to help improve the cost-benefit analysis of engineering safety programs and safety regulations.

  7. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source.
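
    The Pareto-dominance test that underlies this kind of ensemble selection is easy to state in code. The sketch below is a generic non-dominated filter over a matrix of objective values (all objectives minimized); it is not the JuPOETs implementation, which is written in Julia and couples such a test with simulated annealing, and the toy objective values are invented.

      import numpy as np

      def pareto_front(F):
          """Return a boolean mask of the non-dominated rows of F.
          F has shape (n_candidates, n_objectives); all objectives are minimized."""
          F = np.asarray(F, dtype=float)
          n = F.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              if not keep[i]:
                  continue
              # j dominates i if j is <= i in every objective and < in at least one
              dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
              if dominated.any():
                  keep[i] = False
          return keep

      # Toy example: two conflicting training objectives for five parameter sets
      F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [5.0, 5.0]])
      print(pareto_front(F))   # -> [ True  True  True False False]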

  8. Review of Evaluation, Measurement and Verification Approaches Used to Estimate the Load Impacts and Effectiveness of Energy Efficiency Programs

    Energy Technology Data Exchange (ETDEWEB)

    Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill; Goldman, Charles A.; Schiller, Steven R.

    2010-04-14

    Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 and 12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58% - 0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy

  9. A linear programming approach for estimating the structure of a sparse linear genetic network from transcript profiling data

    Directory of Open Access Journals (Sweden)

    Chandra Nagasuma R

    2009-02-01

    Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained) fitting problem and finally solved by formulating a Linear Program (LP). A bound on the Generalization Error of this approach is given in terms of the Leave-One-Out Error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known
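
    A common practical stand-in for the per-gene sparse regression described above is to regress each gene on all the others with an l1 penalty and read edges off the nonzero coefficients. The sketch below uses scikit-learn's coordinate-descent Lasso rather than the paper's LP formulation of the l1-constrained fit, and the transcript profiles are synthetic; it only illustrates the network-recovery recipe, not the LP-SLGN algorithm itself.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n_samples, n_genes = 60, 30

      # Synthetic transcript profiles generated from a sparse linear network (illustration only)
      W_true = np.zeros((n_genes, n_genes))
      for j in range(n_genes):
          parents = rng.choice([k for k in range(n_genes) if k != j], size=2, replace=False)
          W_true[parents, j] = rng.normal(0, 1, size=2)
      X = rng.normal(size=(n_samples, n_genes))
      X = X + 0.5 * X @ W_true          # crude correlation structure

      # Regress each gene on all others with an l1 penalty; nonzero weights define edges
      edges = set()
      for j in range(n_genes):
          others = [k for k in range(n_genes) if k != j]
          model = Lasso(alpha=0.1).fit(X[:, others], X[:, j])
          for k, coef in zip(others, model.coef_):
              if abs(coef) > 1e-6:
                  edges.add(tuple(sorted((j, k))))   # undirected edge
      print(f"recovered {len(edges)} undirected edges among {n_genes} genes")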

  10. Gene expression programming approach for the estimation of moisture ratio in herbal plants drying with vacuum heat pump dryer

    Science.gov (United States)

    Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan

    2017-07-01

    The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures such as mean absolute percentage error, root-mean-squared error and R-square are used to quantify the difference between values predicted by the GEP model and the values actually observed in the experimental study. It was found that the results of the GEP model and the experimental study agree moderately well. The results show that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
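
    The GEP model itself is not reproduced here, but the kind of conventional baseline such machine-learned moisture-ratio models are judged against can be sketched with a thin-layer drying curve. Below, a Page model MR = exp(-k*t^n) is fitted with scipy.optimize.curve_fit and scored with the same error measures the abstract cites (MAPE, RMSE, R-square); the drying data points are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical moisture-ratio observations over drying time (minutes); not measured data
      t = np.array([0, 30, 60, 90, 120, 150, 180, 240], dtype=float)
      mr = np.array([1.00, 0.82, 0.65, 0.52, 0.41, 0.33, 0.26, 0.17])

      def page(t, k, n):
          # Page thin-layer drying model: MR = exp(-k * t**n)
          return np.exp(-k * t ** n)

      popt, _ = curve_fit(page, t, mr, p0=[0.01, 1.0], bounds=([1e-6, 0.1], [1.0, 3.0]))
      pred = page(t, *popt)

      mape = np.mean(np.abs((mr - pred) / mr)) * 100
      rmse = np.sqrt(np.mean((mr - pred) ** 2))
      r2 = 1 - np.sum((mr - pred) ** 2) / np.sum((mr - mr.mean()) ** 2)
      print(f"k={popt[0]:.4f}, n={popt[1]:.3f}, MAPE={mape:.1f}%, RMSE={rmse:.4f}, R2={r2:.4f}")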

  11. Sampling-based approaches to improve estimation of mortality among patient dropouts: experience from a large PEPFAR-funded program in Western Kenya.

    Directory of Open Access Journals (Sweden)

    Constantin T Yiannoutsos

    Monitoring and evaluation (M&E) of HIV care and treatment programs is impacted by losses to follow-up (LTFU) in the patient population. The severity of this effect is undeniable but its extent unknown. Tracing all lost patients addresses this, but census methods are not feasible in programs involving rapid scale-up of HIV treatment in the developing world. Sampling-based approaches and statistical adjustment are the only scalable methods permitting accurate estimation of M&E indices. In a large antiretroviral therapy (ART) program in western Kenya, we assessed the impact of LTFU on estimating patient mortality among 8,977 adult clients of whom 3,624 were LTFU. Overall, dropouts were more likely male (36.8% versus 33.7%; p = 0.003), and younger than non-dropouts (35.3 versus 35.7 years old; p = 0.020), with lower median CD4 count at enrollment (160 versus 189 cells/ml; p<0.001) and WHO stage 3-4 disease (47.5% versus 41.1%; p<0.001). Urban clinic clients were 75.0% of non-dropouts but 70.3% of dropouts (p<0.001). Of the 3,624 dropouts, 1,143 were sought and 621 had their vital status ascertained. Statistical techniques were used to adjust mortality estimates based on information obtained from located LTFU patients. Observed mortality estimates one year after enrollment were 1.7% (95% CI 1.3%-2.0%), revised to 2.8% (2.3%-3.1%) when deaths discovered through outreach were added, and adjusted to 9.2% (7.8%-10.6%) and 9.9% (8.4%-11.5%) through statistical modeling, depending on the method used. The estimates 12 months after ART initiation were 1.7% (1.3%-2.2%), 3.4% (2.9%-4.0%), 10.5% (8.7%-12.3%) and 10.7% (8.9%-12.6%), respectively. Conclusions/Significance: Assessment of the impact of LTFU is critical in program M&E, as estimated mortality based on passive monitoring may underestimate true mortality by up to 80%. This bias can be ameliorated by tracing a sample of dropouts and statistically adjusting the mortality estimates to properly evaluate and guide large

  12. Approaches to estimating decommissioning costs

    International Nuclear Information System (INIS)

    Smith, R.I.

    1990-07-01

    The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs

  13. Systematic Approach for Decommissioning Planning and Estimating

    International Nuclear Information System (INIS)

    Dam, A. S.

    2002-01-01

    Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises

  14. Cost-estimating relationships for space programs

    Science.gov (United States)

    Mandell, Humboldt C., Jr.

    1992-01-01

    Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of error in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
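
    The core of an analogy-based CER is a power law fitted to historical cost drivers, for example cost = a*mass^b estimated by least squares in log-log space and then applied to a new system. A minimal sketch is shown below; the historical masses and costs are invented for illustration, not drawn from any actual program.

      import numpy as np

      # Hypothetical historical data: dry mass (kg) and development cost ($M) of past systems
      mass = np.array([450.0, 900.0, 1500.0, 2200.0, 3600.0])
      cost = np.array([120.0, 210.0, 320.0, 430.0, 640.0])

      # Fit the power-law CER  cost = a * mass**b  by least squares in log-log space
      b, log_a = np.polyfit(np.log(mass), np.log(cost), 1)
      a = np.exp(log_a)

      new_mass = 2800.0
      estimate = a * new_mass ** b
      print(f"CER: cost = {a:.2f} * mass^{b:.3f}; predicted cost for {new_mass:.0f} kg: ${estimate:.0f}M")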

  15. Indirect estimators in US federal programs

    CERN Document Server

    1996-01-01

    In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators and they are based on case studies from a variety of federal programs. As a result, many researchers will find this book provides a valuable survey of how indirect estimators are used in practice and which addresses some of the pitfalls of these methods.

  16. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. The MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions, or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate convergence.
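
    A minimal random-walk Metropolis-Hastings sketch is shown below for a toy linear model: it draws from the posterior of a single slope parameter under a Gaussian likelihood and a weak Gaussian prior. The model, noise level, and proposal step are illustrative assumptions, and the printed acceptance rate is exactly the quantity one watches when calibrating the proposal distribution, the issue the abstract highlights.

      import numpy as np

      rng = np.random.default_rng(42)

      # Toy data: y = theta * x + Gaussian noise, true theta = 2.5 (invented example)
      x = np.linspace(0, 1, 50)
      y = 2.5 * x + rng.normal(0, 0.2, size=x.size)

      def log_post(theta, sigma=0.2):
          log_lik = -0.5 * np.sum((y - theta * x) ** 2) / sigma ** 2
          log_prior = -0.5 * theta ** 2 / 100.0        # weak N(0, 10^2) prior
          return log_lik + log_prior

      # Random-walk Metropolis-Hastings
      n_iter, step = 20000, 0.1
      chain = np.empty(n_iter)
      theta, lp = 0.0, log_post(0.0)
      accepted = 0
      for i in range(n_iter):
          prop = theta + step * rng.standard_normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:      # accept/reject step
              theta, lp = prop, lp_prop
              accepted += 1
          chain[i] = theta

      burn = n_iter // 4
      print(f"acceptance rate: {accepted / n_iter:.2f}")
      print(f"posterior mean: {chain[burn:].mean():.3f} +/- {chain[burn:].std():.3f}")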

  17. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach to fitting

  18. Decommissioning Cost Estimating - The 'PRICE' Approach

    International Nuclear Information System (INIS)

    Manning, R.; Gilmour, J.

    2002-01-01

    Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects from contaminated car parks to nuclear reactors; provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs

  19. A programming approach to computability

    CERN Document Server

    Kfoury, A J; Arbib, Michael A

    1982-01-01

    Computability theory is at the heart of theoretical computer science. Yet, ironically, many of its basic results were discovered by mathematical logicians prior to the development of the first stored-program computer. As a result, many texts on computability theory strike today's computer science students as far removed from their concerns. To remedy this, we base our approach to computability on the language of while-programs, a lean subset of PASCAL, and postpone consideration of such classic models as Turing machines, string-rewriting systems, and μ-recursive functions till the final chapter. Moreover, we balance the presentation of unsolvability results such as the unsolvability of the Halting Problem with a presentation of the positive results of modern programming methodology, including the use of proof rules, and the denotational semantics of programs. Computer science seeks to provide a scientific basis for the study of information processing, the solution of problems by algorithms, and the design ...

  20. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in different methods. In this study, the following three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.

  1. Multi-pitch Estimation using Semidefinite Programming

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Vandenberghe, Lieven

    2017-01-01

    Multi-pitch estimation concerns the problem of estimating the fundamental frequencies (pitches) and amplitudes/phases of multiple superimposed harmonic signals, with applications in music, speech, vibration analysis, etc. In this paper we formulate a complex-valued multi-pitch estimator via a semidefinite programming representation of an atomic decomposition over a continuous dictionary of complex exponentials and extend this to real-valued data via a real semidefinite program with the same dimensions (i.e., half the size). We further impose a continuous frequency constraint, naturally occurring from assuming a Nyquist sampled signal, by adding an additional semidefinite constraint. We show that the proposed estimator has superior performance compared to state-of-the-art methods for separating two closely spaced fundamentals and approximately achieves the asymptotic Cramér-Rao lower bound.

  2. On Algebraic Approach for MSD Parametric Estimation

    OpenAIRE

    Oueslati , Marouene; Thiery , Stéphane; Gibaru , Olivier; Béarée , Richard; Moraru , George

    2011-01-01

    This article addresses the identification problem of the natural frequency and the damping ratio of a second-order continuous system where the input is a sinusoidal signal. An algebra-based approach for identifying the parameters of a Mass Spring Damper (MSD) system is proposed and compared to the Kalman-Bucy filter. The proposed estimator uses the algebraic parametric method in the frequency domain, yielding exact formulae when placed in the time domain to identify the unknown parameters. We focus ...

  3. Bioinspired Computational Approach to Missing Value Estimation

    Directory of Open Access Journals (Sweden)

    Israel Edem Agbehadji

    2018-01-01

    Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries. If these missing data are not handled properly, they may create inaccurate results during data analysis. Although a traditional method such as the maximum likelihood method extrapolates missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. The paper describes the behavior and characteristics of the Kestrel bird, a bioinspired approach, in modeling an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to assess the performance of the algorithms. The results of the Wilcoxon test indicate that time does not have a significant effect on performance and that the difference in estimation quality between the paired algorithms was significant; the results of the Friedman test ranked KSA as the best evolutionary algorithm.

  4. Fault estimation - A standard problem approach

    DEFF Research Database (Denmark)

    Stoustrup, J.; Niemann, Hans Henrik

    2002-01-01

    This paper presents a range of optimization-based approaches to fault diagnosis. A variety of fault diagnosis problems are reformulated in the so-called standard problem set-up introduced in the literature on robust control. Once the standard problem formulations are given, the fault diagnosis problems can be solved by standard optimization techniques. The proposed methods include (1) fault diagnosis (fault estimation, FE) for systems with model uncertainties; (2) FE for systems with parametric faults; and (3) FE for a class of nonlinear systems.

  5. Implementing corporate wellness programs: a business approach to program planning.

    Science.gov (United States)

    Helmer, D C; Dunn, L M; Eaton, K; Macedonio, C; Lubritz, L

    1995-11-01

    1. Support of key decision makers is critical to the successful implementation of a corporate wellness program. Therefore, the program implementation plan must be communicated in a format and language readily understood by business people. 2. A business approach to corporate wellness program planning provides a standardized way to communicate the implementation plan. 3. A business approach incorporates the program planning components in a format that ranges from general to specific. This approach allows for flexibility and responsiveness to changes in program planning. 4. Components of the business approach are the executive summary, purpose, background, ground rules, approach, requirements, scope of work, schedule, and financials.

  6. Peak flood estimation using gene expression programming

    Science.gov (United States)

    Zorn, Conrad R.; Shamseldin, Asaad Y.

    2015-12-01

    As a case study for the Auckland Region of New Zealand, this paper investigates the potential use of gene-expression programming (GEP) in predicting specific return period events in comparison to the established and widely used Regional Flood Estimation (RFE) method. Initially calibrated to 14 gauged sites, the GEP-derived model was further validated against 10 and 100 year flood events with relative errors of 29% and 18%, respectively. This is compared to the RFE method, which gives 48% and 44% errors for the same flood events. While the effectiveness of GEP in predicting specific return period events is made apparent, it is argued that the derived equations should be used in conjunction with existing methodologies rather than as a replacement.

  7. A Unified Approach to Modeling and Programming

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann; Møller-Pedersen, Birger

    2010-01-01

    SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we ...

  8. Site characterization: a spatial estimation approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1980-10-01

    In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly
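
    An ordinary-kriging estimate at an unsampled location can be written compactly: build the covariance matrix between data points under a chosen model, augment it with the unbiasedness constraint, and solve for the weights and Lagrange multiplier. The NumPy sketch below does this with an exponential covariance model; the data locations, values, and variogram parameters are invented, and no nugget or anisotropy is modeled.

      import numpy as np

      rng = np.random.default_rng(3)

      # Invented borehole-style data: known (x, y) locations and measured values z
      pts = rng.uniform(0, 100, size=(25, 2))
      z = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.standard_normal(25)

      def cov(h, sill=1.0, corr_range=30.0):
          # Exponential covariance model C(h) = sill * exp(-h / range)
          return sill * np.exp(-h / corr_range)

      def ordinary_krige(pts, z, target):
          n = len(z)
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          K = np.empty((n + 1, n + 1))
          K[:n, :n] = cov(d)
          K[n, :], K[:, n] = 1.0, 1.0
          K[n, n] = 0.0                      # Lagrange row/column enforcing unbiasedness
          rhs = np.empty(n + 1)
          rhs[:n] = cov(np.linalg.norm(pts - target, axis=1))
          rhs[n] = 1.0
          sol = np.linalg.solve(K, rhs)
          w, mu = sol[:n], sol[n]
          estimate = w @ z
          variance = cov(0.0) - w @ rhs[:n] - mu   # ordinary kriging variance
          return estimate, variance

      est, var = ordinary_krige(pts, z, np.array([50.0, 50.0]))
      print(f"kriged estimate at (50,50): {est:.3f}, kriging variance: {var:.3f}")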

  9. Estimated emission reductions from California's enhanced Smog Check program.

    Science.gov (United States)

    Singer, Brett C; Wenzel, Thomas P

    2003-06-01

    The U.S. Environmental Protection Agency requires that states evaluate the effectiveness of their vehicle emissions inspection and maintenance (I/M) programs. This study demonstrates an evaluation approach that estimates mass emission reductions over time and includes the effect of I/M on vehicle deterioration. It includes a quantitative assessment of benefits from pre-inspection maintenance and repairs and accounts for the selection bias effect that occurs when intermittent high emitters are tested. We report estimates of one-cycle emission benefits of California's Enhanced Smog Check program, ca. 1999. Program benefits equivalent to metric tons per day of prevented emissions were calculated with a "bottom-up" approach that combined average per vehicle reductions in mass emission rates (g/gal) with average per vehicle activity, resolved by model year. Accelerated simulation mode test data from the statewide vehicle information database (VID) and from roadside Smog Check testing were used to determine 2-yr emission profiles of vehicles passing through Smog Check and infer emission profiles that would occur without Smog Check. The number of vehicles participating in Smog Check was also determined from the VID. We estimate that in 1999 Smog Check reduced tailpipe emissions of HC, CO, and NOx by 97, 1690, and 81 t/d, respectively. These correspond to 26, 34, and 14% of the HC, CO, and NOx that would have been emitted by vehicles in the absence of Smog Check. These estimates are highly sensitive to assumptions about vehicle deterioration in the absence of Smog Check. Considering the estimated uncertainty in these assumptions yields a range for calculated benefits: 46-128 t/d of HC, 860-2200 t/d of CO, and 60-91 t/d of NOx. Repair of vehicles that failed an initial, official Smog Check appears to be the most important mechanism of emission reductions, but pre-inspection maintenance and repair also contributed substantially. Benefits from removal of nonpassing
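
    The "bottom-up" aggregation described above is, at its core, per-vehicle emission-rate reductions (g/gal) multiplied by per-vehicle fuel use and fleet counts, summed over model-year bins and converted to metric tons per day. The sketch below works through that arithmetic with invented figures; none of the numbers are from the study.

      # Bottom-up benefit calculation: reduction (g/gal) * fuel use (gal/yr) * vehicles,
      # summed over model-year bins and converted to metric tons per day.
      # All figures below are invented for illustration, not the study's data.
      bins = [
          # (model-year bin, HC reduction g/gal, gallons/vehicle/yr, vehicles in program)
          ("pre-1985", 12.0, 450.0, 400_000),
          ("1985-1995", 6.0, 520.0, 2_500_000),
          ("1996+", 1.5, 560.0, 5_000_000),
      ]

      grams_per_tonne = 1.0e6
      days_per_year = 365.0

      total_t_per_day = sum(
          red * gal * n / grams_per_tonne / days_per_year for _, red, gal, n in bins
      )
      print(f"estimated HC benefit: {total_t_per_day:.1f} metric tons/day")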

  10. Multigene Genetic Programming for Estimation of Elastic Modulus of Concrete

    Directory of Open Access Journals (Sweden)

    Alireza Mohammadi Bayazidi

    2014-01-01

    This paper presents a new multigene genetic programming (MGGP) approach for estimation of the elastic modulus of concrete. The MGGP technique models the elastic modulus behavior by integrating the capabilities of standard genetic programming and classical regression. The main aim is to derive precise relationships between the tangent elastic moduli of normal and high strength concrete and the corresponding compressive strength values. Another important contribution of this study is to develop a generalized prediction model for the elastic moduli of both normal and high strength concrete. Numerous concrete compressive strength test results are obtained from the literature to develop the models. A comprehensive comparative study is conducted to verify the performance of the models. The proposed models perform better than the existing traditional models, as well as those derived using other powerful soft computing tools.

  11. A new approach to estimate Angstrom coefficients

    International Nuclear Information System (INIS)

    Abdel Wahab, M.

    1991-09-01

    A simple quadratic equation to estimate global solar radiation, with coefficients depending on some physical atmospheric parameters, is presented. The importance of the second-order term and its sensitivity to some climatic variations are discussed. (author). 8 refs, 4 figs, 2 tabs
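
    A quadratic Angstrom-type relation has the form H/H0 = a + b(S/S0) + c(S/S0)^2, and its coefficients can be estimated by ordinary least squares. The sketch below fits them with numpy.polyfit to invented monthly sunshine-fraction and clearness-index data; it illustrates the functional form only, not the physically based coefficients discussed in the paper.

      import numpy as np

      # Hypothetical monthly data: relative sunshine duration S/S0 and clearness index H/H0
      s = np.array([0.35, 0.42, 0.50, 0.58, 0.66, 0.72, 0.75, 0.70, 0.61, 0.52, 0.43, 0.37])
      h = np.array([0.40, 0.45, 0.51, 0.56, 0.62, 0.66, 0.68, 0.65, 0.58, 0.52, 0.46, 0.41])

      # Fit H/H0 = a + b*(S/S0) + c*(S/S0)^2  (np.polyfit returns highest power first)
      c, b, a = np.polyfit(s, h, 2)
      print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
      print("predicted H/H0 at S/S0=0.6:", a + b * 0.6 + c * 0.6 ** 2)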

  12. Estimating Supplies Program (ESP), Version 1.00, User's Guide

    National Research Council Canada - National Science Library

    Tropeano, Anne

    2000-01-01

    The Estimating Supplies Program (ESP) is an easy to use Windows(TM)-based software program for military medical providers, planners, and trainers that calculates the amount of supplies and equipment required to treat a patient stream...

  13. Appendix C: Biomass Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  14. A linear programming approach for placement of applicants to academic programs

    OpenAIRE

    Kassa, Biniyam Asmare

    2013-01-01

    This paper reports a linear programming approach for the placement of applicants to study programs, developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man hours as well as the time it takes to announce placement decisions. Compared to the previous manual system where only one or two placement criteria were considered, the ...
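
    The sketch below shows the general linear programming formulation such a placement system might use (not necessarily the authors' exact model): assignment variables x[i,j] with preference-rank costs, one placement per applicant, and program capacity limits, solved with scipy.optimize.linprog. Because the assignment constraint matrix is totally unimodular, the LP relaxation returns integral placements. The applicants, preferences, and capacities are invented.

      import numpy as np
      from scipy.optimize import linprog

      # Invented example: 6 applicants, 3 programs, cost = preference rank (1 = first choice)
      cost = np.array([
          [1, 2, 3],
          [2, 1, 3],
          [1, 3, 2],
          [3, 1, 2],
          [2, 3, 1],
          [1, 2, 3],
      ])
      capacity = np.array([2, 2, 2])
      n_app, n_prog = cost.shape

      c = cost.ravel()                      # variable x[i, j] flattened row-major

      # Each applicant is placed in exactly one program
      A_eq = np.zeros((n_app, n_app * n_prog))
      for i in range(n_app):
          A_eq[i, i * n_prog:(i + 1) * n_prog] = 1.0
      b_eq = np.ones(n_app)

      # Program capacities
      A_ub = np.zeros((n_prog, n_app * n_prog))
      for j in range(n_prog):
          A_ub[j, j::n_prog] = 1.0
      b_ub = capacity.astype(float)

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=(0, 1), method="highs")
      placement = res.x.reshape(n_app, n_prog).argmax(axis=1)
      print("placements (applicant -> program):", placement)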

  15. UA criterion for NPP safety estimation SALP program

    International Nuclear Information System (INIS)

    Gorynina, L.V.; Tishchenko, V.A.

    1992-01-01

    The SALP program adopted by the NRC is considered. The program is intended for the acquisition and estimation of data on the activities of firms having licences for NPP operation and/or construction. The criteria for estimation and the mechanism for determining the rating of the firm's activity quality are discussed

  16. BWR level estimation using Kalman Filtering approach

    International Nuclear Information System (INIS)

    Garner, G.; Divakaruni, S.M.; Meyer, J.E.

    1986-01-01

    Work is in progress on development of a system for Boiling Water Reactor (BWR) vessel level validation and failure detection. The levels validated include the liquid level both inside and outside the core shroud. This work is a major part of a larger effort to develop a complete system for BWR signal validation. The demonstration plant is the Oyster Creek BWR. Liquid level inside the core shroud is not directly measured during full power operation. This level must be validated using measurements of other quantities and analytic models. Given the available sensors, analytic models for level that are based on mass and energy balances can contain open integrators. When such a model is driven by noisy measurements, the model predicted level will deviate from the true level over time. To validate the level properly and to avoid false alarms, the open integrator must be stabilized. In addition, plant parameters will change slowly with time. The respective model must either account for these plant changes or be insensitive to them to avoid false alarms and maintain sensitivity to true failures of level instrumentation. Problems are addressed here by combining the extended Kalman Filter and Parity Space Decision/Estimator. The open integrator is stabilized by integrating from the validated estimate at the beginning of each sampling interval, rather than from the model predicted value. The model is adapted to slow plant/sensor changes by updating model parameters on-line
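
    The open-integrator issue described above arises from a level model of the form level(k+1) = level(k) + dt*(inflow - outflow)/A. The sketch below is a generic scalar Kalman filter on such a model, re-anchoring each prediction on the previous validated estimate before the measurement update; it is a textbook illustration, not the plant-specific extended Kalman filter and parity-space scheme described, and the vessel area, flows, and noise variances are invented.

      import numpy as np

      rng = np.random.default_rng(7)
      dt, area = 1.0, 50.0                      # time step (s), effective vessel area (m^2); invented
      q_var, r_var = 1e-4, 4e-2                 # process and measurement noise variances; invented

      n = 300
      net_flow = 0.5 * np.sin(np.linspace(0, 6, n))   # invented net inflow-outflow (m^3/s)

      # Simulate the "true" level and a noisy level measurement
      true = np.empty(n)
      true[0] = 10.0
      for k in range(1, n):
          true[k] = true[k - 1] + dt * net_flow[k - 1] / area + rng.normal(0, np.sqrt(q_var))
      z = true + rng.normal(0, np.sqrt(r_var), size=n)

      # Scalar Kalman filter: predict from the previous validated estimate, then update
      x_hat, P = np.empty(n), 1.0
      x_hat[0] = z[0]
      for k in range(1, n):
          x_pred = x_hat[k - 1] + dt * net_flow[k - 1] / area   # model-based prediction
          P_pred = P + q_var
          K = P_pred / (P_pred + r_var)                         # Kalman gain
          x_hat[k] = x_pred + K * (z[k] - x_pred)               # measurement update
          P = (1.0 - K) * P_pred

      print(f"RMS error, raw measurement: {np.sqrt(np.mean((z - true) ** 2)):.3f} m")
      print(f"RMS error, filtered level:  {np.sqrt(np.mean((x_hat - true) ** 2)):.3f} m")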

  17. Cost estimate for a proposed GDF Suez LNG testing program

    Energy Technology Data Exchange (ETDEWEB)

    Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.; Luketa, Anay Josephine; Nissen, Mark R.; Lopez, Carlos; Vermillion, Nancy; Hightower, Marion Michael

    2014-02-01

    At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale (Liquefied Natural Gas) LNG spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology insures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.

  18. Comparing approaches to generic programming in Haskell

    NARCIS (Netherlands)

    Hinze, R.; Jeuring, J.T.; Löh, A.

    2006-01-01

    The last decade has seen a number of approaches to generic programming: PolyP, Functorial ML, `Scrap Your Boilerplate', Generic Haskell, `Generics for the Masses', etc. The approaches vary in sophistication and target audience: some propose full-blown programming languages, some suggest

  19. Comparing approaches to generic programming in Haskell

    NARCIS (Netherlands)

    Hinze, R.; Jeuring, J.T.; Löh, A.

    2006-01-01

    The last decade has seen a number of approaches to datatype-generic programming: PolyP, Functorial ML, `Scrap Your Boilerplate', Generic Haskell, `Generics for the Masses', etc. The approaches vary in sophistication and target audience: some propose full-blown programming languages, some

  20. A Novel Approach for Solving Semidefinite Programs

    Directory of Open Access Journals (Sweden)

    Hong-Wei Jiao

    2014-01-01

    A novel linearizing alternating direction augmented Lagrangian approach is proposed for effectively solving semidefinite programs (SDP). At every iteration, by fixing the other variables, the proposed approach alternately optimizes the dual variables and the dual slack variables; then the primal variables, that is, the Lagrange multipliers, are updated. In addition, the proposed approach updates all the variables in closed form without solving any system of linear equations. Global convergence of the proposed approach is proved under mild conditions, and two numerical problems are given to demonstrate the effectiveness of the presented approach.

  1. Approaches to estimating humification indicators for peat

    Directory of Open Access Journals (Sweden)

    M. Klavins

    2008-10-01

    Degree of decomposition is an important property of the organic matter in soils and other deposits which contain fossil carbon. It describes the intensity of transformation, or the humification degree (HD), of the original living organic matter. In this article, approaches to the determination of HD are thoroughly described, and 14C-dated peat columns extracted from several bogs in Latvia are investigated and compared. A new humification indicator is suggested, namely the quantity of humic substances as a fraction of the total amount of organic matter in the peat.

  2. A reliability program approach to operational safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques is being formulated for potential application in the nuclear power industry. Methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed and a review of current nuclear risk-dominant issues conducted. The need for a reliability approach to address dependent system failures, operating and emergency procedures and human performance, and develop a plant-specific performance data base for safety decision making is demonstrated. Current research has concentrated on developing a Reliability Program approach for the operating phase of a nuclear plant's lifecycle. The approach incorporates performance monitoring and evaluation activities with dedicated tasks that integrate these activities with operation, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the Reliability Program. (orig./HP)

  3. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
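
    The mixture idea can be sketched in a few lines: fit a parametric dose-response curve and a nonparametric smoother, then combine them with a weight driven by relative goodness of fit. Below, an Emax model fitted with scipy.optimize.curve_fit is blended with a Nadaraya-Watson kernel smoother using a simple residual-sum-of-squares weighting; the weighting rule and the dose-response data are illustrative stand-ins for the authors' adaptive scheme.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(11)

      # Invented dose-response data
      dose = np.linspace(0, 10, 40)
      resp = 2.0 * dose / (1.5 + dose) + rng.normal(0, 0.15, size=dose.size)

      # Parametric fit: Emax model  E(d) = Emax * d / (ED50 + d)
      def emax(d, e_max, ed50):
          return e_max * d / (ed50 + d)
      popt, _ = curve_fit(emax, dose, resp, p0=[1.0, 1.0])
      f_par = emax(dose, *popt)

      # Nonparametric fit: Nadaraya-Watson kernel smoother
      def nw_smooth(x_eval, x, y, bandwidth=1.0):
          w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
          return (w @ y) / w.sum(axis=1)
      f_non = nw_smooth(dose, dose, resp, bandwidth=1.0)

      # Mixture: weight the parametric curve by its relative fit (illustrative rule only)
      rss_par = np.sum((resp - f_par) ** 2)
      rss_non = np.sum((resp - f_non) ** 2)
      w = rss_non / (rss_par + rss_non)          # better parametric fit -> larger weight
      f_semi = w * f_par + (1 - w) * f_non
      print(f"weight on parametric curve: {w:.2f}")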

  4. EDIN0613P weight estimating program. [for launch vehicles

    Science.gov (United States)

    Hirsch, G. N.

    1976-01-01

    The weight estimating relationships and program developed for space power system simulation are described. The program was developed to size a two-stage launch vehicle for the space power system. The program is actually part of an overall simulation technique called EDIN (Engineering Design and Integration) system. The program sizes the overall vehicle, generates major component weights and derives a large amount of overall vehicle geometry. The program is written in FORTRAN V and is designed for use on the Univac Exec 8 (1110). By utilizing the flexibility of this program while remaining cognizant of the limits imposed upon output depth and accuracy by utilization of generalized input, this program concept can be a useful tool for estimating purposes at the conceptual design stage of a launch vehicle.

  5. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    Science.gov (United States)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, it has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimate the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
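
    Gradient matching avoids the forward solve by smoothing the observations, differentiating the smoother, and choosing parameters that make the model right-hand side agree with the estimated derivative. The sketch below applies that recipe to a logistic-growth ODE as a simple stand-in for the Richards equation; the data are synthetic, the spline smoothing level is an illustrative choice, and the Gaussian-process machinery of AGM is omitted.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import least_squares

      rng = np.random.default_rng(5)

      # Invented noisy observations of logistic growth  dy/dt = r*y*(1 - y/K)
      r_true, K_true = 0.8, 10.0
      t = np.linspace(0, 10, 40)
      y_true = K_true / (1 + (K_true - 1.0) * np.exp(-r_true * t))   # y(0) = 1
      y_obs = y_true + rng.normal(0, 0.2, size=t.size)

      # Step 1: smooth the data and differentiate the smoother (no ODE solve needed)
      spl = UnivariateSpline(t, y_obs, k=4, s=len(t) * 0.04)
      y_s, dy_s = spl(t), spl.derivative()(t)

      # Step 2: match the model right-hand side to the estimated derivative
      def residual(theta):
          r, K = theta
          return dy_s - r * y_s * (1 - y_s / K)

      fit = least_squares(residual, x0=[0.5, 5.0], bounds=([0, 1], [5, 50]))
      print(f"estimated r = {fit.x[0]:.2f} (true {r_true}), K = {fit.x[1]:.2f} (true {K_true})")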

  6. KERNELHR: A program for estimating animal home ranges

    Science.gov (United States)

    Seaman, D.E.; Griffith, B.; Powell, R.A.

    1998-01-01

    Kernel methods are state of the art for estimating animal home-range area and utilization distribution (UD). The KERNELHR program was developed to provide researchers and managers a tool to implement this extremely flexible set of methods with many variants. KERNELHR runs interactively or from the command line on any personal computer (PC) running DOS. KERNELHR provides output of fixed and adaptive kernel home-range estimates, as well as density values in a format suitable for in-depth statistical and spatial analyses. An additional package of programs creates contour files for plotting in geographic information systems (GIS) and estimates core areas of ranges.
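
    The fixed-kernel utilization distribution behind such home-range estimates can be illustrated with a 2D Gaussian kernel density estimate: evaluate the UD on a grid, find the density threshold enclosing 95% of the probability mass, and report the enclosed area. The sketch below uses scipy.stats.gaussian_kde with its default bandwidth rather than KERNELHR's bandwidth choices, and the relocation data are invented.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(9)

      # Invented animal relocations (e.g., UTM metres): two clusters of activity
      locs = np.vstack([
          rng.normal([0, 0], 150, size=(120, 2)),
          rng.normal([600, 400], 100, size=(80, 2)),
      ]).T                                   # shape (2, n) as gaussian_kde expects

      kde = gaussian_kde(locs)               # fixed-kernel UD estimate

      # Evaluate the UD on a grid and find the 95% isopleth
      pad = 500.0
      xs = np.linspace(locs[0].min() - pad, locs[0].max() + pad, 200)
      ys = np.linspace(locs[1].min() - pad, locs[1].max() + pad, 200)
      XX, YY = np.meshgrid(xs, ys)
      dens = kde(np.vstack([XX.ravel(), YY.ravel()]))

      cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
      order = np.argsort(dens)[::-1]
      cum = np.cumsum(dens[order]) * cell_area            # cumulative probability mass
      n_cells_95 = int(np.searchsorted(cum, 0.95)) + 1
      area_95_ha = n_cells_95 * cell_area / 1.0e4         # m^2 -> hectares
      print(f"95% fixed-kernel home-range area: {area_95_ha:.1f} ha")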

  7. Developmental Programming of Renal Function and Re-Programming Approaches.

    Science.gov (United States)

    Nüsken, Eva; Dötsch, Jörg; Weber, Lutz T; Nüsken, Kai-Dietrich

    2018-01-01

    Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles has been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies by disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary dysbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to "re-program" renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin-angiotensin-aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions focused on dietary or anti-oxidative approaches so far. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated to human application. Early

  8. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…

  9. Developmental Programming of Renal Function and Re-Programming Approaches

    Directory of Open Access Journals (Sweden)

    Eva Nüsken

    2018-02-01

    Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles has been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies by disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary dysbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to “re-program” renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin–angiotensin–aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions focused on dietary or anti-oxidative approaches so far. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated

  10. Developmental Programming of Renal Function and Re-Programming Approaches

    Science.gov (United States)

    Nüsken, Eva; Dötsch, Jörg; Weber, Lutz T.; Nüsken, Kai-Dietrich

    2018-01-01

    Chronic kidney disease affects more than 10% of the population. Programming studies have examined the interrelationship between environmental factors in early life and differences in morbidity and mortality between individuals. A number of important principles have been identified, namely permanent structural modifications of organs and cells, long-lasting adjustments of endocrine regulatory circuits, as well as altered gene transcription. Risk factors include intrauterine deficiencies due to disturbed placental function or maternal malnutrition, prematurity, intrauterine and postnatal stress, intrauterine and postnatal overnutrition, as well as dietary imbalances in postnatal life. This mini-review discusses critical developmental periods and long-term sequelae of renal programming in humans and presents studies examining the underlying mechanisms as well as interventional approaches to “re-program” renal susceptibility toward disease. Clinical manifestations of programmed kidney disease include arterial hypertension, proteinuria, aggravation of inflammatory glomerular disease, and loss of kidney function. Nephron number, regulation of the renin–angiotensin–aldosterone system, renal sodium transport, vasomotor and endothelial function, myogenic response, and tubuloglomerular feedback have been identified as being vulnerable to environmental factors. Oxidative stress levels, metabolic pathways, including insulin, leptin, steroids, and arachidonic acid, DNA methylation, and histone configuration may be significantly altered by adverse environmental conditions. Studies on re-programming interventions have so far focused on dietary or anti-oxidative approaches. Further studies that broaden our understanding of renal programming mechanisms are needed to ultimately develop preventive strategies. Targeted re-programming interventions in animal models focusing on known mechanisms will contribute to new concepts which finally will have to be translated to human application

  11. Development of computer program for estimating decommissioning cost - 59037

    International Nuclear Information System (INIS)

    Kim, Hak-Soo; Park, Jong-Kil

    2012-01-01

    The programs for estimating the decommissioning cost have been developed for many different purposes and applications. The estimation of decommissioning cost requires a large amount of data, such as unit cost factors, plant areas and their inventories, waste treatment, etc. This makes it difficult to use manual calculation or typical spreadsheet software such as Microsoft Excel. The cost estimation for eventual decommissioning of nuclear power plants is a prerequisite for safe, timely and cost-effective decommissioning. To estimate the decommissioning cost more accurately and systematically, KHNP, Korea Hydro and Nuclear Power Co. Ltd, developed a decommissioning cost estimating computer program called 'DeCAT-Pro', short for Decommissioning Cost Assessment Tool - Professional (hereinafter called 'DeCAT'). This program allows users to easily assess the decommissioning cost under various decommissioning options. The program also provides detailed reporting of decommissioning funding requirements, as well as detailed project schedules, cash flow, staffing plans and levels, and waste volumes by waste classification and type. KHNP is planning to implement functions for estimating the plant inventory using 3-D technology and for classifying the conditions of radwaste disposal and transportation automatically. (authors)

  12. NEWBOX: A computer program for parameter estimation in diffusion problems

    International Nuclear Information System (INIS)

    Nestor, C.W. Jr.; Godbee, H.W.; Joy, D.S.

    1989-01-01

    In the analysis of experiments to determine amounts of material transferred from one medium to another (e.g., the escape of chemically hazardous and radioactive materials from solids), there are at least 3 important considerations. These are (1) is the transport amenable to treatment by established mass transport theory; (2) do methods exist to find estimates of the parameters which will give a best fit, in some sense, to the experimental data; and (3) what computational procedures are available for evaluating the theoretical expressions. The authors have made the assumption that established mass transport theory is an adequate model for the situations under study. Since the solutions of the diffusion equation are usually nonlinear in some parameters (diffusion coefficient, reaction rate constants, etc.), use of a method of parameter adjustment involving first partial derivatives can be complicated and prone to errors in the computation of the derivatives. In addition, the parameters must satisfy certain constraints; for example, the diffusion coefficient must remain positive. For these reasons, a variant of the constrained simplex method of M. J. Box has been used to estimate parameters. It is similar, but not identical, to the downhill simplex method of Nelder and Mead. In general, they calculate the fraction of material transferred as a function of time from expressions obtained by the inversion of the Laplace transform of the fraction transferred, rather than by taking derivatives of a calculated concentration profile. With the above approaches to the 3 considerations listed at the outset, they developed a computer program NEWBOX, usable on a personal computer, to calculate the fractional release of material from 4 different geometrical shapes (semi-infinite medium, finite slab, finite circular cylinder, and sphere), accounting for several different boundary conditions
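
    NEWBOX itself uses Box's constrained simplex and Laplace-transform inversion, which are not reproduced here. As a rough illustration of the same idea — a derivative-free simplex search for a diffusion coefficient constrained to stay positive — the sketch below fits synthetic fractional-release data with an early-time plane-sheet approximation; the model, data and parameter values are assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import minimize

# Early-time approximation for fractional release from a plane sheet of
# half-thickness L; used here only as an illustrative forward model.
def frac_release(t, D, L=0.01):
    return 2.0 * np.sqrt(D * t / (np.pi * L**2))

# Synthetic "measurements" of fraction released versus time
t_obs = np.linspace(100.0, 3600.0, 12)
D_true = 1.0e-9
f_obs = frac_release(t_obs, D_true) + np.random.default_rng(1).normal(0.0, 5e-3, t_obs.size)

# Search over log10(D) so the positivity constraint on D holds automatically,
# in the spirit of a constrained simplex search.
def sse(logD):
    return np.sum((f_obs - frac_release(t_obs, 10.0 ** logD[0])) ** 2)

res = minimize(sse, x0=[-8.0], method="Nelder-Mead")
print("estimated D:", 10.0 ** res.x[0])
```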

  13. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, levels missing from the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  14. The QUELCE Method: Using Change Drivers to Estimate Program Costs

    Science.gov (United States)

    2016-08-01

    Only fragments of this report's front matter survived text extraction. The recoverable section headings outline the QUELCE workflow ("Assign Conditional Probabilities", "Apply Uncertainty to Cost Formula Inputs for Scenarios", "Perform Monte Carlo Simulation"), and the introduction ("The Cost Estimation Challenge") notes that large-scale programs are frequently challenged [Bliss 2012] and that improvements in cost estimation which make early assumptions more precise can reduce early lifecycle uncertainty.
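
    The recovered headings suggest that the final step propagates uncertain inputs through a cost formula by Monte Carlo simulation. A minimal sketch of that step is shown below; the cost drivers, their triangular ranges and the cost formula are hypothetical stand-ins, not QUELCE's actual change-driver model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical cost-formula inputs with triangular (low, mode, high) uncertainty
size_kloc    = rng.triangular(80, 120, 200, n)            # software size, KLOC
productivity = rng.triangular(1.5, 2.0, 2.8, n)           # KLOC per staff-month
rate         = rng.triangular(18_000, 20_000, 25_000, n)  # cost per staff-month

cost = size_kloc / productivity * rate
print(f"median cost : {np.median(cost):,.0f}")
print(f"80% interval: {np.percentile(cost, 10):,.0f} - {np.percentile(cost, 90):,.0f}")
```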

  15. Population Estimation with Mark and Recapture Method Program

    International Nuclear Information System (INIS)

    Limohpasmanee, W.; Kaewchoung, W.

    1998-01-01

    Population estimation provides important information required for insect control planning, especially for control with SIT. Moreover, it can be used to evaluate the efficiency of a control method. Because of the complexity of the calculations, population estimation with mark and recapture methods has not been widely used. This program was therefore developed in QBasic to make the calculations more accurate and easier. The program implements 6 methods, following Seber's, Jolly-Seber's, Jackson's, Ito's, Hamada's and Yamamura's methods. The results were compared with the original methods and found to be accurate and easier to apply
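
    The program covers six estimators; for orientation, the sketch below implements only the simplest two-sample case, Chapman's bias-corrected Lincoln-Petersen estimate with Seber's variance formula. The capture numbers are invented for the example.

```python
def chapman_estimate(marked: int, caught: int, recaptured: int):
    """Chapman's bias-corrected Lincoln-Petersen estimate with Seber's variance."""
    n_hat = (marked + 1) * (caught + 1) / (recaptured + 1) - 1
    var = ((marked + 1) * (caught + 1) * (marked - recaptured) * (caught - recaptured)
           / ((recaptured + 1) ** 2 * (recaptured + 2)))
    return n_hat, var ** 0.5

# 200 insects marked and released, 250 caught later, 40 of them marked
n_hat, se = chapman_estimate(marked=200, caught=250, recaptured=40)
print(f"population estimate: {n_hat:.0f} +/- {1.96 * se:.0f}")
```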

  16. A fuzzy regression with support vector machine approach to the estimation of horizontal global solar radiation

    International Nuclear Information System (INIS)

    Baser, Furkan; Demirhan, Haydar

    2017-01-01

    Accurate estimation of the amount of horizontal global solar radiation for a particular field is an important input for decision processes in solar radiation investments. In this article, we focus on the estimation of yearly mean daily horizontal global solar radiation by using an approach that utilizes fuzzy regression functions with support vector machines (FRF-SVM). This approach is not seriously affected by outlier observations and does not suffer from the over-fitting problem. To demonstrate the utility of the FRF-SVM approach in the estimation of horizontal global solar radiation, we conduct an empirical study over a dataset collected in Turkey and apply the FRF-SVM approach with several kernel functions. Then, we compare the estimation accuracy of the FRF-SVM approach to an adaptive neuro-fuzzy system and a coplot supported-genetic programming approach. We observe that the FRF-SVM approach with a Gaussian kernel function is affected by neither outliers nor the over-fitting problem and gives the most accurate estimates of horizontal global solar radiation among the applied approaches. Consequently, the use of hybrid fuzzy functions and support vector machine approaches is found beneficial in long-term forecasting of horizontal global solar radiation over a region with complex climatic and terrestrial characteristics. - Highlights: • A fuzzy regression functions with support vector machines approach is proposed. • The approach is robust against outlier observations and the over-fitting problem. • Estimation accuracy of the model is superior to several existing alternatives. • A new solar radiation estimation model is proposed for the region of Turkey. • The model is useful under complex terrestrial and climatic conditions.

  17. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  18. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  19. Estimating the Impact of Low-Income Universal Service Programs

    OpenAIRE

    Daniel A. Ackerberg; David R. DeRemer; Michael H. Riordan; Gregory L. Rosston; Bradley S. Wimmer

    2013-01-01

    This policy study uses U.S. Census microdata to evaluate how subsidies for universal telephone service vary in their impact across low-income racial groups, gender, age, and home ownership. Our demand specification includes both the subsidized monthly price (Lifeline program) and the subsidized initial connection price (Linkup program) for local telephone service. Our quasi-maximum likelihood estimation controls for location differences and instruments for price endogeneity. The microdata all...

  20. Estimating variability in functional images using a synthetic resampling approach

    International Nuclear Information System (INIS)

    Maitra, R.; O'Sullivan, F.

    1996-01-01

    Functional imaging of biologic parameters like in vivo tissue metabolism is made possible by Positron Emission Tomography (PET). Many techniques, such as mixture analysis, have been suggested for extracting such images from dynamic sequences of reconstructed PET scans. Methods for assessing the variability in these functional images are of scientific interest. The nonlinearity of the methods used in the mixture analysis approach makes analytic formulae for estimating variability intractable. The usual resampling approach is infeasible because of the prohibitive computational effort in simulating a number of sinogram datasets, applying image reconstruction, and generating parametric images for each replication. Here we introduce an approach that approximates the distribution of the reconstructed PET images by a Gaussian random field and generates synthetic realizations in the imaging domain. This eliminates the reconstruction steps in generating each simulated functional image and is therefore practical. Results of experiments done to evaluate the approach on a model one-dimensional problem are very encouraging. Post-processing of the estimated variances is seen to improve the accuracy of the estimation method. Mixture analysis is used to estimate functional images; however, the suggested approach is general enough to extend to other parametric imaging methods

  1. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  2. A variational approach to parameter estimation in ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Kaschek Daniel

    2012-08-01

    Full Text Available Abstract Background Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. Results The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. Conclusions The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.

  3. A variational approach to parameter estimation in ordinary differential equations.

    Science.gov (United States)

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
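
    The variational augmentation of unobserved input courses is the paper's contribution and is not reproduced here; the snippet below only illustrates the conventional final step the authors mention — least-squares estimation of a rate constant for a small reaction network — using a toy A → B model with synthetic data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy reaction A -> B with unknown rate constant k
def rhs(t, y, k):
    a, b = y
    return [-k * a, k * a]

t_obs = np.linspace(0.0, 10.0, 21)
k_true = 0.4
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=(k_true,))
y_obs = sol.y[0] + np.random.default_rng(3).normal(0.0, 0.02, t_obs.size)  # noisy [A](t)

def residuals(theta):
    sim = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=(theta[0],))
    return sim.y[0] - y_obs

fit = least_squares(residuals, x0=[1.0], bounds=(0.0, np.inf))
print("estimated k:", fit.x[0])
```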

  4. Fuzzy Multi-objective Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    Amna Rehmat

    2007-07-01

    Full Text Available Traveling salesman problem (TSP is one of the challenging real-life problems, attracting researchers of many fields including Artificial Intelligence, Operations Research, and Algorithm Design and Analysis. The problem has been well studied till now under different headings and has been solved with different approaches including genetic algorithms and linear programming. Conventional linear programming is designed to deal with crisp parameters, but information about real life systems is often available in the form of vague descriptions. Fuzzy methods are designed to handle vague terms, and are most suited to finding optimal solutions to problems with vague parameters. Fuzzy multi-objective linear programming, an amalgamation of fuzzy logic and multi-objective linear programming, deals with flexible aspiration levels or goals and fuzzy constraints with acceptable deviations. In this paper, a methodology, for solving a TSP with imprecise parameters, is deployed using fuzzy multi-objective linear programming. An example of TSP with multiple objectives and vague parameters is discussed.

  5. HEDPIN: a computer program to estimate pinwise power density

    International Nuclear Information System (INIS)

    Cappiello, M.W.

    1976-05-01

    A description is given of the digital computer program, HEDPIN. This program, modeled after a previously developed program, POWPIN, provides a means of estimating the pinwise power density distribution in fast reactor triangular pitched pin bundles. The capability also exists for computing any reaction rate of interest at the respective pin positions within an assembly. HEDPIN was developed in support of FTR fuel and test management as well as fast reactor core design and core characterization planning and analysis. The results of a test devised to check out HEDPIN's computational method are given, and the realm of application is discussed. Nearly all programming is in FORTRAN IV. Variable dimensioning is employed to make efficient use of core memory and maintain short running time for small problems. Input instructions, sample problem, and a program listing are also given

  6. SECPOP90: Sector population, land fraction, and economic estimation program

    Energy Technology Data Exchange (ETDEWEB)

    Humphreys, S.L.; Rollstin, J.A.; Ridgely, J.N.

    1997-09-01

    In 1973 Mr. W. Athey of the Environmental Protection Agency wrote a computer program called SECPOP which calculated population estimates. Since that time, two things have changed which suggested the need for updating the original program - more recent population censuses and the widespread use of personal computers (PCs). The revised computer program uses the 1990 and 1992 Population Census information and runs on current PCs as "SECPOP90." SECPOP90 consists of two parts: site and regional. The site provides population and economic data estimates for any location within the continental United States. Siting analysis is relatively fast running. The regional portion assesses site availability for different siting policy decisions; i.e., the impact of available sites given specific population density criteria within the continental United States. Regional analysis is slow. This report compares the SECPOP90 population estimates and the nuclear power reactor licensee-provided information. Although the source, and therefore the accuracy, of the licensee information is unknown, this comparison suggests SECPOP90 makes reasonable estimates. Given the total uncertainty in any current calculation of severe accidents, including the potential offsite consequences, the uncertainty within SECPOP90 population estimates is expected to be insignificant. 12 refs., 55 figs., 7 tabs.

  7. SECPOP90: Sector population, land fraction, and economic estimation program

    International Nuclear Information System (INIS)

    Humphreys, S.L.; Rollstin, J.A.; Ridgely, J.N.

    1997-09-01

    In 1973 Mr. W. Athey of the Environmental Protection Agency wrote a computer program called SECPOP which calculated population estimates. Since that time, two things have changed which suggested the need for updating the original program - more recent population censuses and the widespread use of personal computers (PCs). The revised computer program uses the 1990 and 1992 Population Census information and runs on current PCs as "SECPOP90." SECPOP90 consists of two parts: site and regional. The site provides population and economic data estimates for any location within the continental United States. Siting analysis is relatively fast running. The regional portion assesses site availability for different siting policy decisions; i.e., the impact of available sites given specific population density criteria within the continental United States. Regional analysis is slow. This report compares the SECPOP90 population estimates and the nuclear power reactor licensee-provided information. Although the source, and therefore the accuracy, of the licensee information is unknown, this comparison suggests SECPOP90 makes reasonable estimates. Given the total uncertainty in any current calculation of severe accidents, including the potential offsite consequences, the uncertainty within SECPOP90 population estimates is expected to be insignificant. 12 refs., 55 figs., 7 tabs

  8. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  9. Comparative study of approaches to estimate pipe break frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K.; Pulkkinen, U.; Talja, H.; Saarenheimo, A.; Karjalainen-Roikonen, P. [VTT Industrial Systems (Finland)

    2002-12-01

    The report describes a comparative study of two approaches to estimating pipe leak and rupture frequencies for piping. One method is based on a probabilistic fracture mechanics (PFM) model, while the other is based on statistical estimation of rupture frequencies from a large database. In order to be able to compare the approaches and their results, the rupture frequencies of some selected welds have been estimated using both of these methods. This paper highlights the differences in the methods, the input data, the need for and use of plant-specific information, and the need for expert judgement. The study focuses on one specific degradation mechanism, namely intergranular stress corrosion cracking (IGSCC). This is the major degradation mechanism in old stainless steel piping in the BWR environment, and its growth is influenced by material properties, stresses and water chemistry. (au)

  10. Optimizing denominator data estimation through a multimodel approach

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2014-05-01

    Full Text Available To assess the risk of (zoonotic) disease transmission in developing countries, decision makers generally rely on distribution estimates of animals from survey records or projections of historical enumeration results. Given the high cost of large-scale surveys, the sample size is often restricted and the accuracy of estimates is therefore low, especially when high spatial resolution is applied. This study explores possibilities of improving the accuracy of livestock distribution maps without additional samples, using spatial modelling based on regression tree forest models developed from subsets of the Uganda 2008 Livestock Census data and several covariates. The accuracy of these spatial models, as well as the accuracy of an ensemble of a spatial model and a direct estimate, was compared to direct estimates and “true” livestock figures based on the entire dataset. The new approach is shown to effectively increase the livestock estimate accuracy (median relative error decrease of 0.166-0.037 for total sample sizes of 80-1,600 animals, respectively). This outcome suggests that the accuracy levels obtained with direct estimates can indeed be achieved with lower sample sizes using the multimodel approach presented here, indicating a more efficient use of financial resources.

  11. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. We use here, and tailor it to remotely-sensed rainfall estimates, an alternative approach, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
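
    The MEVD itself builds on the full distribution of ordinary events and is not sketched here; the snippet below shows only the classical GEV/annual-maxima baseline that the approach is compared against, applied to a synthetic 30-year sample of annual maximum daily rainfall (all numbers invented).

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maximum daily rainfall" sample (mm); illustrative only
rng = np.random.default_rng(7)
annual_max = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=30, random_state=rng)

# Classical GEV fit to annual maxima (the baseline the MEVD is compared against)
c, loc, scale = genextreme.fit(annual_max)

# Rainfall depth with a 100-year average recurrence interval
T = 100.0
x_T = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"100-year daily rainfall ~ {x_T:.1f} mm")
```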

  12. Approaches to estimating the universe of natural history collections data

    Directory of Open Access Journals (Sweden)

    Arturo H. Ariño

    2010-10-01

    Full Text Available This contribution explores the problem of recognizing and measuring the universe of specimen-level data existing in Natural History Collections around the world, in absence of a complete, world-wide census or register. Estimates of size seem necessary to plan for resource allocation for digitization or data capture, and may help represent how many vouchered primary biodiversity data (in terms of collections, specimens or curatorial units might remain to be mobilized. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections’ compositions, figuring out the occult sector of the distributions. Finally, case studies seek to compare digitized data from collections known to the world to the amount of data known to exist in the collection but not generally available or not digitized. Preliminary estimates range from 1.2 to 2.1 gigaunits, of which a mere 3% at most is currently web-accessible through GBIF’s mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change the picture and possibly help narrow the estimate. In particular, unknown collections not having emerged through literature are the major source of uncertainty.

  13. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method...... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  14. Controller design approach based on linear programming.

    Science.gov (United States)

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.

  15. Temporally stratified sampling programs for estimation of fish impingement

    International Nuclear Information System (INIS)

    Kumar, K.D.; Griffith, J.S.

    1977-01-01

    Impingement monitoring programs often expend valuable and limited resources and fail to provide a dependable estimate of either total annual impingement or those biological and physicochemical factors affecting impingement. In situations where initial monitoring has identified "problem" fish species and the periodicity of their impingement, intensive sampling during periods of high impingement will maximize information obtained. We use data gathered at two nuclear generating facilities in the southeastern United States to discuss techniques of designing such temporally stratified monitoring programs and their benefits and drawbacks. Of the possible temporal patterns in environmental factors within a calendar year, differences among seasons are most influential in the impingement of freshwater fishes in the Southeast. Data on the threadfin shad (Dorosoma petenense) and the role of seasonal temperature changes are utilized as an example to demonstrate ways of most efficiently and accurately estimating impingement of the species
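
    As a sketch of the kind of temporally stratified estimate the record describes, the code below combines per-stratum sample means into an annual total with a finite-population-corrected variance. The seasonal strata, day counts and daily impingement samples are hypothetical; allocating more samples to the high-impingement season mirrors the record's point about intensive sampling.

```python
import numpy as np

# Hypothetical seasonal strata: (days in stratum, sampled daily impingement counts)
strata = {
    "winter": (90, np.array([420, 515, 610, 380, 700])),   # high-impingement season
    "spring": (92, np.array([60, 45, 80, 55])),
    "summer": (92, np.array([10, 5, 15])),
    "autumn": (91, np.array([120, 95, 150, 110])),
}

total, var = 0.0, 0.0
for n_days, counts in strata.values():
    n = counts.size
    total += n_days * counts.mean()
    # Finite-population-corrected variance of the stratum total
    var += n_days ** 2 * (1 - n / n_days) * counts.var(ddof=1) / n

print(f"annual impingement ~ {total:.0f} +/- {1.96 * var ** 0.5:.0f}")
```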

  16. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    Science.gov (United States)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, climate data from the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11 %.
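
    A minimal sketch of this kind of model comparison is shown below, with scikit-learn stand-ins (an SVR, a random forest, and a small multilayer perceptron in place of a deep network) scored by cross-validated R²; the feature matrix is synthetic, not the MODIS/PRISM/GLDAS data used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the predictor matrix (e.g. NDVI, temperature, rainfall, soil moisture)
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = X @ np.array([2.0, -1.0, 0.5, 1.5, 0.0, 0.3]) + rng.normal(0.0, 0.5, 300)

models = {
    "SVM": SVR(kernel="rbf", C=10.0),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "NN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```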

  17. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    Science.gov (United States)

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  18. School District Program Cost Accounting: An Alternative Approach

    Science.gov (United States)

    Hentschke, Guilbert C.

    1975-01-01

    Discusses the value for school districts of a program cost accounting system and examines different approaches to generating program cost data, with particular emphasis on the "cost allocation to program system" (CAPS) and the traditional "transaction-based system." (JG)

  19. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to experimental data. We have developed an environment that distributes each run of the parameter estimation algorithm to a different computational resource. The key feature of the implementation is a relational database that allows the user to swap candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  20. Estimation of mean-reverting oil prices: a laboratory approach

    International Nuclear Information System (INIS)

    Bjerksund, P.; Stensland, G.

    1993-12-01

    Many economic decision support tools developed for the oil industry are based on the future oil price dynamics being represented by some specified stochastic process. To meet the demand for necessary data, much effort is allocated to parameter estimation based on historical oil price time series. The approach in this paper is to implement a complex future oil market model, and to condense the information from the model to parameter estimates for the future oil price. In particular, we use the Lensberg and Rasmussen stochastic dynamic oil market model to generate a large set of possible future oil price paths. Given the hypothesis that the future oil price is generated by a mean-reverting Ornstein-Uhlenbeck process, we obtain parameter estimates by a maximum likelihood procedure. We find a substantial degree of mean-reversion in the future oil price, which in some of our decision examples leads to an almost negligible value of flexibility. 12 refs., 2 figs., 3 tabs
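
    Assuming, as the record does, that the (log) price follows a mean-reverting Ornstein-Uhlenbeck process observed at regular intervals, its parameters can be estimated from the exact AR(1) transition by simple regression, which is equivalent to conditional maximum likelihood for the Gaussian case. The sketch below does this on a synthetic path; all numbers are illustrative, not output of the Lensberg and Rasmussen market model.

```python
import numpy as np

def fit_ou(x: np.ndarray, dt: float):
    """Estimate (kappa, mu, sigma) of dX = kappa*(mu - X) dt + sigma dW
    from a regularly sampled path via the exact AR(1) regression."""
    x0, x1 = x[:-1], x[1:]
    b, a = np.polyfit(x0, x1, 1)            # x1 ~ a + b*x0
    kappa = -np.log(b) / dt
    mu = a / (1.0 - b)
    resid = x1 - (a + b * x0)
    sigma = resid.std(ddof=2) * np.sqrt(2.0 * kappa / (1.0 - b ** 2))
    return kappa, mu, sigma

# Synthetic mean-reverting "oil price" path (log-price), monthly steps
rng = np.random.default_rng(5)
dt, n, kappa, mu, sigma = 1.0 / 12, 600, 0.8, 3.0, 0.3
x = np.empty(n)
x[0] = mu
for i in range(n - 1):
    m = x[i] * np.exp(-kappa * dt) + mu * (1 - np.exp(-kappa * dt))
    s = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
    x[i + 1] = m + s * rng.normal()

print(fit_ou(x, dt))   # should recover roughly (0.8, 3.0, 0.3)
```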

  1. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    OpenAIRE

    Adrian Jinich; Dmitrij Rappoport; Ian Dunn; Benjamin Sanchez-Lengeling; Roberto Olivares-Amaya; Elad Noor; Arren Bar Even; Alán Aspuru-Guzik

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfe...

  2. An approach of parameter estimation for non-synchronous systems

    International Nuclear Information System (INIS)

    Xu Daolin; Lu Fangfang

    2005-01-01

    Synchronization-based parameter estimation is simple and effective but only available for synchronous systems. To overcome this limitation, we propose a technique in which the parameters of an unknown physical process (possibly a non-synchronous system) can be identified from a time series via a minimization procedure based on a synchronization control. The feasibility of this approach is illustrated in several chaotic systems

  3. A remark on empirical estimates in multistage stochastic programming

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta

    2002-01-01

    Roč. 9, č. 17 (2002), s. 31-50 ISSN 1212-074X R&D Projects: GA ČR GA402/01/0539; GA ČR GA402/02/1015; GA ČR GA402/01/0034 Institutional research plan: CEZ:AV0Z1075907 Keywords : multistage stochastic programming * empirical estimates * Markov dependence Subject RIV: BB - Applied Statistics, Operational Research

  4. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  5. A linear programming approach for placement of applicants to academic programs.

    Science.gov (United States)

    Kassa, Biniyam Asmare

    2013-01-01

    This paper reports a linear programming approach for the placement of applicants to study programs, developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man hours as well as the time it takes to announce placement decisions. Compared to the previous manual system, where only one or two placement criteria were considered, the new approach allows the college's management to easily incorporate additional placement criteria, if needed. A comparison of our approach against manually constructed placement decisions, based on actual data for the 2012/13 academic year, suggested that about 93 percent of the placements from our model concur with the actual placement decisions. For the remaining 7 percent of placements, however, the actual placements made by the manual system display inconsistencies when judged against the very criteria intended to guide placement decisions by the college's program management office. Overall, the new approach proves to be a significant improvement over the manual system in terms of the efficiency of the placement process and the quality of placement decisions.
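
    The college's actual placement criteria are not given here, so the sketch below uses a generic LP relaxation of the assignment problem with made-up preference scores and program capacities. Because the constraint matrix is totally unimodular, the LP optimum is integral, which is what makes a plain linear program workable for placement.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical preference scores: 6 applicants x 3 programs (higher = better match)
score = np.array([[9, 4, 1],
                  [8, 7, 2],
                  [3, 9, 5],
                  [2, 8, 6],
                  [5, 3, 9],
                  [4, 2, 8]])
n_app, n_prog = score.shape
capacity = [2, 2, 2]

# Decision variables x[i, j] flattened row-wise; maximize total score.
c = -score.ravel()

# Each applicant is placed exactly once.
A_eq = np.zeros((n_app, n_app * n_prog))
for i in range(n_app):
    A_eq[i, i * n_prog:(i + 1) * n_prog] = 1
b_eq = np.ones(n_app)

# Program capacities.
A_ub = np.zeros((n_prog, n_app * n_prog))
for j in range(n_prog):
    A_ub[j, j::n_prog] = 1
b_ub = capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
assignment = res.x.reshape(n_app, n_prog).round().argmax(axis=1)
print("program assigned to each applicant:", assignment)
```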

  6. Different approaches to estimation of reactor pressure vessel material embrittlement

    Directory of Open Access Journals (Sweden)

    V. M. Revka

    2013-03-01

    Full Text Available The surveillance test data for the nuclear power plant which is under operation in Ukraine have been used to estimate WWER-1000 reactor pressure vessel (RPV) material embrittlement. The beltline materials (base and weld metal) were characterized using Charpy impact and fracture toughness test methods. The fracture toughness test data were analyzed according to the standard ASTM 1921-05. The pre-cracked Charpy specimens were tested to estimate the shift of reference temperature T0 due to neutron irradiation. The maximum shift of reference temperature T0 is 84 °C. A radiation embrittlement rate AF for the RPV material was estimated using fracture toughness test data. In addition, the AF factor based on the Charpy curve shift (ΔTF) has been evaluated. A comparison of the AF values estimated according to the different approaches has shown that there is good agreement between the radiation shift of the Charpy impact and fracture toughness curves for weld metal with high nickel content (1.88 wt %). Therefore, Charpy impact test data can be successfully applied to estimate the fracture toughness curve shift and hence the embrittlement rate. Furthermore, it was revealed that the radiation embrittlement rate for weld metal is higher than predicted by the design relationship. The enhanced embrittlement is most probably related to simultaneously high nickel and high manganese content in the weld metal.

  7. Improved stove programs need robust methods to estimate carbon offsets

    OpenAIRE

    Johnson, Michael; Edwards, Rufus; Masera, Omar

    2010-01-01

    Current standard methods result in significant discrepancies in carbon offset accounting compared to approaches based on representative community based subsamples, which provide more realistic assessments at reasonable cost. Perhaps more critically, neither of the currently approved methods incorporates uncertainties inherent in estimates of emission factors or non-renewable fuel usage (fNRB). Since emission factors and fNRB contribute 25% and 47%, respectively, to the overall uncertainty in ...

  8. A holistic approach to age estimation in refugee children.

    Science.gov (United States)

    Sypek, Scott A; Benson, Jill; Spanner, Kate A; Williams, Jan L

    2016-06-01

    Many refugee children arriving in Australia have an inaccurately documented date of birth (DOB). A medical assessment of a child's age is often requested when there is a concern that their documented DOB is incorrect. This study's aim was to assess the accuracy of a holistic age assessment tool (AAT) in estimating the age of refugee children newly settled in Australia. A holistic AAT that combines medical and non-medical approaches was used to estimate the ages of 60 refugee children with a known DOB. The tool used four components to assess age: an oral narrative, developmental assessment, anthropometric measures and pubertal assessment. Assessors were blinded to the true age of the child. Correlation coefficients for the actual and estimated age were calculated for the tool overall and for its individual components. The correlation coefficient between the actual and estimated age from the AAT was very strong at 0.9802 (boys 0.9748, girls 0.9876). The oral narrative component of the tool performed best (R = 0.9603). Overall, 86.7% of age estimates were within 1 year of the true age. The range of differences was -1.43 to 3.92 years with a standard deviation of 0.77 years (9.24 months). The AAT is a holistic, simple and safe instrument that can be used to estimate age in refugee children with results comparable with radiological methods currently used. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  9. METHODICAL APPROACH TO AN ESTIMATION OF PROFESSIONALISM OF AN EMPLOYEE

    Directory of Open Access Journals (Sweden)

    Татьяна Александровна Коркина

    2013-08-01

    Full Text Available An analysis of definitions of «professionalism», reflecting the different viewpoints of scientists and practitioners, has shown that it is interpreted as a specific property of people who carry out labour activity effectively and reliably in a variety of conditions. The article presents a methodical approach to estimating an employee's professionalism both from the standpoint of the external manifestations of the reliability and effectiveness of the work and from the standpoint of the personal characteristics of the employee that determine the results of his work. This approach includes assessing the employee's level of qualification and motivation for each key job function, as well as the final results of its performance against criteria of efficiency and reliability. The proposed methodological approach to estimating an employee's professionalism makes it possible to identify «bottlenecks» in the structure of his labour functions and to define directions for developing the professional qualities of the worker so as to ensure the required level of reliability and efficiency of the results. DOI: http://dx.doi.org/10.12731/2218-7405-2013-6-11

  10. MODERN APPROACHES TO INTELLECTUAL PROPERTY COST ESTIMATION UNDER CRISIS CONDITIONS FROM CONSUMER QUALITY PRESERVATION VIEWPOINT

    Directory of Open Access Journals (Sweden)

    I. N. Alexandrov

    2011-01-01

    Full Text Available Various intellectual property (IP) estimation approaches and innovations in this field are discussed. Problem situations and «bottlenecks» in the economic mechanism of transforming innovations into useful products and services are identified. The main international IP valuation methods are described, with particular attention paid to the «Quick Inside» program, described as a latest-generation global expert system. IP income- and expense-based valuation methods used in domestic practice are discussed. The possibility of using the Black-Scholes option model to estimate the value of intangible assets is studied.

  11. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    International Nuclear Information System (INIS)

    Imandi, Venkataramana; Chatterjee, Abhijit

    2016-01-01

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.

  12. Estimating Arrhenius parameters using temperature programmed molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Imandi, Venkataramana; Chatterjee, Abhijit, E-mail: abhijit@che.iitb.ac.in [Department of Chemical Engineering, Indian Institute of Technology Bombay, Mumbai 400076 (India)

    2016-07-21

    Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
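
    A toy version of the rate-and-Arrhenius step is sketched below: exponential waiting times are drawn at each temperature (standing in for the waiting times a temperature programmed MD run would collect), the maximum-likelihood rate n/Σt is formed at every temperature, and an Arrhenius line is fitted to ln k versus 1/T. The activation energy, prefactor and temperatures are invented for the example.

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant, eV/K

# Synthetic waiting-time samples at the programmed temperatures (illustrative only)
rng = np.random.default_rng(11)
Ea_true, nu_true = 0.5, 1.0e12            # eV, 1/s
temps = np.array([600.0, 700.0, 800.0, 900.0])
rates = []
for T in temps:
    k = nu_true * np.exp(-Ea_true / (kB * T))
    waits = rng.exponential(1.0 / k, size=800)
    rates.append(waits.size / waits.sum())  # maximum-likelihood rate estimate
rates = np.array(rates)

# Arrhenius fit: ln k = ln nu - Ea / (kB * T)
slope, intercept = np.polyfit(1.0 / temps, np.log(rates), 1)
print(f"Ea ~ {-slope * kB:.3f} eV, prefactor ~ {np.exp(intercept):.2e} 1/s")
```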

  13. Simplified approach for estimating large early release frequency

    International Nuclear Information System (INIS)

    Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.

    1998-04-01

    The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB

  14. Survey of engineering computational methods and experimental programs for estimating supersonic missile aerodynamic characteristics

    Science.gov (United States)

    Sawyer, W. C.; Allen, J. M.; Hernandez, G.; Dillenius, M. F. E.; Hemsch, M. J.

    1982-01-01

    This paper presents a survey of engineering computational methods and experimental programs used for estimating the aerodynamic characteristics of missile configurations. Emphasis is placed on those methods which are suitable for preliminary design of conventional and advanced concepts. An analysis of the technical approaches of the various methods is made in order to assess their suitability to estimate longitudinal and/or lateral-directional characteristics for different classes of missile configurations. Some comparisons between the predicted characteristics and experimental data are presented. These comparisons are made for a large variation in flow conditions and model attitude parameters. The paper also presents known experimental research programs developed for the specific purpose of validating analytical methods and extending the capability of data-base programs.

  15. Cost estimation model for advanced planetary programs, fourth edition

    Science.gov (United States)

    Spadoni, D. J.

    1983-01-01

    The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve upon the confidence in the model's capabilities with an expected accuracy of 20%. The model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.

  16. A new approach for estimating the density of liquids.

    Science.gov (United States)

    Sakagami, T; Fuchizaki, K; Ohara, K

    2016-10-05

    We propose a novel approach with which to estimate the density of liquids. The approach is based on the assumption that the systems would be structurally similar when viewed at around the length scale (inverse wavenumber) of the first peak of the structure factor, unless their thermodynamic states differ significantly. The assumption was implemented via a similarity transformation to the radial distribution function to extract the density from the structure factor of a reference state with a known density. The method was first tested using two model liquids, and could predict the densities within an error of several percent unless the state in question differed significantly from the reference state. The method was then applied to related real liquids, and satisfactory results were obtained for predicted densities. The possibility of applying the method to amorphous materials is discussed.

  17. An efficient algebraic approach to observability analysis in state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)

    2010-03-15

    An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
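
    The record's method is algebraic (row/column transfers on the Jacobian) rather than numerical, but the underlying condition it checks can be illustrated with a plain rank test. The sketch below uses a toy DC state-estimation Jacobian (entries invented) to check observability and to flag critical measurements whose removal breaks it.

```python
import numpy as np

# Toy DC state-estimation Jacobian H: rows = measurements, columns = bus angle
# states with the slack bus removed; entries are illustrative only.
H = np.array([
    [1.0, -1.0,  0.0],   # flow measurement between buses 1 and 2
    [0.0,  1.0, -1.0],   # flow measurement between buses 2 and 3
    [1.0,  0.0,  0.0],   # injection measurement at bus 1 (slack-referenced)
])

rank = np.linalg.matrix_rank(H)
print("observable" if rank == H.shape[1] else f"unobservable (rank {rank} < {H.shape[1]})")

# A measurement is critical if removing its row drops the rank.
for i in range(H.shape[0]):
    if np.linalg.matrix_rank(np.delete(H, i, axis=0)) < rank:
        print(f"measurement {i} is critical")
```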

  18. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    An integrated approach to estimate the storage reliability is proposed. • A non-parametric measure to estimate the number of failures and the reliability at each testing time is presented. • The E-Bayesian method to estimate the failure probability is introduced. • The possible initial failures in storage are introduced. • The non-parametric estimates of failure numbers can be used in the parametric models.
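
    For readers unfamiliar with E-Bayesian estimation, the sketch below shows one common textbook form of it (binomial failure data, a Beta(1, b) prior on the failure probability, and a uniform hyperprior on b); it illustrates the idea only and is not the exact model used in this record.

    ```python
    import numpy as np

    def e_bayes_failure_prob(r, n, c=5.0):
        """One common E-Bayesian estimate of a failure probability (a sketch,
        not the authors' exact model): r failures in n trials, Beta(1, b) prior
        on p, and a Uniform(0, c) hyperprior on b. The Bayes estimate
        (posterior mean) is (r + 1) / (n + 1 + b); averaging it over b gives
        the closed form below."""
        return (r + 1) / c * np.log((n + 1 + c) / (n + 1))

    # Example: 0 observed failures in 50 storage tests
    print(e_bayes_failure_prob(r=0, n=50))
    ```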

  19. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    Science.gov (United States)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

    Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the Maximum Likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.

  20. A Novel Rules Based Approach for Estimating Software Birthmark

    Science.gov (United States)

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  1. A filtering approach to edge preserving MAP estimation of images.

    Science.gov (United States)

    Humphrey, David; Taubman, David

    2011-05-01

    The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image, and a means of determining the segmentation and refining it are described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and Inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
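
    The building block the method refines is classical Wiener filtering; a minimal frequency-domain Wiener deconvolution sketch is given below (generic, not the authors' region-based MAP algorithm), with the noise-to-signal ratio `nsr` treated as an assumed tuning parameter.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Classical frequency-domain Wiener deconvolution. `nsr` is the
        assumed noise-to-signal power ratio."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
        return np.real(np.fft.ifft2(W * G))

    # Tiny synthetic example: blur a random "image" with a 3x3 box PSF, then restore.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    psf = np.ones((3, 3)) / 9.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
    restored = wiener_deconvolve(blurred, psf, nsr=1e-3)
    ```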

  2. Unsteady force estimation using a Lagrangian drift-volume approach

    Science.gov (United States)

    McPhaden, Cameron J.; Rival, David E.

    2018-04-01

    A novel Lagrangian force estimation technique for unsteady fluid flows has been developed, using the concept of a Darwinian drift volume to measure unsteady forces on accelerating bodies. The construct of added mass in viscous flows, calculated from a series of drift volumes, is used to calculate the reaction force on an accelerating circular flat plate, containing highly-separated, vortical flow. The net displacement of fluid contained within the drift volumes is, through Darwin's drift-volume added-mass proposition, equal to the added mass of the plate and provides the reaction force of the fluid on the body. The resultant unsteady force estimates from the proposed technique are shown to align with the measured drag force associated with a rapid acceleration. The critical aspects of understanding unsteady flows, relating to peak and time-resolved forces, often lie within the acceleration phase of the motions, which are well-captured by the drift-volume approach. Therefore, this Lagrangian added-mass estimation technique opens the door to fluid-dynamic analyses in areas that, until now, were inaccessible by conventional means.

  3. Estimation of stature from hand impression: a nonconventional approach.

    Science.gov (United States)

    Ahemad, Nasir; Purkait, Ruma

    2011-05-01

    Stature is used for constructing a biological profile that assists with the identification of an individual. So far, little attention has been paid to the fact that stature can be estimated from hand impressions left at the scene of a crime. The present study, based on practical observations, adopted a new methodology of measuring hand length from the depressed area between the hypothenar and thenar regions on the proximal surface of the palm. Stature and bilateral hand impressions were obtained from 503 men of central India. Seventeen dimensions of the hand were measured on the impression. The linear regression equations derived showed that hand length, followed by palm length, is the best estimator of stature. Testing the practical utility of the suggested method on latent prints of 137 subjects, a statistically insignificant difference was obtained when known stature was compared with stature estimated from the latent prints. The suggested approach points to a strong possibility of its use in crime scene investigation, provided that validation studies in real-life scenarios are performed. © 2011 American Academy of Forensic Sciences.
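
    The estimation step itself is ordinary linear regression of stature on a hand dimension; the sketch below illustrates it with purely hypothetical numbers, not the regression coefficients reported in the study.

    ```python
    import numpy as np

    # Hypothetical training data: hand lengths (cm) measured on impressions
    # and the corresponding statures (cm). Values are illustrative only.
    hand_length = np.array([17.5, 18.2, 18.9, 19.4, 20.1, 20.8])
    stature     = np.array([163.0, 166.5, 170.2, 172.8, 176.1, 179.0])

    # Fit the linear regression stature = a * hand_length + b
    a, b = np.polyfit(hand_length, stature, deg=1)

    def estimate_stature(hand_length_cm):
        """Point estimate of stature from a hand-impression length."""
        return a * hand_length_cm + b

    print(round(estimate_stature(19.0), 1))
    ```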

  4. Sensitivity of Technical Efficiency Estimates to Estimation Methods: An Empirical Comparison of Parametric and Non-Parametric Approaches

    OpenAIRE

    de-Graft Acquah, Henry

    2014-01-01

    This paper highlights the sensitivity of technical efficiency estimates to the estimation approach using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey’s test sugge...
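
    As a small illustration of one of the parametric techniques compared here, the sketch below implements a basic Corrected Ordinary Least Squares (COLS) efficiency score; the data are hypothetical and the DEA and SFA estimators are not reproduced.

    ```python
    import numpy as np

    def cols_efficiency(log_inputs, log_output):
        """Corrected Ordinary Least Squares (COLS) sketch: fit an OLS
        Cobb-Douglas frontier, shift the intercept up by the largest residual,
        and score each firm by exp(residual - max residual), a value in (0, 1]."""
        X = np.column_stack([np.ones(len(log_output)), log_inputs])
        beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
        residuals = log_output - X @ beta
        return np.exp(residuals - residuals.max())

    # Hypothetical data: 5 firms, one input (log scale)
    log_x = np.array([[1.0], [1.2], [1.5], [1.7], [2.0]])
    log_y = np.array([0.8, 1.1, 1.2, 1.6, 1.7])
    print(cols_efficiency(log_x, log_y))
    ```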

  5. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    Science.gov (United States)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approx. 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approx. 79% of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
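
    The core of a particle batch smoother is a single batch weight update; the sketch below shows that step under a simple Gaussian observation-error assumption, with all numbers hypothetical rather than taken from the study.

    ```python
    import numpy as np

    def particle_batch_smoother(pred_obs, obs, obs_err):
        """Minimal particle batch smoother (PBS) weight update.
        pred_obs: (n_particles, n_times) predicted fSCA for each prior replicate
        obs:      (n_times,) observed fSCA
        obs_err:  observation error standard deviation
        Returns normalized particle weights from a Gaussian likelihood of the
        whole observation batch."""
        ll = -0.5 * np.sum(((pred_obs - obs) / obs_err) ** 2, axis=1)
        w = np.exp(ll - ll.max())          # stabilize before normalizing
        return w / w.sum()

    # Hypothetical use: posterior SWE is the weight-averaged prior SWE ensemble
    rng = np.random.default_rng(1)
    pred_fsca = np.clip(rng.normal(0.6, 0.2, size=(100, 10)), 0, 1)
    obs_fsca = np.clip(rng.normal(0.7, 0.05, size=10), 0, 1)
    weights = particle_batch_smoother(pred_fsca, obs_fsca, obs_err=0.1)
    prior_swe = rng.normal(0.5, 0.15, size=100)   # meters, illustrative
    posterior_swe = np.sum(weights * prior_swe)
    print(posterior_swe)
    ```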

  6. An MDE Approach for Modular Program Analyses

    NARCIS (Netherlands)

    Yildiz, Bugra Mehmet; Bockisch, Christoph; Aksit, Mehmet; Rensink, Arend

    Program analyses are an important tool to check if a system fulfills its specification. A typical implementation strategy for program analyses is to use an imperative, general-purpose language like Java, and access the program to be analyzed through libraries that offer an API for reading, writing

  7. Cognitive agent programming : A semantic approach

    NARCIS (Netherlands)

    Riemsdijk, M.B. van

    2006-01-01

    In this thesis we are concerned with the design and investigation of dedicated programming languages for programming agents. We focus in particular on programming languages for rational agents, i.e., flexibly behaving computing entities that are able to make "good" decisions about what to do. An

  8. Latent degradation indicators estimation and prediction: A Monte Carlo approach

    Science.gov (United States)

    Zhou, Yifan; Sun, Yong; Mathew, Joseph; Wolff, Rodney; Ma, Lin

    2011-01-01

    Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, and the crack depth on a gear) which directly relate to a failure mechanism; and (2) indirect indicators (e.g. the indicators extracted from vibration signals and oil analysis data) which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators by using indirect indicators. However, existing state space models to estimate direct indicators largely depend on assumptions such as, discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires fixed inspection intervals. The discrete state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with nonlinear and irreversible degradation processes in most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. These algorithms are evaluated for performance using numerical simulations through MATLAB. The result shows that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. During this application, the new state space model shows a better fitness result than the state space model with linear and Gaussian assumption.
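
    A minimal Monte Carlo (bootstrap particle filter) sketch of the general idea is shown below; the monotone state and observation equations are illustrative stand-ins, not the gearbox model identified in the paper.

    ```python
    import numpy as np

    def bootstrap_particle_filter(obs, n_particles=1000, seed=0):
        """Sketch of a Monte Carlo (bootstrap particle filter) estimator of a
        latent, monotonically increasing degradation indicator from an indirect
        measurement. The nonlinear, non-Gaussian model below is illustrative:
            state:       x_t = x_{t-1} + |N(0.05, 0.02)|   (irreversible growth)
            observation: y_t = x_t + N(0, 0.1)             (indirect indicator)"""
        rng = np.random.default_rng(seed)
        particles = np.zeros(n_particles)
        estimates = []
        for y in obs:
            particles = particles + np.abs(rng.normal(0.05, 0.02, n_particles))
            weights = np.exp(-0.5 * ((y - particles) / 0.1) ** 2)
            weights /= weights.sum()
            estimates.append(np.sum(weights * particles))
            # multinomial resampling
            particles = rng.choice(particles, size=n_particles, p=weights)
        return np.array(estimates)

    # Hypothetical indirect-indicator series
    y_obs = np.array([0.05, 0.12, 0.18, 0.22, 0.30, 0.33])
    print(bootstrap_particle_filter(y_obs))
    ```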

  9. Which Introductory Programming Approach Is Most Suitable for Students: Procedural or Visual Programming?

    Science.gov (United States)

    Eid, Chaker; Millham, Richard

    2012-01-01

    In this paper, we discuss the visual programming approach to teaching introductory programming courses and then compare this approach with that of procedural programming. The involved cognitive levels of students, as beginning students are introduced to different types of programming concepts, are correlated to the learning processes of…

  10. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    Science.gov (United States)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  11. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    This work is about estimating human age automatically through analysis of facial images, a task with many real-world applications. Due to rapid advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation from face images is currently a dominant research topic, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate the exact age, this system estimates a certain range of ages. Four sets of classifications have been used to assign a person's data to one of the different age groups. The distinctive aspect of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programing (GEP), to estimate the age and then compare the results. New methodologies like Gene Expression Programing (GEP) have been explored here and significant results were found. The dataset has been developed to provide more efficient results through improved preprocessing methods. The proposed approach has been developed, trained, and tested using both methods. A public dataset, FG-NET, was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.

  12. Airline loyalty (programs) across borders : A geographic discontinuity approach

    NARCIS (Netherlands)

    de Jong, Gerben; Behrens, Christiaan; van Ommeren, Jos

    2018-01-01

    We analyze brand loyalty advantages of national airlines in their domestic countries using geocoded data from a major international frequent flier program. We employ a geographic discontinuity design that estimates discontinuities in program activity at the national borders of the program's

  13. 75 FR 16120 - Notice of Issuance of Exposure Draft on Accrual Estimates for Grant Programs

    Science.gov (United States)

    2010-03-31

    ... FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD Notice of Issuance of Exposure Draft on Accrual Estimates for Grant Programs AGENCY: Federal Accounting Standards Advisory Board. ACTION: Notice. Board... Accounting Technical Release entitled Accrual Estimates for Grant Programs. The proposed Technical Release...

  14. Estimating plant root water uptake using a neural network approach

    DEFF Research Database (Denmark)

    Qiao, D M; Shi, H B; Pang, H B

    2010-01-01

    Water uptake by plant roots is an important process in the hydrological cycle, not only for plant growth but also for the role it plays in shaping microbial community and bringing in physical and biochemical changes to soils. The ability of roots to extract water is determined by combined soil and plant characteristics, and how to model it has been of interest for many years. Most macroscopic models for water uptake operate at soil profile scale under the assumption that the uptake rate depends on root density and soil moisture. Whilst proved appropriate, these models need spatio-temporal root … but has not yet been addressed. This paper presents and tests such an approach. The method is based on a neural network model, estimating the water uptake using different types of data that are easy to measure in the field. Sunflower grown in a sandy loam subjected to water stress and salinity was taken …

  15. Static models, recursive estimators and the zero-variance approach

    KAUST Repository

    Rubino, Gerardo

    2016-01-07

    When evaluating dependability aspects of complex systems, most models belong to the static world, where time is not an explicit variable. These models suffer from the same problems as dynamic ones (stochastic processes), such as the frequent combinatorial explosion of the state spaces. In the Monte Carlo domain, one of the most significant difficulties is the rare event situation. In this talk, we describe this context and a recent technique that appears to be at the top performance level in the area, where we combined ideas that lead to very fast estimation procedures with another approach called zero-variance approximation. Both ideas produced a very efficient method that has the right theoretical property concerning robustness, the Bounded Relative Error one. Some examples illustrate the results.

  16. A Novel Approach for Collaborative Pair Programming

    Science.gov (United States)

    Goel, Sanjay; Kathuria, Vanshi

    2010-01-01

    The majority of an engineer's time in the software industry is spent working with other programmers. Agile methods of software development like eXtreme Programming strongly rely upon practices like daily meetings and pair programming. Hence, the need to learn the skill of working collaboratively is of primary importance for software developers.…

  17. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
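
    A simpler, widely used relative of the proposed estimator is the diagonally loaded (robust) Capon beamformer; the sketch below shows that baseline, with the loading level and array geometry chosen arbitrarily for illustration.

    ```python
    import numpy as np

    def robust_capon_weights(R, steering, loading=1e-2):
        """Diagonally loaded Capon beamformer (a simpler stand-in for the
        regularized least-squares estimator in the paper):
        w = (R + loading*I)^{-1} a / (a^H (R + loading*I)^{-1} a)."""
        n = R.shape[0]
        Rinv_a = np.linalg.solve(R + loading * np.eye(n), steering)
        return Rinv_a / (steering.conj() @ Rinv_a)

    # Hypothetical 8-element uniform linear array, half-wavelength spacing
    n, theta = 8, np.deg2rad(20.0)
    a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))      # steering vector
    rng = np.random.default_rng(0)
    snapshots = rng.normal(size=(n, 200)) + 1j * rng.normal(size=(n, 200))
    R = snapshots @ snapshots.conj().T / 200                   # sample covariance
    w = robust_capon_weights(R, a)
    output = w.conj() @ snapshots                              # beamformer output
    ```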

  18. A new approach to Ozone Depletion Potential (ODP) estimation

    Science.gov (United States)

    Portmann, R. W.; Daniel, J. S.; Yu, P.

    2017-12-01

    The Ozone Depletion Potential (ODP) is given by the time-integrated global ozone loss of an ozone depleting substance (ODS) relative to a reference ODS (usually CFC-11). The ODP is used by the Montreal Protocol (and subsequent amendments) to inform policy decisions on the production of ODSs. Since the early 1990s, ODPs have usually been estimated using an approximate formalism that utilizes the lifetime and the fractional release factor of the ODS. This has the advantage that it can utilize measured concentrations of the ODSs to estimate their fractional release factors. However, there is a strong correlation between the stratospheric lifetimes and fractional release factors of ODSs, and this can introduce uncertainties into ODP calculations when the terms are estimated independently. Instead, we show that the ODP is proportional to the average global ozone loss per equivalent chlorine molecule released in the stratosphere by the ODS loss process (which we call the Γ factor) and, importantly, this ratio varies only over a relatively small range (0.3-1.5) for ODSs with stratospheric lifetimes of 20 to more than 1,000 years. The Γ factor varies smoothly with stratospheric lifetime for ODSs with loss processes dominated by photolysis and is larger for long-lived species, while stratospheric OH loss processes produce relatively small Γs that are nearly independent of stratospheric lifetime. The fractional release approach does not accurately capture these relationships. We propose a new formulation that takes advantage of this smooth variation by parameterizing the Γ factor using ozone changes computed with the chemical climate model CESM-WACCM and the NOCAR two-dimensional model. We show that while the absolute Γ's vary between the WACCM and NOCAR models, much of the difference is removed for the Γ/ΓCFC-11 ratio that is used in the ODP formula. This parameterized method simplifies the computation of ODPs while providing enhanced accuracy compared to the
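
    For context, the sketch below encodes the conventional semi-empirical ODP formula that this abstract proposes to modify (the paper replaces the fractional-release ratio with its Γ/Γ_CFC-11 ratio); the default lifetime, molar mass, and bromine-efficiency values are approximate and the example numbers are purely illustrative.

    ```python
    def semi_empirical_odp(tau, m, n_cl, n_br, release_ratio, alpha=60.0,
                           tau_cfc11=52.0, m_cfc11=137.37):
        """Conventional semi-empirical ODP formula (baseline only, not the
        paper's new parameterization):
            ODP = release_ratio * (tau / tau_CFC11) * (M_CFC11 / M)
                  * (n_Cl + alpha * n_Br) / 3
        release_ratio is f_i / f_CFC-11 in the classical form, alpha is an
        approximate bromine efficiency factor, and 3 is the number of chlorine
        atoms in CFC-11. Default lifetime and molar mass values are approximate."""
        return (release_ratio * (tau / tau_cfc11) * (m_cfc11 / m)
                * (n_cl + alpha * n_br) / 3.0)

    # Illustrative numbers for a hypothetical brominated species (not a real compound)
    print(semi_empirical_odp(tau=65.0, m=150.0, n_cl=0, n_br=1, release_ratio=1.2))
    ```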

  19. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    Science.gov (United States)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.

  20. Estimating intervention effects of prevention programs: accounting for noncompliance.

    Science.gov (United States)

    Stuart, Elizabeth A; Perry, Deborah F; Le, Huynh-Nhu; Ialongo, Nicholas S

    2008-12-01

    Individuals not fully complying with their assigned treatments is a common problem encountered in randomized evaluations of behavioral interventions. Treatment group members rarely attend all sessions or do all "required" activities; control group members sometimes find ways to participate in aspects of the intervention. As a result, there is often interest in estimating both the effect of being assigned to participate in the intervention, as well as the impact of actually participating and doing all of the required activities. Methods known broadly as "complier average causal effects" (CACE) or "instrumental variables" (IV) methods have been developed to estimate this latter effect, but they are more commonly applied in medical and treatment research. Since the use of these statistical techniques in prevention trials has been less widespread, many prevention scientists may not be familiar with the underlying assumptions and limitations of CACE and IV approaches. This paper provides an introduction to these methods, described in the context of randomized controlled trials of two preventive interventions: one for perinatal depression among at-risk women and the other for aggressive disruptive behavior in children. Through these case studies, the underlying assumptions and limitations of these methods are highlighted.
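
    The simplest CACE/IV estimator is the Wald ratio of the intention-to-treat effect to the assignment effect on participation; the sketch below illustrates it on simulated data under the usual assumptions (randomized assignment, no defiers, exclusion restriction).

    ```python
    import numpy as np

    def cace_wald(assigned, participated, outcome):
        """Wald (instrumental variables) estimate of the complier average causal
        effect: the intention-to-treat effect on the outcome divided by the
        effect of assignment on participation."""
        assigned = np.asarray(assigned, dtype=bool)
        participated = np.asarray(participated, dtype=float)
        outcome = np.asarray(outcome, dtype=float)
        itt_outcome = outcome[assigned].mean() - outcome[~assigned].mean()
        itt_uptake = participated[assigned].mean() - participated[~assigned].mean()
        return itt_outcome / itt_uptake

    # Hypothetical trial: 70% of those assigned to treatment actually participate
    rng = np.random.default_rng(2)
    z = rng.integers(0, 2, 500)                   # random assignment
    d = (z == 1) & (rng.random(500) < 0.7)        # participation
    y = 1.0 * d + rng.normal(0, 1, 500)           # true effect of participation = 1.0
    print(cace_wald(z, d, y))
    ```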

  1. The cost of crime to society: new crime-specific estimates for policy and program evaluation.

    Science.gov (United States)

    McCollister, Kathryn E; French, Michael T; Fang, Hai

    2010-04-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  3. Multivariate Location Estimation Using Extension of $R$-Estimates Through $U$-Statistics Type Approach

    OpenAIRE

    Chaudhuri, Probal

    1992-01-01

    We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...

  4. PSHED: a simplified approach to developing parallel programs

    International Nuclear Information System (INIS)

    Mahajan, S.M.; Ramesh, K.; Rajesh, K.; Somani, A.; Goel, M.

    1992-01-01

    This paper presents a simplified approach in the form of a tree-structured computational model for parallel application programs. An attempt is made to provide a standard user interface to execute programs on BARC Parallel Processing System (BPPS), a scalable distributed memory multiprocessor. The interface package called PSHED provides a basic framework for representing and executing parallel programs on different parallel architectures. The PSHED package incorporates concepts from a broad range of previous research in programming environments and parallel computations. (author). 6 refs

  5. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  6. A Practical Approach to Program Evaluation.

    Science.gov (United States)

    Lee, Linda J.; Sampson, John F.

    1990-01-01

    The Research and Evaluation Support Services Unit of the New South Wales (Australia) Department of Education conducts program evaluations to provide information to senior management for decision making. The 10-step system used is described, which provides for planning, evaluation, and staff development. (TJH)

  7. Fuzzy linear programming approach for solving transportation

    Indian Academy of Sciences (India)

    Transportation problem (TP) is an important network structured linear programming problem that arises in several contexts and has deservedly received a great deal of attention in the literature. The central concept in this problem is to find the least total transportation cost of a commodity in order to satisfy demands at ...

  8. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-01

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic, and 2) sparse channels is considered. The algorithms estimate the impulse response for each channel observed

  9. Low-income DSM Programs: Methodological approach to determining the cost-effectiveness of coordinated partnerships

    Energy Technology Data Exchange (ETDEWEB)

    Brown, M.A.; Hill, L.J.

    1994-05-01

    As governments at all levels become increasingly budget-conscious, expenditures on low-income, demand-side management (DSM) programs are being evaluated more on the basis of efficiency at the expense of equity considerations. Budgetary pressures have also caused government agencies to emphasize resource leveraging and coordination with electric and gas utilities as a means of sharing the expenses of low-income programs. The increased involvement of electric and gas utilities in coordinated low-income DSM programs, in turn, has resulted in greater emphasis on estimating program cost-effectiveness. The objective of this study is to develop a methodological approach to estimate the cost-effectiveness of coordinated low-income DSM programs, given the special features that distinguish these programs from other utility-operated DSM programs. The general approach used in this study was to (1) select six coordinated low-income DSM programs from among those currently operating across the United States, (2) examine the main features of these programs, and (3) determine the conceptual and pragmatic problems associated with estimating their cost-effectiveness. Three types of coordination between government and utility cosponsors were identified. At one extreme, local agencies operate "parallel" programs, each of which is fully funded by a single sponsor (e.g., one funded by the U.S. Department of Energy and the other by a utility). At the other extreme are highly "coupled" programs that capitalize on the unique capabilities and resources offered by each cosponsor. In these programs, agencies employ a combination of utility and government funds to deliver weatherization services as part of an integrated effort. In between are "supplemental" programs that utilize resources to supplement the agency's government-funded weatherization, with no changes to the operation of that program.

  10. Avoided cost estimation and post-reform funding allocation for California's energy efficiency programs

    International Nuclear Information System (INIS)

    Baskette, C.; Horii, B.; Price, S.; Kollman, E.

    2006-01-01

    This paper summarizes the first comprehensive estimation of California's electricity avoided costs since the state reformed its electricity market. It describes avoided cost estimates that vary by time and location, thus facilitating targeted design, funding, and marketing of demand-side management (DSM) and energy efficiency (EE) programs that could not have occurred under the previous methodology of system average cost estimation. The approach, data, and results reflect two important market structure changes: (a) wholesale spot and forward markets now supply electricity commodities to load serving entities; and (b) the evolution of an emissions market that internalizes and prices some of the externalities of electricity generation. The paper also introduces the multiplier effect of a price reduction due to DSM/EE implementation on electricity bills of all consumers. It affirms that area- and time-specific avoided cost estimates can improve the allocation of the state's public funding for DSM/EE programs, a finding that could benefit other parts of North America (e.g. Ontario and New York), which have undergone electricity deregulation. (author)

  11. MoisturEC: A New R Program for Moisture Content Estimation from Electrical Conductivity Data.

    Science.gov (United States)

    Terry, Neil; Day-Lewis, Frederick D; Werkema, Dale; Lane, John W

    2018-03-06

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data-analysis tools are needed to "translate" geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user-friendly tools are required to fully capitalize on the potential of geophysical information for soil-moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two- and three-dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
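
    The petrophysical translation at the heart of such a tool is Archie's law; the sketch below inverts it for moisture content, with illustrative (not calibrated) parameter values.

    ```python
    import numpy as np

    def moisture_from_ec(ec_bulk, ec_water, porosity, m=1.5, n=2.0):
        """Convert bulk electrical conductivity to volumetric moisture content
        using Archie's law (a sketch of the kind of translation MoisturEC
        performs; m and n values here are illustrative):
            EC_bulk = EC_water * porosity**m * saturation**n
        so  saturation = (EC_bulk / (EC_water * porosity**m))**(1/n)
        and moisture   = porosity * saturation."""
        saturation = (ec_bulk / (ec_water * porosity ** m)) ** (1.0 / n)
        return porosity * np.clip(saturation, 0.0, 1.0)

    # Example: conductivities in S/m for a sandy soil
    print(moisture_from_ec(ec_bulk=0.02, ec_water=0.1, porosity=0.35))
    ```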

  13. Sediment Analysis Using a Structured Programming Approach

    Directory of Open Access Journals (Sweden)

    Daniela Arias-Madrid

    2012-12-01

    This paper presents an algorithm designed for the analysis of a sedimentary sample of unconsolidated material, which seeks to identify quickly the main features of a sediment and thus classify it fast and efficiently. For this purpose, the weight of each particle-size class is entered into the program, and the method of moments, which is based on four equations representing the mean, standard deviation, skewness and kurtosis, yields the attributes of the sample in a few seconds. With the program these calculations are performed effectively and more accurately, also providing explanations of features such as grain size, sorting, symmetry and origin, which helps to improve the study of sediments and, in general, the study of sedimentary rocks.
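
    The method of moments referred to above reduces to four weighted moments of the grain-size distribution; a minimal sketch with hypothetical sieve data follows.

    ```python
    import numpy as np

    def grain_size_moments(phi_midpoints, weights):
        """Method-of-moments grain-size statistics: weighted mean, standard
        deviation (sorting), skewness and kurtosis of the grain-size
        distribution in phi units."""
        w = np.asarray(weights, dtype=float) / np.sum(weights)
        phi = np.asarray(phi_midpoints, dtype=float)
        mean = np.sum(w * phi)
        std = np.sqrt(np.sum(w * (phi - mean) ** 2))
        skew = np.sum(w * (phi - mean) ** 3) / std ** 3
        kurt = np.sum(w * (phi - mean) ** 4) / std ** 4
        return mean, std, skew, kurt

    # Hypothetical sieve data: class midpoints (phi) and retained weights (g)
    print(grain_size_moments([-1, 0, 1, 2, 3, 4], [5, 12, 30, 28, 18, 7]))
    ```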

  14. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    Science.gov (United States)

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
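
    A minimal version of the fitting step can be written with a generic nonlinear least-squares routine; the sketch below fixes the temperature at death for simplicity (the paper's full procedure also estimates it) and uses synthetic readings, so all parameter values are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T0 = 37.2   # assumed temperature at death (deg C); kept fixed only to
                # simplify this sketch

    def cooling_model(t, t_death, t_ambient, p, z):
        # Marshall-Hoare double-exponential cooling curve, with t measured on
        # the clock of the readings and the time of death t_death as a fitted
        # parameter; p and z are the two (negative) cooling rate constants.
        s = t - t_death                          # hours elapsed since death
        return t_ambient + (T0 - t_ambient) * (
            p * np.exp(z * s) - z * np.exp(p * s)) / (p - z)

    # Synthetic readings: death 1 h before the first reading, ambient 20 deg C
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0.0, 3.0, 13)            # hours since first reading
    temps = cooling_model(t_obs, -1.0, 20.0, -1.8, -0.35) + rng.normal(0, 0.05, 13)

    # Nonlinear least squares; in practice the initial guess would come from a
    # systematic procedure such as the one described in the paper.
    params, _ = curve_fit(cooling_model, t_obs, temps,
                          p0=[-0.5, 18.0, -1.5, -0.3], maxfev=20000)
    print("estimated hours between death and first reading:", -params[0])
    ```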

  15. Nuclear waste repository characterization: a spatial estimation/identification approach

    International Nuclear Information System (INIS)

    Candy, J.V.; Mao, N.

    1981-03-01

    This paper considers the application of spatial estimation techniques to a groundwater aquifer and geological borehole data. It investigates the adequacy of these techniques to reliably develop contour maps from various data sets. The practice of spatial estimation is discussed and the estimator is then applied to a groundwater aquifer system and a deep geological formation. It is shown that the various statistical models must first be identified from the data and evaluated before reasonable results can be expected

  16. A fuel-based approach to estimating motor vehicle exhaust emissions

    Science.gov (United States)

    Singer, Brett Craig

    in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.

  17. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings were z is completely or only partially known....

  18. Estimating High-Frequency Based (Co-) Variances: A Unified Approach

    DEFF Research Database (Denmark)

    Voev, Valeri; Nolte, Ingmar

    We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent...... and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling...

  19. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  20. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  1. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Full text: Bayesian inference is used to determine the Air Kerma Rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the Air Kerma Rate estimation but also for the gamma spectra, which are unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of such a detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the Air Kerma Rate, the WinBUGS program was used. WinBUGS is a dedicated software package for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of such calculations are shown and compared with other non-Bayesian approaches, such as the Scofield-Gold iterative method and the Maximum Entropy Method

  2. An Approach for Solving Linear Fractional Programming Problems

    OpenAIRE

    Andrew Oyakhobo Odior

    2012-01-01

    Linear fractional programming problems are useful tools in production planning, financial and corporate planning, health care and hospital planning and as such have attracted considerable research interest. The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebr...

  3. An Improved Dynamic Programming Decomposition Approach for Network Revenue Management

    OpenAIRE

    Dan Zhang

    2011-01-01

    We consider a nonlinear nonseparable functional approximation to the value function of a dynamic programming formulation for the network revenue management (RM) problem with customer choice. We propose a simultaneous dynamic programming approach to solve the resulting problem, which is a nonlinear optimization problem with nonlinear constraints. We show that our approximation leads to a tighter upper bound on optimal expected revenue than some known bounds in the literature. Our approach can ...

  4. SNP based heritability estimation using a Bayesian approach

    DEFF Research Database (Denmark)

    Krag, Kristian; Janss, Luc; Mahdi Shariati, Mohammad

    2013-01-01

    . Differences in family structure were in general not found to influence the estimation of the heritability. For the sample sizes used in this study, a 10-fold increase of SNP density did not improve precision estimates compared with set-ups with a less dense distribution of SNPs. The methods used in this study...

  5. A cutting- plane approach for semi- infinite mathematical programming

    African Journals Online (AJOL)

    Many situations ranging from industrial to social via economic and environmental problems may be cast into a Semi-infinite mathematical program. In this paper, the cutting-plane approach which lends itself better for standard non-linear programs is exploited with good reasons for grappling with linear, convex and ...

  6. Estimating economic losses from earthquakes using an empirical approach

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.

    2013-01-01

    We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
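
    Schematically, an empirical loss model of this kind combines intensity-binned economic exposure with an intensity-dependent loss ratio; the sketch below uses a lognormal loss-ratio form with made-up coefficients, not the calibrated PAGER parameters.

    ```python
    import numpy as np
    from scipy.stats import norm

    def economic_loss(population_by_mmi, gdp_per_capita, alpha, theta=8.0, beta=0.3):
        """Sketch of an empirical economic-loss calculation: exposure in each
        shaking-intensity (MMI) bin is population times per-capita GDP times a
        country-specific multiplier alpha, and the loss ratio is a lognormal
        function of MMI with parameters theta and beta. The bins are assumed to
        run from MMI 5.0 to 10.0 in 0.5 steps (11 bins); all numbers here are
        illustrative."""
        mmi = np.arange(5.0, 10.5, 0.5)
        loss_ratio = norm.cdf(np.log(mmi / theta) / beta)
        exposure = np.asarray(population_by_mmi) * gdp_per_capita * alpha
        return float(np.sum(exposure * loss_ratio))

    # Hypothetical exposed population per half-MMI bin from MMI 5.0 to 10.0
    pop = [2.0e6, 1.5e6, 9.0e5, 5.0e5, 2.5e5, 1.0e5, 4.0e4, 1.5e4, 5.0e3, 1.0e3, 2.0e2]
    print(f"estimated loss: ${economic_loss(pop, gdp_per_capita=9000, alpha=2.5):,.0f}")
    ```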

  7. Neurolinguistic Programming: A Systematic Approach to Change

    Science.gov (United States)

    Steinbach, A. M.

    1984-01-01

    Neurolinguistic programming (NLP) integrates advances in cybernetics, psychophysiology, linguistics, and information services. It has been used in business, education, law, medicine and psychotherapy to alter people's responses to stimuli, so they are better able to regulate their environment and themselves. There are five steps to an effective NLP interaction. They include 1. establishing rapport; the therapist must match his verbal and non-verbal behaviors to the patient's, 2. gathering information about the patient's present problem and goals by noting his verbal patterns and non-verbal responses, 3. considering the impact that achieving the patient's goals will have on him, his work, family and friends, and retaining any positive aspects of his current situation, 4. helping the patient achieve his goals by using specific techniques to alter his responses to various stimuli, and 5. ensuring the altered responses achieved in therapy are integrated into the patient's daily life. NLP has been used to help patients with medical problems ranging from purely psychological to complex organic ones. PMID:21283502

  9. Blood velocity estimation using ultrasound and spectral iterative adaptive approaches

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2011-01-01

    This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30 …

  10. A new approach for estimation of component failure rate

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Kljenak, I.

    1999-01-01

    In the paper, a formal method for component failure rate estimation is described, intended for components for which no specific numerical data necessary for probabilistic estimation exist. The framework of the method is the Bayesian updating procedure. A prior distribution is selected from a generic database, whereas the likelihood distribution is assessed from specific data on the component state using principles of fuzzy logic theory. With the proposed method, the component failure rate estimation is based on a much larger quantity of information compared to presently used classical methods. (author)
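
    The Bayesian-updating backbone of such a method can be illustrated with a standard conjugate gamma-Poisson update (the fuzzy construction of the likelihood is not reproduced here):

    ```python
    def update_failure_rate(prior_alpha, prior_beta, failures, exposure_hours):
        """Conjugate Bayesian update of a component failure rate: a
        Gamma(alpha, beta) prior combined with Poisson failure data gives a
        Gamma(alpha + failures, beta + exposure_hours) posterior."""
        post_alpha = prior_alpha + failures
        post_beta = prior_beta + exposure_hours
        return post_alpha, post_beta, post_alpha / post_beta   # posterior mean rate

    # Generic prior (e.g., from a generic database) updated with plant-specific data
    print(update_failure_rate(prior_alpha=0.5, prior_beta=1.0e5,
                              failures=1, exposure_hours=2.0e5))
    ```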

  11. Appendix E: Wind Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  12. Appendix B: Hydrogen, Fuel Cells, and Infrastructure Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  13. Appendix G: Building Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  14. Appendix J: Weatherization and Intergovernmental Program (WIP) inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  15. Estimating the Population-Level Effectiveness of Vaccination Programs in the Netherlands.

    NARCIS (Netherlands)

    van Wijhe, Maarten; McDonald, Scott A; de Melker, Hester E; Postma, Maarten J; Wallinga, Jacco

    There are few estimates of the effectiveness of long-standing vaccination programs in developed countries. To fill this gap, we investigate the direct and indirect effectiveness of childhood vaccination programs on mortality at the population level in the Netherlands.

  16. Appendix F: FreedomCAR and Vehicle Technologies Program inputs for FY 2008 benefits estimates

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2009-01-18

    Document summarizes the results of the benefits analysis of EERE’s programs, as described in the FY 2008 Budget Request. EERE estimates benefits for its overall portfolio and nine Research, Development, Demonstration, and Deployment (RD3) programs.

  17. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    Science.gov (United States)

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  18. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and alternatives

    Science.gov (United States)

    William L. Thompson

    2003-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...

  19. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  20. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  1. Multifaceted Approach to Designing an Online Masters Program.

    Science.gov (United States)

    McNeil, Sara G.; Chernish, William N.; DeFranco, Agnes L.

    At the Conrad N. Hilton College of Hotel and Restaurant Management at the University of Houston (Texas), the faculty and administrators made a conscious effort to take a broad, extensive approach to designing and implementing a fully online masters program. This approach was centered on a comprehensive needs assessment model and sought input from…

  2. Mathematical-programming approaches to test item pool design

    NARCIS (Netherlands)

    Veldkamp, Bernard P.; van der Linden, Willem J.; Ariel, A.

    2002-01-01

    This paper presents an approach to item pool design that has the potential to improve on the quality of current item pools in educational and psychological testing and hence to increase both measurement precision and validity. The approach consists of the application of mathematical programming

  3. An approach for solving linear fractional programming problems ...

    African Journals Online (AJOL)

    The paper presents a new approach for solving a fractional linear programming problem in which the objective function is a linear fractional function, while the constraint functions are in the form of linear inequalities. The approach adopted is based mainly upon solving the problem algebraically using the concept of duality ...

  4. An approach to estimate total dissolved solids in groundwater using

    African Journals Online (AJOL)

    resistivities of the aquifer delineated were subsequently used to estimate TDS in groundwater which was correlated with those ... the concentrations of these chemical constituents in the ..... TDS determined by water analysis varied between 17.

  5. Quantum molecular dynamics approach to estimate spallation yield ...

    Indian Academy of Sciences (India)

    Consequently, the need for reliable data to design and construct spallation neutron sources has prompted ... A major disadvantage of the QMD code .... have estimated the average neutron multiplicities per primary reaction and kinetic energy.

  6. Beginning Java programming the object-oriented approach

    CERN Document Server

    Baesens, Bart; vanden Broucke, Seppe

    2015-01-01

    A comprehensive Java guide, with samples, exercises, case studies, and step-by-step instruction Beginning Java Programming: The Object Oriented Approach is a straightforward resource for getting started with one of the world's most enduringly popular programming languages. Based on classes taught by the authors, the book starts with the basics and gradually builds into more advanced concepts. The approach utilizes an integrated development environment that allows readers to immediately apply what they learn, and includes step-by-step instruction with plenty of sample programs. Each chapter c

  7. Efficient channel estimation in massive MIMO systems - a distributed approach

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2016-01-21

    We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response for each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.

  8. 78 FR 255 - Resumption of the Population Estimates Challenge Program

    Science.gov (United States)

    2013-01-03

    ... commenter would like the Census Bureau to continue to leave open the option for a challenging county-level... "Challenging Certain Population and Income Estimates" to "Procedure for Challenging Population Estimates" to... governmental unit. In those instances where a non-functioning county-level government or statistical equivalent...

  9. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.

    Science.gov (United States)

    Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter

    2013-12-06

    In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least
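
    As a concrete illustration of the indirect estimators compared above, the sketch below computes predictive ability (the correlation between predicted and observed phenotypes) and rescales it by the square root of an assumed heritability. It is a generic illustration of that ratio on toy data, not a reimplementation of any of the seven methods in the study.

        import numpy as np

        def predictive_ability(y_obs, y_pred):
            """Correlation between observed phenotypes and predicted values."""
            return np.corrcoef(y_obs, y_pred)[0, 1]

        def indirect_accuracy(y_obs, y_pred, h2):
            """Indirect accuracy estimate: predictive ability divided by sqrt(h2)."""
            return predictive_ability(y_obs, y_pred) / np.sqrt(h2)

        # Toy data for illustration only (true breeding values are known here).
        rng = np.random.default_rng(0)
        true_bv = rng.normal(size=200)                        # true breeding values
        y_obs = true_bv + rng.normal(scale=1.0, size=200)     # phenotypes, h2 = 0.5
        y_pred = true_bv + rng.normal(scale=0.5, size=200)    # predictions of a model

        print("predictive ability:", round(predictive_ability(y_obs, y_pred), 3))
        print("indirect accuracy :", round(indirect_accuracy(y_obs, y_pred, h2=0.5), 3))
        print("true accuracy     :", round(np.corrcoef(true_bv, y_pred)[0, 1], 3))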

  10. A collaborative approach for estimating terrestrial wildlife abundance

    Science.gov (United States)

    Ransom, Jason I.; Kaczensky, Petra; Lubow, Bruce C.; Ganbaatar, Oyunsaikhan; Altansukh, Nanjid

    2012-01-01

    Accurately estimating abundance of wildlife is critical for establishing effective conservation and management strategies. Aerial methodologies for estimating abundance are common in developed countries, but they are often impractical for remote areas of developing countries where many of the world's endangered and threatened fauna exist. The alternative terrestrial methodologies can be constrained by limitations on access, technology, and human resources, and have rarely been comprehensively conducted for large terrestrial mammals at landscape scales. We attempted to overcome these problems by incorporating local peoples into a simultaneous point count of Asiatic wild ass (Equus hemionus) and goitered gazelle (Gazella subgutturosa) across the Great Gobi B Strictly Protected Area, Mongolia. Paired observers collected abundance and covariate metrics at 50 observation points and we estimated population sizes using distance sampling theory, but also assessed individual observer error to examine potential bias introduced by the large number of minimally trained observers. We estimated 5671 (95% CI = 3611–8907) wild asses and 5909 (95% CI = 3762–9279) gazelle inhabited the 11,027 km2 study area at the time of our survey and found that the methodology developed was robust at absorbing the logistical challenges and wide range of observer abilities. This initiative serves as a functional model for estimating terrestrial wildlife abundance while integrating local people into scientific and conservation projects. This, in turn, creates vested interest in conservation by the people who are most influential in, and most affected by, the outcomes.
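
    A minimal sketch of the distance-sampling arithmetic behind such point-count estimates is shown below. It assumes a half-normal detection function with a made-up scale parameter and a fixed truncation radius; the actual analysis fitted detection functions to the collected covariate data.

        import numpy as np

        def point_density(n_detected, n_points, sigma, w):
            """Density from point counts under a half-normal detection function.

            n_detected : total animals detected across all points
            n_points   : number of survey points
            sigma      : half-normal scale parameter (same units as w)
            w          : truncation radius around each point
            """
            r = np.linspace(0.0, w, 2000)
            g = np.exp(-r**2 / (2.0 * sigma**2))       # detection probability at radius r
            # Average detection probability inside the circle, weighting each
            # radius by the area of its annulus (2*pi*r dr / pi*w^2).
            p_bar = np.sum(g * 2.0 * r / w**2) * (r[1] - r[0])
            covered_area = n_points * np.pi * w**2
            return n_detected / (p_bar * covered_area)

        # Hypothetical numbers (km units): 50 points, 3 km truncation radius,
        # sigma = 1.2 km, 180 animals detected in total.
        D = point_density(n_detected=180, n_points=50, sigma=1.2, w=3.0)
        print(f"estimated density: {D:.2f} animals per km^2")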

  11. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    Science.gov (United States)

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  12. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  13. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    Science.gov (United States)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by looking at the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time-step, the speed of the induction machine is selected satisfying the dynamic fitting problem between the plant output and the predicted output, leading the system to adopt its dynamical behavior. Thanks to the limitation of the prediction horizon to a single time-step, the execution time of the algorithm can be completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.

  14. An Iterative Adaptive Approach for Blood Velocity Estimation Using Ultrasound

    DEFF Research Database (Denmark)

    Gudmundson, Erik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2010-01-01

    This paper proposes a novel iterative data-adaptive spectral estimation technique for blood velocity estimation using medical ultrasound scanners. The technique makes no assumption on the sampling pattern of the slow-time or the fast-time samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the technique is shown, using both simplified and more realistic Field II simulations, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed method also allows for more flexible transmission patterns, as well as exhibits fewer spectral artifacts as compared to earlier techniques.

  15. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) ... identification. Application of the proposed method can be found in many real world systems.

  16. An observer-theoretic approach to estimating neutron flux distribution

    International Nuclear Information System (INIS)

    Park, Young Ho; Cho, Nam Zin

    1989-01-01

    State feedback control provides many advantages such as stabilization and improved transient response. However, when the state feedback control is considered for spatial control of a nuclear reactor, it requires complete knowledge of the distributions of the system state variables. This paper describes a method for estimating the flux spatial distribution using only limited flux measurements. It is based on the Luenberger observer in control theory, extended to distributed parameter systems such as the space-time reactor dynamics equation. The results of the application of the method to simple reactor models showed that the flux distribution is estimated by the observer very efficiently using information from only a few sensors.

  17. A Semantics-Based Approach to Construction Cost Estimating

    Science.gov (United States)

    Niknam, Mehrdad

    2015-01-01

    A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…

  18. Estimating raptor nesting success: old and new approaches

    Science.gov (United States)

    Brown, Jessi L.; Steenhof, Karen; Kochert, Michael N.; Bond, Laura

    2013-01-01

    Studies of nesting success can be valuable in assessing the status of raptor populations, but differing monitoring protocols can present unique challenges when comparing populations of different species across time or geographic areas. We used large datasets from long-term studies of 3 raptor species to compare estimates of apparent nest success (ANS, the ratio of successful to total number of nesting attempts), Mayfield nesting success, and the logistic-exposure model of nest survival. Golden eagles (Aquila chrysaetos), prairie falcons (Falco mexicanus), and American kestrels (F. sparverius) differ in their breeding biology and the methods often used to monitor their reproduction. Mayfield and logistic-exposure models generated similar estimates of nesting success with similar levels of precision. Apparent nest success overestimated nesting success and was particularly sensitive to the inclusion of nesting attempts discovered late in the nesting season. Thus, the ANS estimator is inappropriate when exact point estimates are required, especially when most raptor pairs cannot be located before or soon after laying eggs. However, ANS may be sufficient to assess long-term trends of species in which nesting attempts are highly detectable.
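
    To make the difference between the estimators concrete, the sketch below computes apparent nest success and a Mayfield-style estimate from the same hypothetical monitoring data. It follows the textbook formulas rather than the exact logistic-exposure model used in the study.

        def apparent_nest_success(n_successful, n_total):
            """ANS: ratio of successful nesting attempts to all attempts found."""
            return n_successful / n_total

        def mayfield_nest_success(n_failures, exposure_days, nest_period_days):
            """Mayfield estimate: daily survival rate raised to the length of the
            nesting period; exposure_days is the total number of days all
            monitored nests were under observation."""
            dsr = 1.0 - n_failures / exposure_days     # daily survival rate
            return dsr ** nest_period_days

        # Hypothetical monitoring data for illustration.
        ans = apparent_nest_success(n_successful=60, n_total=80)
        mayfield = mayfield_nest_success(n_failures=20, exposure_days=2400,
                                         nest_period_days=45)
        print(f"apparent nest success: {ans:.2f}")   # tends to be optimistic
        print(f"Mayfield nest success: {mayfield:.2f}")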

  19. A Balanced Approach to Adaptive Probability Density Estimation

    Directory of Open Access Journals (Sweden)

    Julio A. Kovacs

    2017-04-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
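
    The sketch below illustrates the general idea of nearest-neighbour adaptive kernel density estimation that BADE builds on: the bandwidth at each sample grows with the distance to its k-th nearest neighbour, so sparse regions are smoothed more. It is a generic adaptive KDE in one dimension, not the balancing criterion of BADE itself.

        import numpy as np

        def adaptive_kde(samples, grid, k=10, scale=1.0):
            """Adaptive Gaussian KDE with a per-sample bandwidth proportional to
            the distance to that sample's k-th nearest neighbour (1-D case)."""
            samples = np.asarray(samples, dtype=float)
            d = np.abs(samples[:, None] - samples[None, :])   # pairwise distances
            d.sort(axis=1)
            bandwidth = np.maximum(scale * d[:, k], 1e-6)     # avoid zero bandwidths
            # Sum Gaussian kernels centred on each sample with its own bandwidth.
            z = (grid[None, :] - samples[:, None]) / bandwidth[:, None]
            kernels = np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * bandwidth[:, None])
            return kernels.mean(axis=0)

        # Unevenly sampled toy data: a dense cluster plus a sparse tail.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(0, 0.3, 500), rng.normal(4, 1.5, 50)])
        grid = np.linspace(-2, 9, 400)
        density = adaptive_kde(x, grid, k=10)
        print("estimated mode near:", round(float(grid[int(np.argmax(density))]), 2))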

  20. Fracture mechanics approach to estimate rail wear limits

    Science.gov (United States)

    2009-10-01

    This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...

  1. Head Pose Estimation on Eyeglasses Using Line Detection and Classification Approach

    Science.gov (United States)

    Setthawong, Pisal; Vannija, Vajirasak

    This paper proposes a unique approach for head pose estimation of subjects with eyeglasses by using a combination of line detection and classification approaches. Head pose estimation is considered as an important non-verbal form of communication and could also be used in the area of Human-Computer Interface. A major improvement of the proposed approach is that it allows estimation of head poses at a high yaw/pitch angle when compared with existing geometric approaches, does not require expensive data preparation and training, and is generally fast when compared with other approaches.

  2. A stump-to-mill timber production cost-estimating program for cable logging eastern hardwoods

    Science.gov (United States)

    Chris B. LeDoux

    1987-01-01

    ECOST utilizes data from stand inventory, cruise data, and the logging plan for the tract in question. The program produces detailed stump-to-mill cost estimates for specific proposed timber sales. These estimates are then utilized, in combination with specific landowner objectives, to assess the economic feasibility of cable logging a given area. The program output is...

  3. The Program Module of Information Risk Numerical Estimation

    Directory of Open Access Journals (Sweden)

    E. S. Stepanova

    2011-03-01

    This paper presents an algorithm for information risk analysis, implemented in a program module, that is based on threat matrices and fuzzy cognitive maps describing potential threats to resources.

  4. ESTIMATING FINANCIAL SUPPORT OF REGIONAL PROGRAMS OF SOCIAL ECONOMIC DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Iryna Kokhan

    2016-03-01

    This article analyses the experience of financially supporting regional programs of social and economic development and the use of internal and external resources of the region. Dynamic and balanced development of regions is one of the most important preconditions for further establishment of market relations and social transformations in Ukraine. The aim is to evaluate the financial support of the approved regional programs and the amounts of their financing. An assessment of the social and economic situation in Ivano-Frankivsk region against nationwide tendencies shows that economic growth depends on the amounts and sources of funding provided by the state. To determine how closely the amount of program financing is connected with gross domestic product, Pearson's correlation coefficient was calculated. It was shown that the amount of financing of regional programs of social and economic development influences the growth rate of gross domestic product. Over the research period, regional authorities became more active in adopting and financing targeted regional programs. The article concludes that dynamic activity of the regional community and its territorial units in realizing the defined strategic priorities will reduce disproportions and differences in the development of territorial units in the region and will positively influence the growth of gross regional domestic product, providing a steady increase in social welfare. Keywords: social economic regional development, ecology programs, social programs, gross regional domestic product, Pearson's correlation coefficient. JEL: R58

  5. The stochastic system approach for estimating dynamic treatments effect.

    Science.gov (United States)

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system and the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  6. A novel machine learning approach for estimation of electricity demand: An empirical evidence from Thailand

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Mostafavi, Seyyed Iman; Jaafari, Arefeh; Hosseinpour, Fariba

    2013-01-01

    Highlights: • A hybrid approach is presented for the estimation of the electricity demand. • The proposed method integrates the capabilities of GP and SA. • The GSA model makes accurate predictions of the electricity demand. - Abstract: This study proposes an innovative hybrid approach for the estimation of the long-term electricity demand. A new prediction equation was developed for the electricity demand using an integrated search method of genetic programming and simulated annealing, called GSA. The annual electricity demand was formulated in terms of population, gross domestic product (GDP), stock index, and total revenue from exporting industrial products of the same year. A comprehensive database containing total electricity demand in Thailand from 1986 to 2009 was used to develop the model. The generalization of the model was verified using a separate testing data. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the electricity demand. The GSA model provides accurate predictions of the electricity demand. Furthermore, the proposed model outperforms a regression and artificial neural network-based models

  7. A hybrid computational approach to estimate solar global radiation: An empirical evidence from Iran

    International Nuclear Information System (INIS)

    Mostafavi, Elham Sadat; Ramiyani, Sara Saeidi; Sarvar, Rahim; Moud, Hashem Izadi; Mousavi, Seyyed Mohammad

    2013-01-01

    This paper presents an innovative hybrid approach for the estimation of the solar global radiation. New prediction equations were developed for the global radiation using an integrated search method of genetic programming (GP) and simulated annealing (SA), called GP/SA. The solar radiation was formulated in terms of several climatological and meteorological parameters. Comprehensive databases containing monthly data collected for 6 years in two cities of Iran were used to develop GP/SA-based models. Separate models were established for each city. The generalization of the models was verified using a separate testing database. A sensitivity analysis was conducted to investigate the contribution of the parameters affecting the solar radiation. The derived models make accurate predictions of the solar global radiation and notably outperform the existing models. -- Highlights: ► A hybrid approach is presented for the estimation of the solar global radiation. ► The proposed method integrates the capabilities of GP and SA. ► Several climatological and meteorological parameters are included in the analysis. ► The GP/SA models make accurate predictions of the solar global radiation.

  8. Quadratic programming with fuzzy parameters: A membership function approach

    International Nuclear Information System (INIS)

    Liu, S.-T.

    2009-01-01

    Quadratic programming has been widely applied to solving real world problems. The conventional quadratic programming model requires the parameters to be known constants. In the real world, however, the parameters are seldom known exactly and have to be estimated. This paper discusses the fuzzy quadratic programming problems where the cost coefficients, constraint coefficients, and right-hand sides are represented by convex fuzzy numbers. Since the parameters in the program are fuzzy numbers, the derived objective value is a fuzzy number as well. Using Zadeh's extension principle, a pair of two-level mathematical programs is formulated to calculate the upper bound and lower bound of the objective values of the fuzzy quadratic program. Based on the duality theorem and by applying the variable transformation technique, the pair of two-level mathematical programs is transformed into a family of conventional one-level quadratic programs. Solving the pair of quadratic programs produces the fuzzy objective values of the problem. An example illustrates the method proposed in this paper.

  9. New approaches to the estimation of erosion-corrosion

    International Nuclear Information System (INIS)

    Bakirov, Murat; Ereemin, Alexandr; Levchuck, Vasiliy; Chubarov, Sergey

    2006-09-01

    ... erosion-corrosion in a double-phase flow is that of a moving deaerated liquid in direct contact with the metal, acting as a barrier between the metal and the main steam-drop flow. Local mass-transfer processes, corrosion properties and water-chemistry parameters of this film define the intensity of erosion-corrosion and the features of its behavior. Erosion-corrosion of metal in a double-phase flow is determined by the gas dynamics of double-phase flows, water chemistry, thermodynamics, materials science, etc. The goal of the work is the development of a theoretical and methodological basis of physical, chemical and mathematical models, as well as the development of technical solutions and methods for diagnostics, forecasting and control of erosion-corrosion processes. This will allow increasing the reliability and safety of operation of the power equipment of the secondary circuit in NPPs with WWER through monitoring of erosion-corrosion wear of pipelines. The paper concludes by stressing that the described design-experimental approach to solving the FAC problem will enable the following work to be carried out: elaboration and certification of a procedure for design-experimental substantiation of the zones, aims and periodicity of operational inspection of NPP elements; development and certification of a new regulatory document on stress calculation for definition of minimum acceptable wall thickness levels, considering real wear shape, FAC rates and the inaccuracy of wall thickness measurement devices; improvement of the current regulatory documents and correction of the typical programs of operational inspection (optimization of the zones, aims and periodicity of inspection); elaboration of recommendations for operational lifetime prolongation of WWER secondary circuit elements by increasing the erosion-corrosion resistance of new equipment and of equipment exceeding its design lifetime; and improvement of safe and uninterrupted operation of the power unit through prediction of the most damaged

  10. A Structural VAR Approach to Estimating Budget Balance Targets

    OpenAIRE

    Robert A Buckle; Kunhong Kim; Julie Tam

    2001-01-01

    The Fiscal Responsibility Act 1994 states that, as a principle of responsible fiscal management, a New Zealand government should ensure total Crown debt is at a prudent level by ensuring total operating expenses do not exceed total operating revenues. In this paper a structural VAR model is estimated to evaluate the impact on the government's cash operating surplus (or budget balance) of four independent disturbances: supply, fiscal, real private demand, and nominal disturbances. Based on the...

  11. Estimating the elasticity of trade: the trade share approach

    OpenAIRE

    Mauro Lanati

    2013-01-01

    Recent theoretical work on international trade emphasizes the importance of trade elasticity as the fundamental statistic needed to conduct welfare analysis. Eaton and Kortum (2002) proposed a two-step method to estimate this parameter, where exporter fixed effects are regressed on proxies for technology and wages. Within the same Ricardian model of trade, the trade share provides an alternative source of identification for the elasticity of trade. Following Santos Silva and Tenreyro (2006) bot...

  12. A model-based approach to estimating forest area

    Science.gov (United States)

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...

  13. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

    ... not only a matter of sustaining ecosystems but also a matter of supporting humankind/livelihoods. One reason for the marginalisation of environmental flows is the lack of operational methods to demonstrate the inherently multi-disciplinary link between environmental flows, ecosystem services and economic value. This paper aims at filling that gap by presenting a new environmental flows assessment approach that explicitly links environmental flows to (socio-)economic values by focusing on ecosystem services. This Service Provision Index (SPI) approach is a novel contribution to the existing field of environmental flows assessment methodologies.

  14. SEISRISK II; a computer program for seismic hazard estimation

    Science.gov (United States)

    Bender, Bernice; Perkins, D.M.

    1982-01-01

    The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates for each site on a grid of sites the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which has never been documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
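
    The probabilistic backbone of such calculations is the Poisson exceedance relation. The sketch below converts a hazard curve (annual exceedance rates for a set of ground-motion levels) into the motion with a chosen probability of exceedance over a time window; the hazard curve values are invented for illustration and do not come from SEISRISK II.

        import numpy as np

        def prob_exceedance(annual_rate, years):
            """Poisson probability of at least one exceedance in `years`."""
            return 1.0 - np.exp(-np.asarray(annual_rate) * years)

        def motion_for_probability(levels, annual_rates, years, target_prob):
            """Interpolate the ground-motion level whose exceedance probability
            over `years` equals `target_prob` (e.g. 0.10 in 50 years)."""
            probs = prob_exceedance(annual_rates, years)
            # Exceedance probability decreases with level, so reverse both
            # arrays to get increasing x-values for np.interp.
            return np.interp(target_prob, probs[::-1], np.asarray(levels)[::-1])

        # Hypothetical hazard curve: peak ground acceleration (g) vs annual rate.
        pga_levels   = [0.05, 0.10, 0.20, 0.30, 0.40]
        annual_rates = [0.050, 0.020, 0.006, 0.002, 0.0008]

        pga_10in50 = motion_for_probability(pga_levels, annual_rates,
                                            years=50, target_prob=0.10)
        print(f"PGA with 10% probability of exceedance in 50 years: {pga_10in50:.2f} g")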

  15. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  16. Concurrent object-oriented programming: The MP-Eiffel approach

    OpenAIRE

    Silva, Miguel Augusto Mendes Oliveira e

    2004-01-01

    This article evaluates several possible approaches for integrating concurrency into object-oriented programming languages, presenting afterwards, a new language named MP-Eiffel. MP-Eiffel was designed attempting to include all the essential properties of both concurrent and object-oriented programming with simplicity and safety. A special care was taken to achieve the orthogonality of all the language mechanisms, allowing their joint use without unsafe side-effects (such as inh...

  17. Dynamic Programming Approaches for the Traveling Salesman Problem with Drone

    OpenAIRE

    Bouman, Paul; Agatz, Niels; Schmidt, Marie

    2017-01-01

    A promising new delivery model involves the use of a delivery truck that collaborates with a drone to make deliveries. Effectively combining a drone and a truck gives rise to a new planning problem that is known as the Traveling Salesman Problem with Drone (TSP-D). This paper presents an exact solution approach for the TSP-D based on dynamic programming and presents experimental results of different dynamic programming based heuristics. Our numerical experiments show that our a...

  18. A tensor approach to the estimation of hydraulic conductivities in ...

    African Journals Online (AJOL)

    Based on the field measurements of the physical properties of fractured rocks, the anisotropic properties of hydraulic conductivity (HC) of the fractured rock aquifer can be assessed and presented using a tensor approach called hydraulic conductivity tensor. Three types of HC values, namely point value, axial value and flow ...

  19. Rethink, Reform, Reenter: An Entrepreneurial Approach to Prison Programming.

    Science.gov (United States)

    Keena, Linda; Simmons, Chris

    2015-07-01

    The purpose of this article was to present a description and first-stage evaluation of the impact of the Ice House Entrepreneurship Program on the learning experience of participating prerelease inmates at a Mississippi maximum-security prison and their perception of the transfer of skills learned in the program into securing employment upon reentry. The Ice House Entrepreneurship Program is a 12-week program facilitated by volunteer university professors to inmates in a prerelease unit of a maximum-security prison in Mississippi. Participants' perspectives were examined through content analysis of inmates' answers to program Reflection and Response Assignments and interviews. The analyses were conducted according to the constant comparative method. Findings reveal the emergence of eight life-lessons and suggest that this is a promising approach to prison programming for prerelease inmates. This study discusses three approaches to better prepare inmates for a mindset change. The rethink, reform, and reenter approaches help break the traditional cycle of release, reoffend, and return. © The Author(s) 2014.

  20. Equivalence among three alternative approaches to estimating live tree carbon stocks in the eastern United States

    Science.gov (United States)

    Coeli M. Hoover; James E. Smith

    2017-01-01

    Assessments of forest carbon are available via multiple alternate tools or applications and are in use to address various regulatory and reporting requirements. The various approaches to making such estimates may or may not be entirely comparable. Knowing how the estimates produced by some commonly used approaches vary across forest types and regions allows users of...

  1. Demirjian approach of dental age estimation: Abridged for operator ease.

    Science.gov (United States)

    Jain, Vanshika; Kapoor, Priyanka; Miglani, Ragini

    2016-01-01

    Recent years have seen an alarming increase in the incidence of crimes by juveniles and of mass disasters, which highlights the importance of individual age estimation. Of the numerous techniques employed for age assessment, dental age estimation (DAE) and its correlation with chronological age (CA) have been of great significance in the recent past. The Demirjian system, considered the gold standard in DAE, is a simple and convenient method, although referring to multiple tables makes it cumbersome and less eco-friendly due to the excessive paper load. The present study aimed to develop a comprehensive chart (DAEcc) inclusive of all Demirjian tables and developmental stages of teeth, as well as to test the ease with which 50 undergraduate dental students could perform DAE using this chart. The study was performed in two stages. The first stage was aimed at formulation of the comprehensive chart (DAEcc), which included pictorial representation of the calcification stages, the Federation Dentaire Internationale notation of the teeth, and the corresponding scores for each stage, with a concluding column at the end to enter the total score. The second stage assessed the ease of DAE using the DAEcc, whereby fifty 2nd-year BDS students were asked to trace the calcification stages of the seven permanent left mandibular teeth on a panorex, identify the correct stage, assign the corresponding score, and calculate the total score for subsequent dental age assessment. Results showed that the average time taken by the students for tracing the seven mandibular teeth was 5 min and for assessment of dental age was 7 min. The total time taken for DAE was approximately 12 min, making the procedure less time consuming. Hence, this study proposes the use of the DAEcc for age estimation because of its ease of comprehension and execution of the Demirjian system.

  2. A service and value based approach to estimating environmental flows

    DEFF Research Database (Denmark)

    Korsgaard, Louise; Jensen, R.A.; Jønch-Clausen, Torkil

    2008-01-01

    An important challenge of Integrated Water Resources Management (IWRM) is to balance water allocation between different users and uses. While economically and/or politically powerful users have relatively well developed methods for quantifying and justifying their water needs, this is not the case ... methodologies. The SPI approach is a pragmatic and transparent tool for incorporating ecosystems and environmental flows into the evaluation of water allocation scenarios, negotiations of trade-offs and decision-making in IWRM.

  3. Estimating radionuclide air concentrations near buildings: a screening approach

    International Nuclear Information System (INIS)

    Miller, C.W.; Yildiran, M.

    1984-01-01

    For some facilities that routinely release small amounts of radionuclides to the atmosphere, such as hospitals, research laboratories, contaminated clothing laundries, and others, it is necessary to estimate the dose to persons very near the buildings from which the releases occur. Such facilities need simple screening procedures which provide reasonable assurance that as long as the calculated dose is less than some fraction of a relevant dose limit no individual will receive a dose in excess of that limit. Screening procedures have been proposed for persons living within hundreds of meters to a few kilometers from a source of radioactive effluent. This paper examines a screening technique for estimating long-term average radionuclide air concentrations within approximately 100 m of a building from which the release occurs. The technique is based on a modified Gaussian plume model (HB model) which considers the influence of the tallest building within 100 m and is independent of atmospheric stability and downwind distance. 4 references, 2 tables

  4. Feedback structure based entropy approach for multiple-model estimation

    Institute of Scientific and Technical Information of China (English)

    Shen-tu Han; Xue Anke; Guo Yunfei

    2013-01-01

    The variable-structure multiple-model (VSMM) approach, one of the multiple-model (MM) methods, is a popular and effective approach in handling problems with mode uncertainties. Model sequence set adaptation (MSA) is the key to designing a better VSMM. However, MSA methods in the literature leave considerable room for improvement, both theoretically and practically. To this end, we propose a feedback structure based entropy approach that can find the model sequence sets with the smallest size under certain conditions. The filtered data are fed back in real time and can be used by the minimum entropy (ME) based VSMM algorithms, i.e., MEVSMM. Firstly, the full Markov chains are used to achieve optimal solutions. Secondly, the myopic method together with particle filter (PF) and the challenge match algorithm are also used to achieve sub-optimal solutions, a trade-off between practicability and optimality. The numerical results show that the proposed algorithm provides not only refined model sets but also a good robustness margin and very high accuracy.

  5. Estimating Effective Subsidy Rates of Student Aid Programs

    OpenAIRE

    Stacey H. CHEN

    2008-01-01

    Every year millions of high school students and their parents in the US are asked to fill out complicated financial aid application forms. However, few studies have estimated the responsiveness of government financial aid schemes to changes in financial needs of the students. This paper identifies the effective subsidy rate (ESR) of student aid, as defined by the coefficient of financial needs in the regression of financial aid. The ESR measures the proportion of subsidy of student aid under ...

  6. New approach in the evaluation of a fitness program at a worksite.

    Science.gov (United States)

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach for it. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
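
    The "neoclassical firm" framing above reduces to maximizing profit over the number of classes once a health production function and a willingness-to-pay figure are fixed. The sketch below does this with a purely hypothetical cubic production function and cost per class; the coefficients are not those estimated in the study, so the resulting optimum differs from the 23 classes quoted above.

        import numpy as np

        def optimal_classes(wtp, coeffs, cost_per_class, max_classes=60):
            """Number of classes maximizing profit(n) = wtp * H(n) - cost * n,
            where H(n) is a cubic health production function with coefficients
            (a0, a1, a2, a3)."""
            n = np.arange(0, max_classes + 1)
            a0, a1, a2, a3 = coeffs
            health = a0 + a1 * n + a2 * n**2 + a3 * n**3   # produced effectiveness
            profit = wtp * health - cost_per_class * n
            best = int(np.argmax(profit))
            return int(n[best]), float(profit[best])

        # Hypothetical coefficients: effectiveness rises, then flattens and declines.
        best_n, best_profit = optimal_classes(
            wtp=800.0,
            coeffs=(0.0, 0.12, 0.004, -0.0001),
            cost_per_class=30.0)
        print(f"optimal number of classes: {best_n} (profit {best_profit:.0f})")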

  7. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of return processes are unknown. We first generate simulation sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The result shows the necessity to take care of the time dependency of the value function.
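
    A highly simplified version of the simulation idea, for a single decision with no rebalancing, is sketched below: bootstrapped return paths are generated and a grid search picks the portfolio weight maximizing average CRRA utility of terminal wealth. The AR(1) bootstrap, the time-dependent value function, and the backward recursion of the actual method are omitted.

        import numpy as np

        def crra_utility(wealth, gamma):
            """Constant relative risk aversion utility."""
            if gamma == 1.0:
                return np.log(wealth)
            return wealth ** (1.0 - gamma) / (1.0 - gamma)

        def best_weight(returns, gamma=3.0, horizon=12, n_paths=2000, seed=0):
            """Grid-search the risky-asset weight maximizing expected CRRA utility
            of terminal wealth over bootstrapped return paths (risk-free rate 0)."""
            rng = np.random.default_rng(seed)
            paths = rng.choice(returns, size=(n_paths, horizon), replace=True)
            weights = np.linspace(0.0, 1.0, 21)
            utilities = [crra_utility((1.0 + w * paths).prod(axis=1), gamma).mean()
                         for w in weights]
            return float(weights[int(np.argmax(utilities))])

        # Hypothetical monthly return history for illustration.
        hist = np.random.default_rng(42).normal(0.006, 0.04, size=240)
        print("chosen risky-asset weight:", best_weight(hist))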

  8. Groundwater flux estimation in streams: A thermal equilibrium approach

    Science.gov (United States)

    Zhou, Yan; Fox, Garey A.; Miller, Ron B.; Mollenhauer, Robert; Brewer, Shannon

    2018-06-01

    Stream and groundwater interactions play an essential role in regulating flow, temperature, and water quality for stream ecosystems. Temperature gradients have been used to quantify vertical water movement in the streambed since the 1960s, but advancements in thermal methods are still possible. Seepage runs are a method commonly used to quantify exchange rates through a series of streamflow measurements but can be labor and time intensive. The objective of this study was to develop and evaluate a thermal equilibrium method as a technique for quantifying groundwater flux using monitored stream water temperature at a single point and readily available hydrological and atmospheric data. Our primary assumption was that stream water temperature at the monitored point was at thermal equilibrium with the combination of all heat transfer processes, including mixing with groundwater. By expanding the monitored stream point into a hypothetical, horizontal one-dimensional thermal modeling domain, we were able to simulate the thermal equilibrium achieved with known atmospheric variables at the point and quantify unknown groundwater flux by calibrating the model to the resulting temperature signature. Stream water temperatures were monitored at single points at nine streams in the Ozark Highland ecoregion and five reaches of the Kiamichi River to estimate groundwater fluxes using the thermal equilibrium method. When validated by comparison with seepage runs performed at the same time and reach, estimates from the two methods agreed with each other with an R2 of 0.94, a root mean squared error (RMSE) of 0.08 (m/d) and a Nash-Sutcliffe efficiency (NSE) of 0.93. In conclusion, the thermal equilibrium method was a suitable technique for quantifying groundwater flux with minimal cost and simple field installation given that suitable atmospheric and hydrological data were readily available.

  9. Dependent samples in empirical estimation of stochastic programming problems

    Czech Academy of Sciences Publication Activity Database

    Kaňková, Vlasta; Houda, Michal

    2006-01-01

    Vol. 35, No. 2/3 (2006), pp. 271-279 ISSN 1026-597X R&D Projects: GA ČR GA402/04/1294; GA ČR GD402/03/H057; GA ČR GA402/05/0115 Institutional research plan: CEZ:AV0Z10750506 Keywords: stochastic programming * stability * probability metrics * Wasserstein metric * Kolmogorov metric * simulations Subject RIV: BB - Applied Statistics, Operational Research

  10. An alternative approach to spectrum base line estimation

    International Nuclear Information System (INIS)

    Bukvic, S.; Spasojevic, Dj.

    2005-01-01

    We present a new form of merit function which measures the agreement between a large number of data points and a model function with a particular choice of parameters. We demonstrate the efficiency of the proposed merit function on the common problem of finding the base line of a spectrum. When the base line is expected to be a horizontal straight line, the use of minimization algorithms is not necessary, i.e. the solution is achieved in a small number of steps. We discuss the advantages of the proposed merit function in general, when explicit use of a minimization algorithm is necessary. The hardcopy text is accompanied by an electronic archive, stored on the SAE homepage at http://www1.elsevier.com/homepage/saa/sab/content/lower.htm. The archive contains a fully functional demo program with a tutorial, examples and the Visual Basic source code of the key subroutine.

  11. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  12. Training Program Handbook: A systematic approach to training

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    This DOE handbook describes a systematic method for establishing and maintaining training programs that meet the requirements and expectations of DOE Orders 5480.18B and 5480.20. The systematic approach to training includes 5 phases: Analysis, design, development, implementation, and evaluation.

  13. Linear decomposition approach for a class of nonconvex programming problems.

    Science.gov (United States)

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points we can obtain the near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.

  14. Novel Approaches to Cellular Transplantation from the US Space Program

    Science.gov (United States)

    Pellis, Neal R.; Homick, Jerry L. (Technical Monitor)

    1999-01-01

    Research in the treatment of type I diabetes is entering a new era that takes advantage of our knowledge in an ever increasing variety of scientific disciplines. Some may originate from very diverse sources, one of which is the Space Program at the National Aeronautics and Space Administration (NASA). The Space Program contributes to diabetes-related research in several treatment modalities. As an ongoing effort for medical monitoring of personnel involved in space exploration activities, NASA and the extramural scientific community investigate strategies for noninvasive estimation of blood glucose levels. Part of the effort in the space protein crystal growth program is high-resolution structural analysis of insulin as a means to better understand the interaction with its receptor and with host immune components and as a basis for rational design of a "better" insulin molecule. The Space Program is also developing laser technology for potential early cataract detection as well as noninvasive analyses for addressing preclinical diabetic retinopathy. Finally, NASA developed an exciting cell culture system that affords some unique advantages in the propagation and maintenance of mammalian cells in vitro. The cell culture system was originally designed to maintain cell suspensions with a minimum of hydrodynamic and mechanical shear while awaiting launch into microgravity. Currently the commercially available NASA bioreactor (Synthecon, Inc., Houston, TX) is used as a research tool in basic and applied cell biology. In recent years there has been continued strong interest in cellular transplantation as treatment for type I diabetes. The advantages are the potential for successful long-term amelioration and a minimum risk for morbidity in the event of rejection of the transplanted cells. The pathway to successful application of this strategy is accompanied by several substantial hurdles: (1) isolation and propagation of a suitable uniform donor cell population; (2) management of

  15. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
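
    The minimum norm least squares solution that gives the approach its name is, in linear-algebra terms, the Moore-Penrose pseudoinverse applied to a possibly rank-deficient system. The fragment below only illustrates that building block on a toy system; it does not reproduce the fuzzy fault-estimation compensator design.

        import numpy as np

        # Toy underdetermined system A x = b (more unknowns than equations),
        # so infinitely many least-squares solutions exist.
        A = np.array([[1.0, 2.0, 0.0],
                      [0.0, 1.0, 1.0]])
        b = np.array([1.0, 2.0])

        # Minimum norm least squares solution via the Moore-Penrose pseudoinverse.
        x_mnlss = np.linalg.pinv(A) @ b

        # np.linalg.lstsq returns the same minimum-norm solution for such systems.
        x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

        print("pseudoinverse solution:", np.round(x_mnlss, 4))
        print("lstsq solution        :", np.round(x_lstsq, 4))
        print("norm of the solution  :", round(float(np.linalg.norm(x_mnlss)), 4))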

  16. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software

    International Nuclear Information System (INIS)

    Bierschbach, M.C.; Mencinsky, G.J.

    1993-10-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit decommissioning plans and cost estimates to the US Nuclear Regulatory Commission (NRC) for review. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  17. New Approaches for Estimating Motor Vehicle Emissions in Megacities

    Science.gov (United States)

    Marr, L. C.; Thornhill, D. A.; Herndon, S. C.; Onasch, T. B.; Wood, E. C.; Kolb, C. E.; Knighton, W. B.; Mazzoleni, C.; Zavala, M. A.; Molina, L. T.

    2007-12-01

    The rapid proliferation of megacities and their air quality problems is producing unprecedented air pollution health risks and management challenges. Quantifying motor vehicle emissions in the developing world's megacities, where vehicle ownership is skyrocketing, is critical for evaluating the cities' impacts on the atmosphere at urban, regional, and global scales. The main goal of this research is to quantify gasoline- and diesel-powered motor vehicle emissions within the Mexico City Metropolitan Area (MCMA). We apply positive matrix factorization to fast measurements of gaseous and particulate pollutants made by the Aerodyne Mobile Laboratory as it drove throughout the MCMA in 2006. We consider carbon dioxide; carbon monoxide; volatile organic compounds including benzene and formaldehyde; nitrogen oxides; ammonia; fine particulate matter; particulate polycyclic aromatic hydrocarbons; and black carbon. Analysis of the video record confirms the apportionment of emissions to different engine types. From the derived source profiles, we calculate fuel-based fleet-average emission factors and then estimate the total motor vehicle emission inventory. The advantages of this method are that it can capture a representative sample of vehicles in a variety of on-road driving conditions and can separate emissions from gasoline versus diesel engines. The results of this research can be used to help assess the accuracy of emission inventories and to guide the development of strategies for reducing vehicle emissions.

  18. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Science.gov (United States)

    Jones, Kelly W; Lewis, David J

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case illustrates that
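
    To make the contrast between the two estimation strategies concrete, the sketch below applies them to synthetic panel data in which selection into treatment depends partly on a time-invariant unobservable, so matching on the observable covariate alone leaves bias while a fixed-effects (first-difference) estimator removes it. All variable names, effect sizes, and the data-generating process are illustrative assumptions, not the paper's Ecuador or Russia data.

```python
"""Hedged sketch of the two estimation strategies compared in the abstract,
on synthetic data: (1) nearest-neighbour matching on a pre-treatment covariate
followed by a difference in means, and (2) a unit fixed-effects (within /
first-difference) estimator that differences out time-invariant unobservables."""
import numpy as np

rng = np.random.default_rng(42)
n, periods = 500, 2
slope_covariate = 0.5         # effect of the observable confounder
unobserved_sd = 1.0           # time-invariant unobservable
true_effect = -3.0            # program reduces the loss outcome by 3 units

covariate = rng.normal(0, 1, n)
unobserved = rng.normal(0, unobserved_sd, n)
treated = (covariate + unobserved + rng.normal(0, 1, n)) > 0   # selection on both

# Outcome for t = 0 (pre) and t = 1 (post); the treatment effect only appears post.
y = np.empty((n, periods))
for t in range(periods):
    y[:, t] = (2.0 + slope_covariate * covariate + unobserved
               + true_effect * treated * t + rng.normal(0, 1, n))

# (1) Matching on the observable covariate + post-period difference in means.
controls = np.where(~treated)[0]
matches = [controls[np.argmin(np.abs(covariate[controls] - covariate[i]))]
           for i in np.where(treated)[0]]
matched_dim = y[treated, 1].mean() - y[matches, 1].mean()

# (2) Fixed-effects estimator: first-difference each unit, then compare the
# change for treated vs control units (unobservables cancel in the difference).
dy = y[:, 1] - y[:, 0]
fe_est = dy[treated].mean() - dy[~treated].mean()

print(f"true effect          : {true_effect:.2f}")
print(f"matching + diff-means: {matched_dim:.2f}  (biased by unobservables)")
print(f"fixed-effects (FD)   : {fe_est:.2f}")
```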

  19. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques.

    Directory of Open Access Journals (Sweden)

    Kelly W Jones

    Full Text Available Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented--from protected areas to payments for ecosystem services (PES)--to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing 'matching' to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods--an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators--due to the presence of unobservable bias--that lead to differences in conclusions about effectiveness. The Ecuador case

  20. Approaches to estimate body condition from slaughter records in reindeer

    Directory of Open Access Journals (Sweden)

    Anna Olofsson

    2008-12-01

    Full Text Available Long-term fluctuations in population densities of reindeer and caribou are common where pasture is the limiting resource. Pasture quality affects the nutritional status and production of the animals. Therefore, continuous information about changes in the grazing resources is important when making management decisions. The objective of this study was to investigate different possibilities of using routine and additional slaughter records as body condition indicators, and thereby indicators of pasture resources in the summer ranges of reindeer husbandry. Records from 696 reindeer slaughtered in the winter 2002/2003 were included in the study. We developed a model with carcass weight as body condition indicator and two different models combining fatness, conformation, carcass weight, and body size as body condition indicators. The results showed age- and sex-dependent differences between the variables, and differentiation of animal age and sex improved the precision of the models. Adjusting weight for body size also improved weight as a body condition indicator in adults. Conformation and fatness corresponded well with weight and body-size-adjusted weight and should preferably be included, together with carcass weight and body size measures, when estimating body condition from carcasses. Our analysis showed that using non-invasive slaughter records is a good and inexpensive method of estimating body condition in reindeer. Abstract in Swedish (translated): Approaches to estimating body condition in reindeer from slaughter records. Fluctuations in the density of reindeer and caribou populations over time are common where pasture is a limited resource, and pasture quality affects the animals' condition and production. Continuously updated information about changes in pasture resources is important for management decisions about these resources. The aim of this study was to evaluate different possible ways to use ...

  1. Parameter Estimation of Structural Equation Modeling Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dewi Kurnia Sari

    2016-05-01

    Full Text Available Leadership is a process of influencing, directing, or setting an example for employees in order to achieve the objectives of the organization, and it is a key element in the effectiveness of the organization. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, which is the commitment created by each individual for the betterment of the organization. The purpose of this research is to obtain a model of leadership style and organizational commitment with respect to job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted among 15 employees of Statistics FNI in Malang. The results showed that in the measurement model, all indicators significantly measured their respective latent variables. In the structural model, it was concluded that Leadership Style and Organizational Commitment have a significant direct effect on Job Satisfaction, and that Job Satisfaction has a significant effect on Employee Performance. The direct influence of Leadership Style and Organizational Commitment on Employee Performance was found to be insignificant.

  2. Estimation of irradiation temperature within the irradiation program Rheinsberg

    CERN Document Server

    Stephan, I; Prokert, F; Scholz, A

    2003-01-01

    The temperature monitoring within the irradiation programme Rheinsberg II was performed by diamond powder monitors. The method is based on the effect of temperature on the irradiation-induced increase of the diamond lattice constant and is described in a Russian code. In order to determine the irradiation temperature, the lattice constant is measured by means of an X-ray diffractometer after irradiation and subsequent isochronal annealing. The kink of the linearized temperature-lattice constant curves provides a value for the irradiation temperature, which has to be corrected according to the local neutron flux. The results of the lattice constant measurements show strong scatter, and there is also a systematic error. The results of temperature monitoring by diamond powder are therefore not satisfying. The most probable value lies between 255 C and 265 C and is near the value estimated from the thermal conditions of the irradiation experiments.

  3. Joko Tingkir program for estimating tsunami potential rapidly

    Energy Technology Data Exchange (ETDEWEB)

    Madlazim, E-mail: m-lazim@physics.its.ac.id; Hariyono, E., E-mail: m-lazim@physics.its.ac.id [Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Negeri Surabaya (UNESA), Jl. Ketintang, Surabaya 60231 (Indonesia)

    2014-09-25

    The purpose of the study was to estimate P-wave rupture durations (T_dur), dominant periods (T_d) and exceedance values (T_50Ex) simultaneously for local events, shallow earthquakes which occurred off the coast of Indonesia. Although all of the earthquakes had magnitudes greater than 6.3 and depths less than 70 km, some of them generated a tsunami while other events (Mw = 7.8) did not. Analysis of the above-stated parameters using Joko Tingkir helped in understanding the tsunami generation of these earthquakes. Measurements from vertical-component broadband P-wave velocity records and determination of the above-stated parameters provide a direct procedure for rapidly assessing the potential for tsunami generation. The results of the present study and the analysis of the seismic parameters helped explain why some events generated a tsunami while the others did not.

  4. A state-and-transition simulation modeling approach for estimating the historical range of variability

    Directory of Open Access Journals (Sweden)

    Kori Blankenship

    2015-04-01

    Full Text Available Reference ecological conditions offer important context for land managers as they assess the condition of their landscapes and provide benchmarks for desired future conditions. State-and-transition simulation models (STSMs) are commonly used to estimate reference conditions that can be used to evaluate current ecosystem conditions and to guide land management decisions and activities. The LANDFIRE program created more than 1,000 STSMs and used them to assess departure from a mean reference value for ecosystems in the United States. While the mean provides a useful benchmark, land managers and researchers are often interested in the range of variability around the mean. This range, frequently referred to as the historical range of variability (HRV), offers model users improved understanding of ecosystem function, more information with which to evaluate ecosystem change and potentially greater flexibility in management options. We developed a method for using LANDFIRE STSMs to estimate the HRV around the mean reference condition for each model state in ecosystems by varying the fire probabilities. The approach is flexible and can be adapted for use in a variety of ecosystems. HRV analysis can be combined with other information to help guide complex land management decisions.

  5. Estimating the Need for Palliative Radiation Therapy: A Benchmarking Approach

    Energy Technology Data Exchange (ETDEWEB)

    Mackillop, William J., E-mail: william.mackillop@krcc.on.ca [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada); Department of Public Health Sciences, Queen's University, Kingston, Ontario (Canada); Department of Oncology, Queen's University, Kingston, Ontario (Canada)]; Kong, Weidong [Cancer Care and Epidemiology, Queen's Cancer Research Institute, Queen's University, Kingston, Ontario (Canada)

    2016-01-01

    Purpose: Palliative radiation therapy (PRT) benefits many patients with incurable cancer, but the overall need for PRT is unknown. Our primary objective was to estimate the appropriate rate of use of PRT in Ontario. Methods and Materials: The Ontario Cancer Registry identified patients who died of cancer in Ontario between 2006 and 2010. Comprehensive RT records were linked to the registry. Multivariate analysis identified social and health system-related factors affecting the use of PRT, enabling us to define a benchmark population of patients with unimpeded access to PRT. The proportion of cases treated at any time (PRT_lifetime), the proportion of cases treated in the last 2 years of life (PRT_2y), and the number of courses of PRT per thousand cancer deaths were measured in the benchmark population. These benchmarks were standardized to the characteristics of the overall population, and province-wide PRT rates were then compared to benchmarks. Results: Cases diagnosed at hospitals with no RT on-site, residents of poorer communities, and those who lived farther from an RT center were significantly less likely than others to receive PRT. However, availability of RT at the diagnosing hospital was the dominant factor. Neither socioeconomic status nor distance from home to the nearest RT center had a significant effect on the use of PRT in patients diagnosed at a hospital with RT facilities. The benchmark population therefore consisted of patients diagnosed at a hospital with RT facilities. The standardized benchmark for PRT_lifetime was 33.9%, and the corresponding province-wide rate was 28.5%. The standardized benchmark for PRT_2y was 32.4%, and the corresponding province-wide rate was 27.0%. The standardized benchmark for the number of courses of PRT per thousand cancer deaths was 652, and the corresponding province-wide rate was 542. Conclusions: Approximately one-third of patients who die of cancer in Ontario need PRT, but many of them are never

  6. A computer program for the estimation of time of death

    DEFF Research Database (Denmark)

    Lynnerup, N

    1993-01-01

    In the 1960s Marshall and Hoare presented a "Standard Cooling Curve" based on their mathematical analyses of the postmortem cooling of bodies. Although fairly accurate under standard conditions, the "curve" or formula is based on the assumption that the ambient temperature is constant and that ... A computer program addressing the postmortem cooling of bodies is presented. It is proposed that by having a computer program that solves the equation, giving the length of the cooling period in response to a certain rectal temperature, and which allows easy comparison of multiple solutions, the uncertainties related to ambient temperature ...
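
    The computation described, solving a cooling equation for the length of the cooling period given a rectal temperature, can be sketched as below. The double-exponential cooling form and the constants A and B are generic placeholders, not the coefficients or corrections used by the program in the paper, and the constant-ambient-temperature assumption the paper criticizes is retained here for simplicity.

```python
"""Hedged sketch: numerically invert a generic double-exponential postmortem
cooling model for the elapsed cooling time, given a measured rectal
temperature and a constant ambient temperature.  A and B are placeholders."""
import math
from scipy.optimize import brentq

def cooling_fraction(t_hours, A=1.25, B=-0.0625):
    """Normalised temperature drop Q(t) = (T_rectal - T_ambient) / (T_0 - T_ambient)."""
    return A * math.exp(B * t_hours) - (A - 1.0) * math.exp(A * B / (A - 1.0) * t_hours)

def time_since_death(t_rectal, t_ambient, t_initial=37.2):
    """Solve Q(t) = measured fraction for t (hours) by bracketing the root."""
    q_measured = (t_rectal - t_ambient) / (t_initial - t_ambient)
    return brentq(lambda t: cooling_fraction(t) - q_measured, 1e-6, 72.0)

# Example: rectal temperature 30 C measured in a constant 18 C environment.
print(f"estimated cooling period: {time_since_death(30.0, 18.0):.1f} hours")
```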

  7. Estimating radiological consequences using the Java programming language

    International Nuclear Information System (INIS)

    Crawford, J.; Hayward, M.; Harris, F.; Domel, R.

    1998-01-01

    At the Australian Nuclear Science and Technology Organisation (ANSTO) a model is being developed to determine critical parameters affecting radioactive doses to humans following a release of radionuclides into the atmosphere. The Java programming language was chosen because of its Graphical User Interface (GUI) capabilities and its portability across computer platforms, which were requirements for the application, called RadCon. The mathematical models are applied over a 2D region, performing time-varying calculations of dose to humans for each grid point, according to user-selected options. The information combined includes: two-dimensional time-varying air and ground concentrations; transfer factors from soil to plant, plant to animal, and plant to humans; plant interception factors to determine the amount of radionuclide on plant surfaces; dosimetric data, such as dose conversion factors; and user-defined parameters, e.g. soil types, lifestyle, and diet of animals and humans. Details of the software requirements, pathway parameters and implementation of RadCon are given.
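
    A rough sketch of the kind of grid calculation described for RadCon is given below: time-varying air and ground concentrations are combined with transfer factors and dose conversion factors to accumulate a dose at every grid point. All pathway factors and concentration fields are invented placeholders for illustration; they are not RadCon parameters or data.

```python
"""Hedged sketch of a pathway-style dose accumulation over a 2D grid.
All numerical factors and the synthetic concentration fields are placeholders."""
import numpy as np

ny, nx, nt = 50, 50, 24                           # 2D grid, hourly steps for one day
rng = np.random.default_rng(1)
air_conc = rng.exponential(1.0, (nt, ny, nx))     # Bq m^-3 per time step (synthetic)
ground_conc = np.cumsum(air_conc, axis=0) * 0.01  # Bq m^-2, crude deposition proxy

BREATHING_RATE = 1.2          # m^3 h^-1            (placeholder)
DCF_INHALATION = 2.0e-9       # Sv Bq^-1            (placeholder)
DCF_GROUNDSHINE = 5.0e-12     # Sv h^-1 per Bq m^-2 (placeholder)
INTERCEPTION = 0.3            # fraction of deposit retained on plants (placeholder)
SOIL_TO_PLANT = 0.05          # soil-to-plant transfer factor (placeholder)

# Accumulate dose over all time steps for every grid point.
inhalation = (air_conc * BREATHING_RATE * DCF_INHALATION).sum(axis=0)
groundshine = (ground_conc * DCF_GROUNDSHINE).sum(axis=0)
# Ingestion pathway via plant contamination (very coarse placeholder chain).
plant_conc = ground_conc[-1] * (INTERCEPTION + SOIL_TO_PLANT)
ingestion = plant_conc * 0.4 * 2.0e-9             # intake (kg d^-1) * Sv Bq^-1, placeholders

total_dose = inhalation + groundshine + ingestion  # Sv per grid point
print("maximum grid-point dose: %.2e Sv" % total_dose.max())
```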

  8. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
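
    The paper's point about conveying uncertainty can be illustrated with a much simpler stand-in than its process simulation model: a Monte Carlo sketch that propagates uncertain size, productivity, and rework assumptions into a distribution of effort, reported as percentiles rather than a single figure. All distributions and values below are invented for illustration.

```python
"""Hedged sketch of conveying estimate uncertainty by simulation (a simple
Monte Carlo stand-in, not the process-simulation model used in the paper)."""
import numpy as np

rng = np.random.default_rng(7)
n_trials = 100_000

# Early-phase inputs as distributions (all values are illustrative).
size_ksloc = rng.triangular(80, 120, 200, n_trials)       # delivered source size (KSLOC)
productivity = rng.normal(3.2, 0.6, n_trials).clip(1.5)   # KSLOC per person-month
rework_factor = rng.uniform(1.05, 1.35, n_trials)         # defect-rework overhead

effort_pm = size_ksloc / productivity * rework_factor     # person-months

for p in (10, 50, 90):
    print(f"P{p:02d} effort: {np.percentile(effort_pm, p):7.0f} person-months")
```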

  9. A Machine Learning Approach to Estimate Riverbank Geotechnical Parameters from Sediment Particle Size Data

    Science.gov (United States)

    Iwashita, Fabio; Brooks, Andrew; Spencer, John; Borombovits, Daniel; Curwen, Graeme; Olley, Jon

    2015-04-01

    Assessing bank stability using geotechnical models traditionally involves the laborious collection of data on the bank and floodplain stratigraphy, as well as in-situ geotechnical data for each sedimentary unit within a river bank. The application of geotechnical bank stability models is therefore limited to sites where extensive field data have been collected, and their ability to provide predictions of bank erosion at the reach scale is limited without a very extensive and expensive field data collection program. Some challenges in the construction and application of riverbank erosion and hydraulic numerical models are their one-dimensionality, steady-state requirements, lack of calibration data, and nonuniqueness. Also, numerical models can often be too rigid with respect to detecting unexpected features like the onset of trends, non-linear relations, or patterns restricted to sub-samples of a data set. These shortcomings create the need for an alternative modelling approach capable of using available data. The Self-Organizing Maps (SOM) approach is well suited to the analysis of noisy, sparse, nonlinear, multidimensional, and scale-dependent data. It is a type of unsupervised artificial neural network with hybrid competitive-cooperative learning. In this work we present a method that uses a database of geotechnical data collected at over 100 sites throughout Queensland, Australia, to develop a modelling approach that enables geotechnical parameters (soil effective cohesion, friction angle, soil erodibility and critical stress) to be derived from sediment particle size data (PSD). The model framework and predicted values were evaluated using two methods: splitting the dataset into training and validation sets, and a bootstrap approach. The basis of bootstrap cross-validation is a leave-one-out strategy. This requires leaving one data value out of the training set while creating a new SOM to estimate that missing value based on the

  10. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available Abstract As technology scales for increased circuit density and performance, the management of power consumption in system-on-chip (SoC) designs is becoming critical. Today, having appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach that allows the consumption criterion to be taken into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach allows quite accurate power estimates to be obtained early in the design flow, thanks to the automation offered by the MDE approach.

  11. Model-Assisted Estimation of Tropical Forest Biomass Change: A Comparison of Approaches

    Directory of Open Access Journals (Sweden)

    Nikolai Knapp

    2018-05-01

    Full Text Available Monitoring of changes in forest biomass requires accurate transfer functions between remote sensing-derived changes in canopy height (ΔH) and the actual changes in aboveground biomass (ΔAGB). Different approaches can be used to accomplish this task: direct approaches link ΔH directly to ΔAGB, while indirect approaches are based on deriving AGB stock estimates for two points in time and calculating the difference. In some studies, direct approaches led to more accurate estimations, while, in others, indirect approaches led to more accurate estimations. It is unknown how each approach performs under different conditions and over the full range of possible changes. Here, we used a forest model (FORMIND) to generate a large dataset (>28,000 ha) of natural and disturbed forest stands over time. Remote sensing of forest height was simulated on these stands to derive canopy height models for each time step. Three approaches for estimating ΔAGB were compared: (i) the direct approach; (ii) the indirect approach and (iii) an enhanced direct approach (dir+tex), using ΔH in combination with canopy texture. Total prediction accuracies of the three approaches measured as root mean squared errors (RMSE) were RMSEdirect = 18.7 t ha−1, RMSEindirect = 12.6 t ha−1 and RMSEdir+tex = 12.4 t ha−1. Further analyses revealed height-dependent biases in the ΔAGB estimates of the direct approach, which did not occur with the other approaches. Finally, the three approaches were applied on radar-derived (TanDEM-X) canopy height changes on Barro Colorado Island (Panama). The study demonstrates the potential of forest modeling for improving the interpretation of changes observed in remote sensing data and for comparing different methodologies.
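
    The direct and indirect transfer functions can be contrasted on synthetic data, as in the rough sketch below: stocks follow a noisy height-based allometry, the direct approach regresses ΔAGB on ΔH, and the indirect approach differences two fitted stock estimates. The allometry, noise levels, and sample sizes are illustrative assumptions and have no connection to the FORMIND or TanDEM-X results reported above.

```python
"""Hedged sketch of the direct vs. indirect biomass-change estimators on
synthetic stands.  AGB stocks follow a noisy power law of canopy height; the
direct approach fits dAGB = a * dH, the indirect approach fits an allometry
AGB = b * H^c and differences the two stock estimates."""
import numpy as np

rng = np.random.default_rng(3)
n = 2000
h1 = rng.uniform(5, 40, n)                        # canopy height at time 1 (m)
h2 = (h1 + rng.normal(1.0, 3.0, n)).clip(2, 45)   # canopy height at time 2 (m)
agb = lambda h: 8.0 * h**1.3                      # "true" stock model (t/ha), illustrative
noise = rng.lognormal(0, 0.15, n)                 # per-stand deviation from the allometry
agb1, agb2 = agb(h1) * noise, agb(h2) * noise
d_agb, d_h = agb2 - agb1, h2 - h1

train = rng.random(n) < 0.5                       # split into training and validation halves

# Direct approach: fit dAGB = a * dH on the training half, predict on the rest.
a = np.sum(d_agb[train] * d_h[train]) / np.sum(d_h[train] ** 2)
pred_direct = a * d_h[~train]

# Indirect approach: fit AGB = b * H^c (log-log), difference the two stock estimates.
b_log, c = np.polynomial.polynomial.polyfit(np.log(h1[train]), np.log(agb1[train]), 1)
stock = lambda h: np.exp(b_log) * h**c
pred_indirect = stock(h2[~train]) - stock(h1[~train])

rmse = lambda p: np.sqrt(np.mean((p - d_agb[~train]) ** 2))
print(f"RMSE direct  : {rmse(pred_direct):6.1f} t/ha")
print(f"RMSE indirect: {rmse(pred_indirect):6.1f} t/ha")
```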

  12. Comparison between bottom-up and top-down approaches in the estimation of measurement uncertainty.

    Science.gov (United States)

    Lee, Jun Hyung; Choi, Jee-Hye; Youn, Jae Saeng; Cha, Young Joo; Song, Woonheung; Park, Ae Ja

    2015-06-01

    Measurement uncertainty is a metrological concept to quantify the variability of measurement results. There are two approaches to estimate measurement uncertainty. In this study, we sought to provide practical and detailed examples of the two approaches and compare the bottom-up and top-down approaches to estimating measurement uncertainty. We estimated measurement uncertainty of the concentration of glucose according to CLSI EP29-A guideline. Two different approaches were used. First, we performed a bottom-up approach. We identified the sources of uncertainty and made an uncertainty budget and assessed the measurement functions. We determined the uncertainties of each element and combined them. Second, we performed a top-down approach using internal quality control (IQC) data for 6 months. Then, we estimated and corrected systematic bias using certified reference material of glucose (NIST SRM 965b). The expanded uncertainties at the low glucose concentration (5.57 mmol/L) by the bottom-up approach and top-down approaches were ±0.18 mmol/L and ±0.17 mmol/L, respectively (all k=2). Those at the high glucose concentration (12.77 mmol/L) by the bottom-up and top-down approaches were ±0.34 mmol/L and ±0.36 mmol/L, respectively (all k=2). We presented practical and detailed examples for estimating measurement uncertainty by the two approaches. The uncertainties by the bottom-up approach were quite similar to those by the top-down approach. Thus, we demonstrated that the two approaches were approximately equivalent and interchangeable and concluded that clinical laboratories could determine measurement uncertainty by the simpler top-down approach.
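
    The two combinations described in the abstract can be sketched numerically as below: a bottom-up root-sum-of-squares of component standard uncertainties, and a top-down combination of long-term IQC imprecision with the uncertainty of the CRM-based bias correction, each expanded with a coverage factor k = 2. The component values are invented for illustration and are not the paper's data.

```python
"""Hedged numerical sketch of bottom-up vs top-down uncertainty estimation.
Bottom-up: combine individual standard uncertainties by root-sum-of-squares.
Top-down: combine long-term IQC imprecision with the bias uncertainty from a
certified reference material.  All component values are illustrative."""
import math

k = 2.0  # coverage factor for the expanded uncertainty

# Bottom-up: standard uncertainties of identified components (mmol/L, placeholders).
components = {
    "calibrator value": 0.05,
    "repeatability":    0.06,
    "between-run":      0.04,
    "sample volume":    0.02,
}
u_bottom_up = math.sqrt(sum(u ** 2 for u in components.values()))

# Top-down: long-term IQC imprecision plus CRM-based bias uncertainty (mmol/L, placeholders).
u_iqc = 0.07            # standard deviation from 6 months of IQC results
u_bias = 0.04           # uncertainty of the bias estimated against a CRM
u_top_down = math.sqrt(u_iqc ** 2 + u_bias ** 2)

print(f"bottom-up : u = {u_bottom_up:.3f}, U(k=2) = ±{k * u_bottom_up:.2f} mmol/L")
print(f"top-down  : u = {u_top_down:.3f}, U(k=2) = ±{k * u_top_down:.2f} mmol/L")
```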

  13. Estimation of net greenhouse gas balance using crop- and soil-based approaches: Two case studies

    International Nuclear Information System (INIS)

    Huang, Jianxiong; Chen, Yuanquan; Sui, Peng; Gao, Wansheng

    2013-01-01

    The net greenhouse gas balance (NGHGB), estimated by combining direct and indirect greenhouse gas (GHG) emissions, can reveal whether an agricultural system is a sink or source of GHGs. Currently, two types of methods, referred to here as crop-based and soil-based approaches, are widely used to estimate the NGHGB of agricultural systems on annual and seasonal crop timescales. However, the two approaches may produce contradictory results, and few studies have tested which approach is more reliable. In this study, we examined the two approaches using experimental data from an intercropping trial with straw removal and a tillage trial with straw return. The results of the two approaches provided different views of the two trials. In the intercropping trial, the NGHGB estimated by the crop-based approach indicated that monocultured maize (M) was a source of GHGs (−1315 kg CO2-eq ha−1), whereas maize–soybean intercropping (MS) was a sink (107 kg CO2-eq ha−1). When estimated by the soil-based approach, both cropping systems were sources (−3410 for M and −2638 kg CO2-eq ha−1 for MS). In the tillage trial, mouldboard ploughing (MP) and rotary tillage (RT) mitigated GHG emissions by 22,451 and 21,500 kg CO2-eq ha−1, respectively, as estimated by the crop-based approach. However, by the soil-based approach, both tillage methods were sources of GHGs: −3533 for MP and −2241 kg CO2-eq ha−1 for RT. The crop-based approach calculates a GHG sink on the basis of the returned crop biomass (and other organic matter input) and estimates considerably more GHG mitigation potential than that calculated from the variations in soil organic carbon storage by the soil-based approach. These results indicate that the crop-based approach estimates higher GHG mitigation benefits compared to the soil-based approach and may overestimate the potential of GHG mitigation in agricultural systems. - Highlights: • Net greenhouse gas balance (NGHGB) of

  14. Evaluating a physician leadership development program - a mixed methods approach.

    Science.gov (United States)

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

    Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking from learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yield results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit applying the evaluation strategy outlined in this study.

  15. Estimating radiological consequences using the Java programming language

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, J.; Hayward, M. [Australian Nuclear Science and Technology Organisation (ANSTO), Lucas Heights, NSW (Australia). Information Management Div; Harris, F.; Domel, R. [Australian Nuclear Science and Technology Organisation (ANSTO), Lucas Heights, NSW (Australia). Safety Div.

    1998-12-31

    At the Australian Nuclear Science and Technology Organisation (ANSTO) a model is being developed to determine critical parameters affecting radioactive doses to humans following a release of radionuclides into the atmosphere. The Java programming language was chosen because of its Graphical User Interface (GUI) capabilities and its portability across computer platforms, which were requirements for the application, called RadCon. The mathematical models are applied over a 2D region, performing time-varying calculations of dose to humans for each grid point, according to user-selected options. The information combined includes: two-dimensional time-varying air and ground concentrations; transfer factors from soil to plant, plant to animal, and plant to humans; plant interception factors to determine the amount of radionuclide on plant surfaces; dosimetric data, such as dose conversion factors; and user-defined parameters, e.g. soil types, lifestyle, and diet of animals and humans. Details of the software requirements, pathway parameters and implementation of RadCon are given. 10 refs., 2 tabs., 4 figs.

  16. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
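
    The core of the described method can be sketched as follows (this is a simplified illustration, not the bootLR package): when sample sensitivity is 100%, the lowest population sensitivity whose median sample sensitivity is still 100% satisfies p^n ≥ 0.5, i.e. p = 0.5^(1/n); binomial samples drawn at that sensitivity and at the observed specificity are then used to bootstrap the negative likelihood ratio (1 − sens)/spec.

```python
"""Hedged sketch of the bootstrap idea described in the abstract (not bootLR).
Uses the example figures from the abstract: 100 diseased, 100 non-diseased,
observed specificity 60%, observed sensitivity 100%."""
import numpy as np

rng = np.random.default_rng(2024)
n_disease, n_no_disease = 100, 100
observed_spec = 0.60
n_boot = 200_000

# Lowest population sensitivity most likely to still yield 100% sample sensitivity.
p_sens = 0.5 ** (1.0 / n_disease)        # about 0.9931 for n = 100
print(f"lowest plausible population sensitivity: {p_sens:.4f}")

sens_boot = rng.binomial(n_disease, p_sens, n_boot) / n_disease
spec_boot = rng.binomial(n_no_disease, observed_spec, n_boot) / n_no_disease
neg_lr = (1.0 - sens_boot) / np.clip(spec_boot, 1e-9, None)

upper = np.percentile(neg_lr, 97.5)
print(f"negative LR 95% CI: (0, {upper:.3f})")   # roughly (0, 0.05) for these inputs
```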

  17. Counting the cost: estimating the economic benefit of pedophile treatment programs.

    Science.gov (United States)

    Shanahan, M; Donato, R

    2001-04-01

    The principal objective of this paper is to identify the economic costs and benefits of pedophile treatment programs, incorporating both the tangible and intangible costs of sexual abuse to victims. Cost estimates of cognitive behavioral therapy programs in Australian prisons are compared against the tangible and intangible costs to victims of being sexually abused. Estimates are prepared that take into account a number of problematic issues. These include the range of possible recidivism rates for treatment programs; the uncertainty surrounding the number of child sexual molestation offences committed by recidivists; and the methodological problems associated with estimating the intangible costs of sexual abuse on victims. Despite the variation in parameter estimates that affect the cost-benefit analysis of pedophile treatment programs, it is found that the potential range of economic costs from child sexual abuse is substantial and the economic benefits to be derived from appropriate and effective treatment programs are high. Based on a reasonable set of parameter estimates, in-prison, cognitive therapy treatment programs for pedophiles are likely to be of net benefit to society. Despite this, a critical area of future research must include further methodological developments in estimating the quantitative impact of child sexual abuse in the community.

  18. Estimating the Counterfactual Impact of Conservation Programs on Land Cover Outcomes: The Role of Matching and Panel Regression Techniques

    Science.gov (United States)

    Jones, Kelly W.; Lewis, David J.

    2015-01-01

    Deforestation and conversion of native habitats continues to be the leading driver of biodiversity and ecosystem service loss. A number of conservation policies and programs are implemented—from protected areas to payments for ecosystem services (PES)—to deter these losses. Currently, empirical evidence on whether these approaches stop or slow land cover change is lacking, but there is increasing interest in conducting rigorous, counterfactual impact evaluations, especially for many new conservation approaches, such as PES and REDD, which emphasize additionality. In addition, several new, globally available and free high-resolution remote sensing datasets have increased the ease of carrying out an impact evaluation on land cover change outcomes. While the number of conservation evaluations utilizing ‘matching’ to construct a valid control group is increasing, the majority of these studies use simple differences in means or linear cross-sectional regression to estimate the impact of the conservation program using this matched sample, with relatively few utilizing fixed effects panel methods—an alternative estimation method that relies on temporal variation in the data. In this paper we compare the advantages and limitations of (1) matching to construct the control group combined with differences in means and cross-sectional regression, which control for observable forms of bias in program evaluation, to (2) fixed effects panel methods, which control for observable and time-invariant unobservable forms of bias, with and without matching to create the control group. We then use these four approaches to estimate forest cover outcomes for two conservation programs: a PES program in Northeastern Ecuador and strict protected areas in European Russia. In the Russia case we find statistically significant differences across estimators—due to the presence of unobservable bias—that lead to differences in conclusions about effectiveness. The Ecuador case

  19. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  20. Program Management Approach to the Territorial Development of Small Business

    Directory of Open Access Journals (Sweden)

    Natalia Aleksandrovna Knysh

    2016-06-01

    Full Text Available This article presents the results of research on the application, at the state level, of a program management approach to the territorial development of small business. Studying the main mechanism of state policy implementation for small business at the regional level, the authors reveal the need to take territorial specificity into account when government programs of small business development are formed. The analysis of the national practice of using the program management mechanism in the regional system of government support for small entrepreneurship was conducted on the example of Omsk region. The results of the analysis show the inefficiency of the current support system for small business and the need to create an integrated model of territorial programming, which would not only contribute to the qualitative development of small business but also ensure the efficient functioning of the program management mechanism. As a result, the authors have created a two-level model for programming the territorial development of small business, which makes it possible to satisfy the needs of entrepreneurship purposefully, taking into account the specificity of the internal and external environment of the region. The first level of the model is methodological and is based on a marketing approach (the concepts of place marketing and relationship marketing) to the operation of the program management mechanism. The second level is methodical: it offers a combination of flexible methods for managing the programming procedure (benchmarking, foresight, crowdsourcing and outsourcing). The given model raises the efficiency of management decisions made by state structures in the sphere of small business. Therefore, it is of interest to the government authorities responsible for regional and municipal small business support programs, as well

  1. Estimating BrAC from transdermal alcohol concentration data using the BrAC estimator software program.

    Science.gov (United States)

    Luczak, Susan E; Rosen, I Gary

    2014-08-01

    Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non-mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.

  2. The integrated approach to teaching programming in secondary school

    Directory of Open Access Journals (Sweden)

    Martynyuk A.A.

    2018-02-01

    Full Text Available The article considers an integrated approach to teaching programming with the use of computer modeling and 3D graphics technologies, which makes it possible to improve the quality of education. It is shown that this method allows students to systematize knowledge, improves motivation through the inclusion of relevant technologies, develops project-activity skills, strengthens interdisciplinary connections, and promotes the professional and personal self-determination of secondary school students.

  3. Dynamic Programming Approach for Construction of Association Rule Systems

    KAUST Repository

    Alsolami, Fawaz

    2016-11-18

    In the paper, an application of the dynamic programming approach to the optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages, first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the set of association rules constructed for each information system and a lower bound on the minimum possible cardinality of the rule set, based on information obtained during the algorithm's operation, as well as the corresponding results for length.

  4. Dynamic Programming Approach for Construction of Association Rule Systems

    KAUST Repository

    Alsolami, Fawaz; Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2016-01-01

    In the paper, an application of the dynamic programming approach to the optimization of association rules from the point of view of knowledge representation is considered. The association rule set is optimized in two stages, first for minimum cardinality and then for minimum length of rules. Experimental results present the cardinality of the set of association rules constructed for each information system and a lower bound on the minimum possible cardinality of the rule set, based on information obtained during the algorithm's operation, as well as the corresponding results for length.

  5. A Generalized Estimating Equations Approach to Model Heterogeneity and Time Dependence in Capture-Recapture Studies

    Directory of Open Access Journals (Sweden)

    Akanda Md. Abdus Salam

    2017-03-01

    Full Text Available Individual heterogeneity in capture probabilities and time dependence are fundamentally important for estimating the closed animal population parameters in capture-recapture studies. A generalized estimating equations (GEE) approach accounts for linear correlation among capture-recapture occasions, and individual heterogeneity in capture probabilities in a closed population capture-recapture individual heterogeneity and time variation model. The estimated capture probabilities are used to estimate animal population parameters. Two real data sets are used for illustrative purposes. A simulation study is carried out to assess the performance of the GEE estimator. A Quasi-Likelihood Information Criterion (QIC) is applied for the selection of the best fitting model. This approach performs well when the estimated population parameters depend on the individual heterogeneity and the nature of linear correlation among capture-recapture occasions.
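
    A minimal sketch of the modeling idea, fitting occasion-specific capture probabilities with a GEE under an exchangeable working correlation, is given below using synthetic capture histories and the statsmodels GEE implementation. The final Horvitz-Thompson-style abundance step is one common way to turn estimated capture probabilities into a population size and is an assumption here, not necessarily the estimator used in the paper.

```python
"""Hedged sketch: GEE-fitted capture probabilities from synthetic capture
histories, followed by a simple abundance estimate.  Not the paper's models."""
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
N_true, occasions = 300, 6
heterogeneity = rng.normal(0, 0.7, N_true)              # individual effect on capture
time_effect = rng.normal(0, 0.4, occasions)             # occasion-to-occasion variation
p = 1 / (1 + np.exp(-(-1.0 + heterogeneity[:, None] + time_effect)))
captures = rng.random((N_true, occasions)) < p          # full capture histories

seen = captures.any(axis=1)                             # only ever-captured animals are observed
obs = captures[seen]
n_obs = obs.shape[0]

df = pd.DataFrame({
    "caught": obs.ravel().astype(int),
    "animal": np.repeat(np.arange(n_obs), occasions),
    "occasion": np.tile(np.arange(occasions), n_obs),
})
X = pd.get_dummies(df["occasion"], prefix="occ", drop_first=True, dtype=float)
X = sm.add_constant(X)

model = sm.GEE(df["caught"], X, groups=df["animal"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
p_hat = np.asarray(result.predict(X)).reshape(n_obs, occasions)

# Horvitz-Thompson-style abundance from the probability of being seen at least once.
p_seen = 1 - np.prod(1 - p_hat, axis=1)
N_hat = np.sum(1.0 / p_seen)
print(f"observed animals: {n_obs}, estimated N: {N_hat:.0f} (true {N_true})")
```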

  6. Practical approaches to implementing facility wide equipment strengthening programs

    International Nuclear Information System (INIS)

    Kincaid, R.H.; Smietana, E.A.

    1989-01-01

    Equipment strengthening programs typically focus on components required to ensure operability of safety related equipment or to prevent the release of toxic substances. Survival of non-safety related equipment may also be crucial to ensure rapid recovery and minimize business interruption losses. Implementing a strengthening program for non-safety related equipment can be difficult due to the large amounts of equipment involved and limited budget availability. EQE has successfully implemented comprehensive equipment strengthening programs for a number of California corporations. Many of the lessons learned from these projects are applicable to DOE facilities. These include techniques for prioritizing equipment and three general methodologies for anchoring equipment. Pros and cons of each anchorage approach are presented along with typical equipment strengthening costs

  7. Closing the Education Gender Gap: Estimating the Impact of Girls' Scholarship Program in the Gambia

    Science.gov (United States)

    Gajigo, Ousman

    2016-01-01

    This paper estimates the impact of a school fee elimination program for female secondary students in The Gambia to reduce gender disparity in education. To assess the impact of the program, two nationally representative household surveys were used (1998 and 2002/2003). By 2002/2003, about half of the districts in the country had benefited from the…

  8. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
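
    The error propagation underlying such estimates can be sketched as below: discharge through a gate is approximated as the sum over gate segments of density times velocity times thickness times width, and the relative errors in thickness (from the cross-section approximation) and velocity are combined in quadrature for each segment. The gate geometry and the 10% and 5% error levels are illustrative assumptions, not values from the study.

```python
"""Hedged sketch of error propagation for flux-gate ice discharge, assuming
independent errors per segment.  All gate values below are invented."""
import numpy as np

RHO_ICE = 917.0                                                                   # kg m^-3

# Illustrative gate: 10 segments, each 200 m wide.
width = np.full(10, 200.0)                                                        # m
thickness = np.array([120, 180, 240, 300, 340, 330, 280, 220, 160, 110], float)  # m
velocity = np.array([40, 60, 90, 120, 140, 135, 110, 80, 55, 35], float)         # m yr^-1

sigma_h = 0.10 * thickness        # e.g. 10% thickness error from the cross-section approximation
sigma_v = 0.05 * velocity         # e.g. 5% velocity error from SAR offset tracking

flux_seg = RHO_ICE * velocity * thickness * width                    # kg yr^-1 per segment
sigma_seg = flux_seg * np.sqrt((sigma_h / thickness) ** 2 + (sigma_v / velocity) ** 2)

discharge_gt = flux_seg.sum() / 1e12                                 # Gt yr^-1
sigma_gt = np.sqrt(np.sum(sigma_seg ** 2)) / 1e12

print(f"ice discharge: {discharge_gt:.4f} ± {sigma_gt:.4f} Gt/yr "
      f"({100 * sigma_gt / discharge_gt:.1f}%)")
```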

  9. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    Science.gov (United States)

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  10. Parametric estimation in the wave buoy analogy - an elaborated approach based on energy considerations

    DEFF Research Database (Denmark)

    Montazeri, Najmeh; Nielsen, Ulrik Dam

    2014-01-01

    the ship’s wave-induced responses based on different statistical inferences including parametric and non-parametric approaches. This paper considers a concept to improve the estimate obtained by the parametric method for sea state estimation. The idea is illustrated by an analysis made on full-scale...

  11. A hybrid system approach to airspeed, angle of attack and sideslip estimation in Unmanned Aerial Vehicles

    KAUST Repository

    Shaqura, Mohammad; Claudel, Christian

    2015-01-01

    , low power autopilots in real-time. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error
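
    A simplified sketch of maximum likelihood tag-cardinality estimation for a single, error-free framed-slotted ALOHA session is shown below; the paper's model additionally handles multiple reader sessions and detection errors. Treating slots as independent, a slot is empty, singleton, or collided with probabilities p0 = (1 − 1/L)^n, p1 = (n/L)(1 − 1/L)^(n−1), and pc = 1 − p0 − p1, and the approximate multinomial likelihood is maximized over candidate n. Frame size and tag count are illustrative.

```python
"""Hedged sketch of ML tag-set cardinality estimation from one framed-slotted
ALOHA reader round (single error-free session; a simplification of the paper's
multi-session model with detection errors)."""
import numpy as np

rng = np.random.default_rng(5)
L = 128                                   # frame size (slots), illustrative
n_true = 230                              # unknown number of tags, illustrative

# Simulate one reader frame: each tag picks a slot uniformly at random.
slots = rng.integers(0, L, n_true)
counts = np.bincount(slots, minlength=L)
n_empty = int(np.sum(counts == 0))
n_single = int(np.sum(counts == 1))
n_coll = L - n_empty - n_single

def log_likelihood(n):
    """Approximate multinomial log-likelihood of the observed slot outcomes."""
    p0 = (1 - 1 / L) ** n
    p1 = (n / L) * (1 - 1 / L) ** (n - 1)
    pc = max(1.0 - p0 - p1, 1e-12)
    return n_empty * np.log(p0) + n_single * np.log(p1) + n_coll * np.log(pc)

candidates = np.arange(max(n_single, 1), 2000)
n_hat = candidates[np.argmax([log_likelihood(n) for n in candidates])]
print(f"observed (empty, singleton, collision) = ({n_empty}, {n_single}, {n_coll})")
print(f"ML estimate of tag cardinality: {n_hat} (true {n_true})")
```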

  13. Constraint Logic Programming approach to protein structure prediction

    Directory of Open Access Journals (Sweden)

    Fogolari Federico

    2004-11-01

    Full Text Available Abstract Background The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Results Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have been also exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. Conclusions The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying protein simplified models, which can be converted into realistic all atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies, resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  14. Constraint Logic Programming approach to protein structure prediction.

    Science.gov (United States)

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

    The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. As a test implementation, their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in the rapid software prototyping, in the easy way of encoding heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  15. Evaluations of carbon fluxes estimated by top-down and bottom-up approaches

    Science.gov (United States)

    Murakami, K.; Sasai, T.; Kato, S.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.; Nasahara, K.; Matsunaga, T.

    2013-12-01

    There are two types of approaches for estimating carbon fluxes using satellite observation data, referred to as top-down and bottom-up approaches. Many uncertainties, however, still remain in these carbon flux estimations, because the true values of carbon flux are still unclear and estimations vary according to the type of model (e.g. a transport model, a process-based model) and input data. The CO2 fluxes in these approaches are estimated by using different satellite data, such as the distribution of CO2 concentration in the top-down approach and land cover information (e.g. leaf area, surface temperature) in the bottom-up approach. Satellite-based CO2 flux estimations with reduced uncertainty can be used efficiently for identification of large emission areas and of the carbon stocks of forest areas. In this study, we evaluated the carbon flux estimates from the two approaches by comparing them with each other. The Greenhouse gases Observing SATellite (GOSAT) has been observing atmospheric CO2 concentrations since 2009. The GOSAT L4A data product provides monthly CO2 flux estimations for 64 sub-continental regions and is estimated by using GOSAT FTS SWIR L2 XCO2 data and an atmospheric tracer transport model. We used the GOSAT L4A CO2 flux as the top-down estimation and net ecosystem production (NEP) estimated by the diagnostic-type biosphere model BEAMS as the bottom-up estimation. BEAMS NEP covers only natural land CO2 flux, so we used the GOSAT L4A CO2 flux after subtraction of anthropogenic CO2 emissions and oceanic CO2 flux. We compared the two approaches in the temperate north-east Asia region. This region is covered by grassland and cropland (about 60%), forest (about 20%) and bare ground (about 20%). The temporal variation over a one-year period showed similar trends between the two approaches. Furthermore, we show comparisons of CO2 flux estimations in other sub-continental regions.

  16. On the implicit programming approach in a class of mathematical programs with equilibrium constraints

    Czech Academy of Sciences Publication Activity Database

    Outrata, Jiří; Červinka, Michal

    2009-01-01

    Roč. 38, 4B (2009), s. 1557-1574 ISSN 0324-8569 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10750506 Keywords : mathematical problem with equilibrium constraint * state constraints * implicit programming * calmness * exact penalization Subject RIV: BA - General Mathematics Impact factor: 0.378, year: 2009 http://library.utia.cas.cz/separaty/2010/MTR/outrata-on the implicit programming approach in a class of mathematical programs with equilibrium constraints.pdf

  17. Dose estimation in nuclear medicine patients. Implementation of a calculation program and methodology

    International Nuclear Information System (INIS)

    Prieto, C.; Espana, M.L.; Tomasi, L.; Lopez Franco, P.

    1998-01-01

    Our hospital is developing a nuclear medicine quality assurance program in order to comply with the medical exposure Directive 97/43/EURATOM and the legal requirements established in our legislation. This program includes the quality control of equipment and, in addition, the estimation of doses in patients undergoing nuclear medicine examinations. This paper is focused on the second aspect, and presents a new computer program, developed in our Department, in order to estimate the absorbed dose in different organs and the effective dose to patients, based upon the data from ICRP Publication 53 and its addendum. (Author) 16 refs

  18. Effects of Maternal Obesity on Fetal Programming: Molecular Approaches

    Science.gov (United States)

    Neri, Caterina; Edlow, Andrea G.

    2016-01-01

    Maternal obesity has become a worldwide epidemic. Obesity and a high-fat diet have been shown to have deleterious effects on fetal programming, predisposing offspring to adverse cardiometabolic and neurodevelopmental outcomes. Although large epidemiological studies have shown an association between maternal obesity and adverse outcomes for offspring, the underlying mechanisms remain unclear. Molecular approaches have played a key role in elucidating the mechanistic underpinnings of fetal malprogramming in the setting of maternal obesity. These approaches include, among others, characterization of epigenetic modifications, microRNA expression, the gut microbiome, the transcriptome, and evaluation of specific mRNA expression via quantitative reverse transcription polymerase chain reaction (RT-qPCR) in fetuses and offspring of obese females. This work will review the data from animal models and human fluids/cells regarding the effects of maternal obesity on fetal and offspring neurodevelopment and cardiometabolic outcomes, with a particular focus on molecular approaches. PMID:26337113

  19. Marketing the dental hygiene program. A public relations approach.

    Science.gov (United States)

    Nielsen, C

    1989-09-01

    Since 1980 there has been a decline in dental hygiene enrollment and graduates. Marketing dental hygiene programs, a recognized component of organizational survival, is necessary to meet societal demands for dental hygiene care now and in the future. The purpose of this article is to examine theories on the marketing of education and to describe a systematic approach to marketing dental hygiene education. Upon examination of these theories, the importance of analysis, planning, implementation, and evaluation/control of a marketing program is found to be essential. Application of the four p's of marketing--product/service, price, place, and promotion--is necessary to achieve marketing's goals and objectives and ultimately the program's mission and goals. Moreover, projecting a quality image of the dental hygiene program and the profession of dental hygiene must be included in the overall marketing plan. Results of an effective marketing plan should increase the number of quality students graduating from the dental hygiene program, ultimately contributing to the quality of oral health care in the community.

  20. The INEL approach: Environmental Restoration Program management and implementation methodology

    International Nuclear Information System (INIS)

    1996-01-01

    The overall objectives of the INEL Environmental Restoration (ER) Program management approach are to facilitate meeting mission needs through the successful implementation of a sound and effective project management philosophy. This paper outlines the steps taken to develop the ER program, and further explains the implementing tools and processes used to achieve what can be viewed as fundamental to a successful program. The various examples provided demonstrate how the strategies for implementing these operating philosophies are actually present and at work throughout the program, in spite of budget drills and organizational changes within DOE and the implementing contractor. A few of the challenges and successes of the INEL Environmental Restoration Program have included: a) completion of all enforceable milestones to date, b) acceleration of enforceable milestones, c) managing funds to reduce uncosted obligations at year end by utilizing greater than 99% of the FY-95 budget, d) an exemplary safety record, e) development of a strategy for partial delisting of the INEL by the year 2000, f) actively dealing with Natural Resource Damages Assessment issues, g) the achievement of significant project cost reductions, and h) implementation of a partnering charter and application of front-end quality principles.

  1. TETRA-COM: a comprehensive SPSS program for estimating the tetrachoric correlation.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2012-12-01

    We provide an SPSS program that implements descriptive and inferential procedures for estimating tetrachoric correlations. These procedures have two main purposes: (1) bivariate estimation in contingency tables and (2) constructing a correlation matrix to be used as input for factor analysis (in particular, the SPSS FACTOR procedure). In both cases, the program computes accurate point estimates, as well as standard errors and confidence intervals that are correct for any population value. For purpose (1), the program computes the contingency table together with five other measures of association. For purpose (2), the program checks the positive definiteness of the matrix, and if it is found not to be Gramian, performs a nonlinear smoothing procedure at the user's request. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  2. Technical Note: A comparison of two empirical approaches to estimate in-stream net nutrient uptake

    Science.gov (United States)

    von Schiller, D.; Bernal, S.; Martí, E.

    2011-04-01

    To establish the relevance of in-stream processes on nutrient export at catchment scale it is important to accurately estimate whole-reach net nutrient uptake rates that consider both uptake and release processes. Two empirical approaches have been used in the literature to estimate these rates: (a) the mass balance approach, which considers changes in ambient nutrient loads corrected by groundwater inputs between two stream locations separated by a certain distance, and (b) the spiralling approach, which is based on the patterns of longitudinal variation in ambient nutrient concentrations along a reach following the nutrient spiralling concept. In this study, we compared the estimates of in-stream net nutrient uptake rates of nitrate (NO3) and ammonium (NH4) and the associated uncertainty obtained with these two approaches at different ambient conditions using a data set of monthly samplings in two contrasting stream reaches during two hydrological years. Overall, the rates calculated with the mass balance approach tended to be higher than those calculated with the spiralling approach only at high ambient nitrogen (N) concentrations. Uncertainty associated with these estimates also differed between both approaches, especially for NH4 due to the general lack of significant longitudinal patterns in concentration. The advantages and disadvantages of each of the approaches are discussed.

  3. Post-classification approaches to estimating change in forest area using remotely sensed auxiliary data.

    Science.gov (United States)

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  4. Estimation of direction of arrival of a moving target using subspace based approaches

    Science.gov (United States)

    Ghosh, Ripul; Das, Utpal; Akula, Aparna; Kumar, Satish; Sardana, H. K.

    2016-05-01

    In this work, array processing techniques based on subspace decomposition of the signal have been evaluated for estimation of the direction of arrival of moving targets using acoustic signatures. Three subspace based approaches - Incoherent Wideband Multiple Signal Classification (IWM), Least Squares-Estimation of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) and Total Least Squares-ESPRIT (TLS-ESPRIT) - are considered. Their performance is compared with conventional time delay estimation (TDE) approaches such as Generalized Cross Correlation (GCC) and Average Square Difference Function (ASDF). Performance evaluation has been conducted on experimentally generated data consisting of acoustic signatures of four different types of civilian vehicles moving in defined geometrical trajectories. Mean absolute error and standard deviation of the DOA estimates w.r.t. ground truth are used as performance evaluation metrics. Lower statistical values of mean error confirm the superiority of subspace based approaches over TDE based techniques. Amongst the compared methods, LS-ESPRIT indicated better performance.
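
    Subspace DOA estimators of this family share the same core steps: sample covariance, eigendecomposition, and projection onto the noise subspace. The sketch below is a minimal narrowband MUSIC example for a uniform linear array in Python; the array geometry, source angles and noise level are hypothetical, and the wideband and ESPRIT variants evaluated in this record are not reproduced here.

```python
import numpy as np

def music_doa(snapshots, n_sources, d=0.5):
    """Narrowband MUSIC DOA estimates for a uniform linear array.
    snapshots: (n_sensors, n_snapshots) complex array; d: spacing in wavelengths."""
    n_sensors = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # sample covariance
    _, eigvecs = np.linalg.eigh(R)                               # ascending eigenvalues
    En = eigvecs[:, : n_sensors - n_sources]                     # noise subspace
    grid = np.linspace(-90.0, 90.0, 721)
    a = np.exp(-2j * np.pi * d * np.outer(np.arange(n_sensors),
                                          np.sin(np.deg2rad(grid))))
    spectrum = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
    peaks = [i for i in range(1, grid.size - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks = sorted(peaks, key=lambda i: spectrum[i])[-n_sources:]
    return np.sort(grid[peaks])

# toy data: two narrowband sources at -20 and 35 degrees on an 8-element ULA
rng = np.random.default_rng(0)
m, snaps, doas = 8, 200, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
noise = rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps))
X = A @ S + 0.1 * noise
print(music_doa(X, n_sources=2))   # should be close to [-20, 35]
```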

  5. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.

  6. An evolutionary approach to real-time moment magnitude estimation via inversion of displacement spectra

    Science.gov (United States)

    Caprio, M.; Lancieri, M.; Cua, G. B.; Zollo, A.; Wiemer, S.

    2011-01-01

    We present an evolutionary approach for magnitude estimation for earthquake early warning based on real-time inversion of displacement spectra. The Spectrum Inversion (SI) method estimates magnitude and its uncertainty by inferring the shape of the entire displacement spectral curve based on the part of the spectra constrained by available data. The method consists of two components: 1) estimating seismic moment by finding the low-frequency plateau Ω0, the corner frequency fc and the attenuation factor (Q) that best fit the observed displacement spectra assuming a Brune ω-squared model, and 2) estimating magnitude and its uncertainty based on the estimate of seismic moment. A novel characteristic of this method is that it does not rely on empirically derived relationships, but rather involves direct estimation of quantities related to the moment magnitude. SI magnitude and uncertainty estimates are updated each second following the initial P detection. We tested the SI approach on broadband and strong motion waveform data from 158 Southern California events and 25 Japanese events, for a combined magnitude range of 3 ≤ M ≤ 7. Based on the performance evaluated on this dataset, the SI approach can potentially provide stable estimates of magnitude within 10 seconds from the initial earthquake detection.
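
    A stripped-down version of the first component (fitting the plateau and corner frequency of an ω-squared spectrum) can be sketched as follows. The synthetic spectrum, the neglect of attenuation (Q) and the moment-scaling constant are all illustrative assumptions, not the SI implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc):
    """Brune omega-squared displacement spectrum (attenuation neglected here)."""
    return omega0 / (1.0 + (f / fc) ** 2)

# synthetic "observed" portion of a displacement spectrum (illustrative only)
rng = np.random.default_rng(1)
freqs = np.linspace(0.2, 10.0, 80)
observed = brune_spectrum(freqs, omega0=2.0e-3, fc=1.5) * rng.lognormal(0.0, 0.1, freqs.size)

# fit the plateau and corner frequency to whatever band is constrained so far
(omega0_hat, fc_hat), _ = curve_fit(brune_spectrum, freqs, observed, p0=[1e-3, 1.0])

# convert the plateau to seismic moment and moment magnitude; the constant
# (distance, radiation pattern, velocity and density terms) is a placeholder
const = 1.0e16                        # hypothetical site/path correction factor
m0 = const * omega0_hat               # seismic moment in N*m (illustrative scaling)
mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
print(f"Omega0 = {omega0_hat:.2e}, fc = {fc_hat:.2f} Hz, Mw ~ {mw:.2f}")
```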

  7. Mathematical solution of multilevel fractional programming problem with fuzzy goal programming approach

    Science.gov (United States)

    Lachhwani, Kailash; Poonia, Mahaveer Prasad

    2012-08-01

    In this paper, we show a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as for the control vectors of the higher-level decision makers, are defined by determining individual optimal solutions for each of the level decision makers. A possible relaxation of the higher-level decisions is considered to avoid decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used to achieve the highest degree of each of the membership goals by minimizing negative deviational variables. We also provide a sensitivity analysis, with the help of a numerical example, to show how the solution responds to variations of the tolerance values on the decision vectors.

  8. Best estimate LB LOCA approach based on advanced thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Sauvage, J.Y.; Gandrille, J.L.; Gaurrand, M.; Rochwerger, D.; Thibaudeau, J.; Viloteau, E.

    2004-01-01

    Improvements achieved in thermal-hydraulics with the development of Best Estimate computer codes have led a number of Safety Authorities to recommend realistic analyses instead of conservative calculations. The potential of a Best Estimate approach for the analysis of LOCAs led FRAMATOME to enter early into the development, with CEA and EDF, of the second-generation code CATHARE, and then of an LBLOCA BE methodology with BWNT following the Code Scaling, Applicability and Uncertainty (CSAU) procedure. CATHARE and TRAC are the basic tools for LOCA studies which will be performed by FRAMATOME according to either a deterministic better estimate (dbe) methodology or a Statistical Best Estimate (SBE) methodology. (author)

  9. Portfolio optimization in enhanced index tracking with goal programming approach

    Science.gov (United States)

    Siew, Lam Weng; Jaaman, Saiful Hafizah Hj.; Ismail, Hamizun bin

    2014-09-01

    Enhanced index tracking is a popular form of passive fund management in the stock market. Enhanced index tracking aims to generate excess return over the return achieved by the market index without purchasing all of the stocks that make up the index. This can be done by establishing an optimal portfolio that maximizes the mean return and minimizes the risk. The objective of this paper is to determine the portfolio composition and performance using a goal programming approach in enhanced index tracking and to compare it to the market index. Goal programming is a branch of multi-objective optimization which can handle decision problems that involve the two competing goals in enhanced index tracking: maximizing the mean return and minimizing the risk. The results of this study show that the optimal portfolio obtained with the goal programming approach is able to outperform the Malaysian market index, the FTSE Bursa Malaysia Kuala Lumpur Composite Index, through a higher mean return and lower risk without purchasing all the stocks in the index.
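
    The trade-off between a return goal and a risk goal can be written as a small linear goal program. The following Python sketch uses hypothetical returns, risk scores and goal levels (and a linear risk proxy so that an LP solver applies); it minimizes under-achievement of the return goal together with over-achievement of the risk goal and is not the formulation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical monthly mean returns and a linear risk score (e.g. mean absolute
# deviation) for four stocks; the goal levels are illustrative only
mean_ret = np.array([0.012, 0.009, 0.015, 0.007])
risk     = np.array([0.05, 0.03, 0.08, 0.02])
return_goal, risk_goal = 0.011, 0.04

n = mean_ret.size
# decision vector: [w_1..w_n, d_ret-, d_ret+, d_risk-, d_risk+]
c = np.r_[np.zeros(n), 1.0, 0.0, 0.0, 1.0]      # penalise under-return and over-risk

A_eq = np.zeros((3, n + 4))
A_eq[0, :n], A_eq[0, n], A_eq[0, n + 1] = mean_ret, 1, -1     # return goal
A_eq[1, :n], A_eq[1, n + 2], A_eq[1, n + 3] = risk, 1, -1     # risk goal
A_eq[2, :n] = 1.0                                             # fully invested
b_eq = np.array([return_goal, risk_goal, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 4))
print("weights:", np.round(res.x[:n], 3), "objective:", round(res.fun, 6))
```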

  10. A Proposal for the Common Safety Approach of Space Programs

    Science.gov (United States)

    Grimard, Max

    2002-01-01

    For all applications, businesses and systems related to Space programs, Quality is mandatory and is a key factor for the technical as well as the economic performance. Up to now, the differences between applications (launchers, manned space-flight, sciences, telecommunications, Earth observation, planetary exploration, etc.) and the differences in technical culture and background of the leading countries (USA, Russia, Europe) have generally led to different approaches in terms of standards and processes for Quality. At a time when international cooperation is quite usual for institutional programs and globalization is the key word for commercial business, it is considered of prime importance to aim at common standards and approaches for Quality in Space Programs. For that reason, the International Academy of Astronautics has set up a Study Group whose mandate is to "Make recommendations to improve the Quality, Reliability, Efficiency, and Safety of space programmes, taking into account the overall environment in which they operate: economical constraints, harsh environments, space weather, long life, no maintenance, autonomy, international co-operation, norms and standards, certification." The paper will introduce the activities of this Study Group, describing a first list of topics which should be addressed. Through this paper it is expected to open the discussion, to update and enlarge this list of topics, and to call for contributors to this Study Group.

  11. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  12. A singular-value decomposition approach to X-ray spectral estimation from attenuation data

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1986-01-01

    A singular-value decomposition (SVD) approach is described for estimating the exposure-rate spectral distributions of X-rays from attenuation data measured with various filtrations. This estimation problem with noisy measurements is formulated as the problem of solving an ill-conditioned system of linear equations. The principle of the SVD approach is that a response matrix, representing the X-ray attenuation effect of the filtrations at various energies, can be expanded into a summation of inherent component matrices, and thereby the spectral distributions can be represented as a linear combination of component curves. A criterion function is presented for choosing the components needed to form a reliable estimate. The feasibility of the proposed approach is studied in detail in a computer simulation using a hypothetical X-ray spectrum. Application results for the spectral distributions emitted from a therapeutic X-ray generator are shown. Finally, some advantages of this approach are pointed out. (orig.)
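
    The core idea of discarding the poorly determined singular components of an ill-conditioned response matrix can be illustrated in a few lines of NumPy. The response matrix, noise level and truncation threshold below are hypothetical stand-ins, not the criterion function proposed in the paper.

```python
import numpy as np

def truncated_svd_solve(A, y, rel_tol=1e-3):
    """Solve A x ~= y while discarding singular components below rel_tol * s_max,
    a standard remedy for ill-conditioned response (attenuation) matrices."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]                    # simple truncation criterion
    x = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
    return x, int(keep.sum())

# toy ill-conditioned response matrix and noisy attenuation measurements
rng = np.random.default_rng(2)
A = np.vander(np.linspace(0.1, 1.0, 12), 8, increasing=True)  # nearly collinear columns
true_spectrum = np.abs(rng.standard_normal(8))
y = A @ true_spectrum + 1e-4 * rng.standard_normal(12)

estimate, n_kept = truncated_svd_solve(A, y)
print(n_kept, "components kept;", np.round(estimate, 3))
```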

  13. Toward a global space exploration program: A stepping stone approach

    Science.gov (United States)

    Ehrenfreund, Pascale; McKay, Chris; Rummel, John D.; Foing, Bernard H.; Neal, Clive R.; Masson-Zwaan, Tanja; Ansdell, Megan; Peter, Nicolas; Zarnecki, John; Mackwell, Steve; Perino, Maria Antionetta; Billings, Linda; Mankins, John; Race, Margaret

    2012-01-01

    In response to the growing importance of space exploration in future planning, the Committee on Space Research (COSPAR) Panel on Exploration (PEX) was chartered to provide independent scientific advice to support the development of exploration programs and to safeguard the potential scientific assets of solar system objects. In this report, PEX elaborates a stepwise approach to achieve a new level of space cooperation that can help develop world-wide capabilities in space science and exploration and support a transition that will lead to a global space exploration program. The proposed stepping stones are intended to transcend cross-cultural barriers, leading to the development of technical interfaces and shared legal frameworks and fostering coordination and cooperation on a broad front. Input for this report was drawn from expertise provided by COSPAR Associates within the international community and via the contacts they maintain in various scientific entities. The report provides a summary and synthesis of science roadmaps and recommendations for planetary exploration produced by many national and international working groups, aiming to encourage and exploit synergies among similar programs. While science and technology represent the core and, often, the drivers for space exploration, several other disciplines and their stakeholders (Earth science, space law, and others) should be more robustly interlinked and involved than they have been to date. The report argues that a shared vision is crucial to this linkage, and to providing a direction that enables new countries and stakeholders to join and engage in the overall space exploration effort. Building a basic space technology capacity within a wider range of countries, ensuring new actors in space act responsibly, and increasing public awareness and engagement are concrete steps that can provide a broader interest in space exploration, worldwide, and build a solid basis for program sustainability. By engaging

  14. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    Science.gov (United States)

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of the mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Thus, clinical laboratories must provide results as accurately as possible. Measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them, together with the uncertainties related to the calibrator-assigned values, to obtain a combined uncertainty and, finally, to calculate the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but considering data from internal and external quality control schemes to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for the proficiency testing approach using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either of the two approaches could be used to estimate the measurement uncertainty of whole blood mass concentration tacrolimus values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
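
    The arithmetic behind a top-down combination of uncertainty components is a root-sum-of-squares followed by multiplication with a coverage factor; the relative uncertainty values in the short sketch below are made-up placeholders, not the study's data.

```python
import math

# hypothetical relative standard uncertainties (in %) for the single laboratory
# validation approach; none of these figures come from the study itself
u_imprecision = 5.0   # long-term intermediate imprecision (internal QC)
u_bias        = 2.5   # bias estimated against a certified reference material
u_calibrator  = 1.5   # uncertainty of the calibrator-assigned values

u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_calibrator**2)
u_expanded = 2.0 * u_combined          # coverage factor k = 2 (~95 % coverage)
print(f"combined = {u_combined:.1f} %, expanded = {u_expanded:.1f} %")
```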

  15. BAESNUM, a conversational computer program for the Bayesian estimation of a parameter by a numerical method

    International Nuclear Information System (INIS)

    Colombo, A.G.; Jaarsma, R.J.

    1982-01-01

    This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input the program requires the law and parameters of the prior distribution and the sample data. As output it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described
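
    A numerical Bayes update of the kind the report describes (prior evaluated on a grid, multiplied by a binomial likelihood and renormalized) can be sketched as follows; the lognormal prior parameters and the failure counts are hypothetical, and BAESNUM itself supports six prior laws rather than the single one shown here.

```python
import numpy as np
from scipy.stats import lognorm, binom

# grid over the parameter (here a demand failure probability); a lognormal prior
# with invented parameters stands in for one of the program's prior laws
grid = np.linspace(1e-5, 0.2, 2000)
dx = grid[1] - grid[0]
prior = lognorm.pdf(grid, s=1.0, scale=0.01)
prior /= (prior * dx).sum()

# sample data: k failures observed in n demands, binomial likelihood
k, n = 3, 150
likelihood = binom.pmf(k, n, grid)

posterior = prior * likelihood                 # Bayes' theorem, numerically
posterior /= (posterior * dx).sum()

print("posterior mean:", float((grid * posterior * dx).sum()))
```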

  16. Estimating the return on investment in disease management programs using a pre-post analysis.

    Science.gov (United States)

    Fetterolf, Donald; Wennberg, David; Devries, Andrea

    2004-01-01

    Disease management programs have become increasingly popular over the past 5-10 years. Recent increases in overall medical costs have precipitated new concerns about the cost-effectiveness of medical management programs, concerns that have extended to the directors of these programs. Initial success of the disease management movement is being challenged on the grounds that reported results have been the result of the application of faulty, if intuitive, methodologies. This paper discusses the use of "pre-post" methodology approaches in the analysis of disease management programs, and areas where application of this approach can result in spurious results and incorrect financial outcome assessments. The paper includes a checklist of these items for use by operational staff working with the programs, and a comprehensive bibliography that addresses many of the issues discussed.

  17. A program approach for site safety at oil spills

    International Nuclear Information System (INIS)

    Whipple, F.L.; Glenn, S.P.; Ocken, J.J.; Ott, G.L.

    1993-01-01

    When OSHA developed the hazardous waste operations (Hazwoper) regulations (29 CFR 1910.120), members of the response community envisioned a separation of oil and "hazmat" response operations. Organizations that deal with oil spills have had difficulty applying Hazwoper regulations to oil spill operations. This hinders meaningful implementation of the standard for their personnel. We should approach oil spills with the same degree of caution that is applied to hazmat response. Training frequently does not address the safety of oil spill response operations. Site-specific safety and health plans often are neglected or omitted. Certain oils expose workers to carcinogens, as well as chronic and acute hazards. Significant physical hazards are most important. In responding to oil spills, these hazards must be addressed. It is the authors' contention that a need exists for a safety program at oil spill sites. Gone are the days of labor pool hires cleaning up spills in jeans and sneakers. The key to meaningful programs for oil spills is the application of controls focused on relevant safety risks rather than minimal chemical exposure hazards. Working with concerned reviewers from other agencies and organizations, the authors have developed a general safety and health program for oil spill response. It is intended to serve as the basis for organizations to customize their own written safety and health program (required by OSHA). It also provides a separate generic site safety plan (checklist) for emergency-phase oil spill operations and for long-term post-emergency-phase operations.

  18. An Integer Programming Approach to Solving Tantrix on Fixed Boards

    Directory of Open Access Journals (Sweden)

    Yushi Uno

    2012-03-01

    Full Text Available Tantrix (Tantrix® is a registered trademark of Colour of Strategy Ltd. in New Zealand, and of TANTRIX JAPAN in Japan, respectively, under the license of M. McManaway, the inventor) is a puzzle in which a loop is made by connecting lines drawn on hexagonal tiles, and the objective of this research is to solve it by computer. For this purpose, we first give a problem setting of solving Tantrix as making a loop on a given fixed board. We then formulate it as an integer program by describing the rules of Tantrix as its constraints, and solve it with a mathematical programming solver to obtain a solution. As a result, we establish a formulation that can solve Tantrix instances of moderate size, and even when the solutions obtained from the elementary constraints alone are invalid, we resolve this by introducing additional constraints and re-solving. With this approach we succeeded in solving Tantrix instances of size up to 60.

  19. Novel approaches to the estimation of intake and bioavailability of radiocaesium in ruminants grazing forested areas

    International Nuclear Information System (INIS)

    Mayes, R.W.; Lamb, C.S.; Beresford, N.A.

    1994-01-01

    It is difficult to measure transfer of radiocaesium to the tissues of forest ruminants because they can potentially ingest a wide range of plant types. Measurements on undomesticated forest ruminants incur further difficulties. Existing techniques of estimating radiocaesium intake are imprecise when applied to forest systems. New approaches to measure this parameter are discussed. Two methods of intake estimation are described and evaluated. In the first method, radiocaesium intake is estimated from the radiocaesium activity concentrations of plants, combined with estimates of dry-matter (DM) intake and plant species composition of the diet, using plant and orally-dosed hydrocarbons (n-alkanes) as markers. The second approach estimates the total radiocaesium intake of an animal from the rate of excretion of radiocaesium in the faeces and an assumed value for the apparent absorption coefficient. Estimates of radiocaesium intake, using these approaches, in lactating goats and adult sheep were used to calculate transfer coefficients for milk and muscle; these compared favourably with transfer coefficients previously obtained under controlled experimental conditions. Potential variations in bioavailability of dietary radiocaesium sources to forest ruminants have rarely been considered. Approaches that can be used to describe bioavailability, including the true absorption coefficient and in vitro extractability, are outlined

  20. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    David A. Swanson

    2012-07-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.
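
    A deliberately coarse numerical sketch of the census-survival idea follows: two age distributions ten years apart give cohort change ratios, which are chained into a survivorship column with radix l0 = 1 and summed into an approximate life expectancy. The population counts, the ten-year abridgement and the treatment of the open-ended age group are all illustrative simplifications, not the authors' procedure.

```python
import numpy as np

# two hypothetical census counts ten years apart, in ten-year age groups
# (0-9, 10-19, ..., 80+); migration assumed negligible
pop_t   = np.array([1000, 950, 900, 860, 800, 700, 550, 350, 150], float)
pop_t10 = np.array([ 980, 960, 930, 880, 820, 720, 560, 330,  90], float)

# cohort change ratios: the cohort aged x..x+9 at time t is aged x+10..x+19 at t+10
ccr = pop_t10[1:] / pop_t[:-1]

# chain the ratios into a survivorship column with radix l0 = 1
l = np.concatenate(([1.0], np.cumprod(ccr)))

# crude abridged life table: person-years per ten-year interval, then e0
L = 10.0 * (l[:-1] + l[1:]) / 2.0
e0 = L.sum() + 10.0 * l[-1] / 2.0      # rough allowance for the open-ended group
print(f"illustrative life expectancy at birth ~ {e0:.1f} years")
```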

  1. Using cohort change ratios to estimate life expectancy in populations with negligible migration: A new approach

    Directory of Open Access Journals (Sweden)

    Lucky Tedrow

    2012-01-01

    Full Text Available Census survival methods are the oldest and most widely applicable methods of estimating adult mortality, and for populations with negligible migration they can provide excellent results. The reason for this ubiquity is threefold: (1) their data requirements are minimal in that only two successive age distributions are needed; (2) the two successive age distributions are usually easily obtained from census counts; and (3) the method is straightforward in that it requires neither a great deal of judgment nor “data-fitting” techniques to implement. This ubiquity is in contrast to other methods, which require more data, as well as judgment and, often, data fitting. In this short note, the new approach we demonstrate is that life expectancy at birth can be computed by using census survival rates in combination with an identity whereby the radix of a life table is equal to 1 (l0 = 1.00). We point out that our suggested method is less involved than the existing approach. We compare estimates using our approach against other estimates, and find it works reasonably well. As well as some nuances and cautions, we discuss the benefits of using this approach to estimate life expectancy, including the ability to develop estimates of average remaining life at any age. We believe that the technique is worthy of consideration for use in estimating life expectancy in populations that experience negligible migration.

  2. Using Balanced Scorecard (BSC) approach to improve ergonomics programs.

    Science.gov (United States)

    Fernandes, Marcelo Vicente Forestieri

    2012-01-01

    The purpose of this paper is to propose foundations for a theory of using the Balanced Scorecard (BSC) methodology to improve the strategic view of ergonomics inside organizations. This approach may help to promote a better understanding of investing in an ergonomics program to obtain good results in quality and production, as well as health maintenance. The basics of the balanced scorecard are explained, along with how ergonomists could use it to address strategic enterprise demands. Implications of this viewpoint for the development of a new methodology for ergonomics strategy views are offered.

  3. Gas contract portfolio management: a stochastic programming approach

    International Nuclear Information System (INIS)

    Haurie, A.; Smeers, Y.; Zaccour, G.

    1991-01-01

    This paper deals with a stochastic programming model which complements long-range market simulation models generating scenarios concerning the evolution of demand and prices for gas in different market segments. A gas company has to negotiate contracts with lengths ranging from one to twenty years. This stochastic model is designed to assess the risk associated with committing the gas production capacity of the company to these market segments. Different approaches are presented to overcome the difficulties associated with the very large size of the resulting optimization problem.

  4. Solutions to estimation problems for scalar hamilton-jacobi equations using linear programming

    KAUST Repository

    Claudel, Christian G.; Chamoin, Timothee; Bayen, Alexandre M.

    2014-01-01

    This brief presents new convex formulations for solving estimation problems in systems modeled by scalar Hamilton-Jacobi (HJ) equations. Using a semi-analytic formula, we show that the constraints resulting from a HJ equation are convex, and can be written as a set of linear inequalities. We use this fact to pose various (and seemingly unrelated) estimation problems related to traffic flow engineering as a set of linear programs. In particular, we solve data assimilation and data reconciliation problems for estimating the state of a system when the model and measurement constraints are incompatible. We also solve traffic estimation problems, such as travel time estimation or density estimation. For all these problems, a numerical implementation is performed using experimental data from the Mobile Century experiment. In the context of reproducible research, the code and data used to compute the results presented in this brief have been posted online and are accessible to regenerate the results. © 2013 IEEE.
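
    As a generic illustration of posing an estimation problem as a linear program, the sketch below reconciles noisy measurements with a set of linear inequality constraints by minimizing the L1 deviation. The monotonicity constraint and the numbers are placeholders; they are not the HJ-derived constraints or the Mobile Century data used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def reconcile(measurements, A_model, b_model):
    """L1 data reconciliation: find the state closest (in L1) to the measurements
    among states satisfying the linear inequality constraints A_model @ x <= b_model."""
    m = np.asarray(measurements, float)
    n = m.size
    # decision vector [x, t]; minimise sum(t) with t >= |x - m|
    c = np.r_[np.zeros(n), np.ones(n)]
    A_ub = np.block([
        [np.eye(n), -np.eye(n)],          #  x - t <= m
        [-np.eye(n), -np.eye(n)],         # -x - t <= -m
        [A_model, np.zeros((A_model.shape[0], n))],
    ])
    b_ub = np.r_[m, -m, b_model]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# toy example: three quantities that the model says must be non-increasing
meas = [42.0, 55.0, 40.0]                            # incompatible with the model
A = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])   # x2 <= x1, x3 <= x2
print(reconcile(meas, A, b_model=np.zeros(2)))
```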

  5. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    Full Text Available We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using the condition monitoring information and degradation data of such systems under a dynamic operating scenario. In this sense, this paper differs from the existing methods, which consider only the static scenario without using real-time information and which aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and the real-time information of an individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.

  6. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    Science.gov (United States)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  7. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with the disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
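
    Casting a control condition as a semidefinite program typically means writing it as a linear matrix inequality and handing it to an SDP solver. The Python/CVXPY sketch below does this for a generic Lyapunov-type certificate on a made-up two-state system matrix; it only illustrates the SDP machinery and is not the cSLQC formulation of the paper.

```python
import numpy as np
import cvxpy as cp

# hypothetical 2-state system matrix (not a calibrated building model)
A = np.array([[-0.02, 0.01],
              [0.005, -0.03]])
n = A.shape[0]

# a representative LMI step: find P >= I minimizing trace(P) such that
# A^T P + P A <= -I (a Lyapunov-type stability certificate posed as an SDP)
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),
               A.T @ P + P @ A << -np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("status:", prob.status)
print("P =", np.round(P.value, 2))
```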

  8. Approaches to Education and Training for Kenya's Nuclear Power Program

    International Nuclear Information System (INIS)

    Kalambuka, H.A.

    2014-01-01

    1. Review of status and development of E and T for the nuclear power program in Kenya; 2. Review of challenges in nuclear E and T, and the initiatives being undertaken to mitigate them: • Recommendations for strategic action; 3. State of nuclear skills in the context of key drivers of the global revival in nuclear energy; 4. Point of view: Education in Applied Nuclear and Radiation physics at Nairobi: • Its growth has helped identify the gaps, and relevant practical approaches for realizing the broad spectrum of technical capacity to conduct a national NPP; 5. Proposed approach to support the E and T infrastructure necessary to allow the country to plan, construct, operate, regulate, and safely and securely handle nuclear facilities sustainably; 6. Specified E and T initiatives in the context of the national industrial development strategy and nuclear energy policy and funding for the complete life cycle and technology localization. (author)

  9. Estimating productivity costs using the friction cost approach in practice: a systematic review.

    Science.gov (United States)

    Kigozi, Jesse; Jowett, Sue; Lewis, Martyn; Barton, Pelham; Coast, Joanna

    2016-01-01

    The choice of the most appropriate approach to valuing productivity loss has received much debate in the literature. The friction cost approach has been proposed as a more appropriate alternative to the human capital approach when valuing productivity loss, although its application remains limited. This study reviews the application of the friction cost approach in health economic studies and examines how its use varies in practice across different country settings. A systematic review was performed to identify economic evaluation studies that estimated productivity costs using the friction cost approach and were published in English from 1996 to 2013. A standard template was developed and used to extract information from studies meeting the inclusion criteria. The search yielded 46 studies from 12 countries. Of these, 28 were from the Netherlands. Thirty-five studies reported the length of the friction period used, with only 16 stating explicitly the source of the friction period. Nine studies reported the elasticity correction factor used. The reported friction cost approach methods used to derive productivity costs varied in quality across studies from different countries. Few health economic studies have estimated productivity costs using the friction cost approach. The estimation and reporting of productivity costs using this method appear to differ in quality by country. The review reveals gaps and a lack of clarity in the reporting of methods for friction cost evaluation. Generating reporting guidelines and country-specific parameters for the friction cost approach is recommended if increased application and accuracy of the method are to be realized.
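
    For orientation, the friction cost calculation itself reduces to a capped product of absence duration, wage value and an elasticity correction; all figures in the short sketch below are hypothetical, and the friction period and elasticity are exactly the country-specific parameters the review calls for.

```python
# all figures below are hypothetical placeholders for one absent worker
daily_wage      = 180.0   # gross value of one workday
absence_days    = 120     # sickness absence attributable to the illness
friction_period = 85      # days needed to replace the worker (country-specific)
elasticity      = 0.8     # correction for reduced marginal productivity

productivity_cost = min(absence_days, friction_period) * daily_wage * elasticity
print(f"friction-cost estimate: {productivity_cost:,.0f} per case")
```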

  10. Designing programs to improve diets for maternal and child health: estimating costs and potential dietary impacts of nutrition-sensitive programs in Ethiopia, Nigeria, and India.

    Science.gov (United States)

    Masters, William A; Rosettie, Katherine L; Kranz, Sarah; Danaei, Goodarz; Webb, Patrick; Mozaffarian, Dariush

    2018-05-01

    Improving maternal and child nutrition in resource-poor settings requires effective use of limited resources, but priority-setting is constrained by limited information about program costs and impacts, especially for interventions designed to improve diet quality. This study utilized a mixed methods approach to identify, describe and estimate the potential costs and impacts on child dietary intake of 12 nutrition-sensitive programs in Ethiopia, Nigeria and India. These potential interventions included conditional livestock and cash transfers, media and education, complementary food processing and sales, household production and food pricing programs. Components and costs of each program were identified through a novel participatory process of expert regional consultation followed by validation and calibration from literature searches and comparison with actual budgets. Impacts on child diets were determined by estimating the magnitude of economic mechanisms for dietary change, comprehensive reviews of evaluations and effectiveness of similar programs, and demographic data for each country. Across the 12 programs, total cost per child reached (net present value, purchasing power parity adjusted) ranged very widely: from 0.58 to 2650 USD/year among five programs in Ethiopia; 2.62 to 1919 USD/year among four programs in Nigeria; and 27 to 586 USD/year among three programs in India. When impacts were assessed, the largest dietary improvements were for iron and zinc intakes from a complementary food production program in Ethiopia (increases of 17.7 mg iron/child/day and 7.4 mg zinc/child/day), vitamin A intake from a household animal and horticulture production program in Nigeria (335 RAE/child/day), and animal protein intake from a complementary food processing program in Nigeria (20.0 g/child/day). These results add substantial value to the limited literature on the costs and dietary impacts of nutrition-sensitive interventions targeting children in resource

  11. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
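
    One way to picture the inverse-matrix idea is a sampling-importance-resampling loop: draw natural mortality from a prior, build the implied Leslie matrix, and keep draws whose dominant eigenvalue matches the empirical λ. The Python sketch below does this with invented fecundities, age range and λ values; the actual C. amblyrhynchos life-history inputs and the paper's Monte Carlo design are not reproduced.

```python
import numpy as np

def leslie_lambda(fecundity, survival):
    """Finite rate of population growth (dominant eigenvalue) of a Leslie matrix."""
    n = fecundity.size
    L = np.zeros((n, n))
    L[0, :] = fecundity                                  # age-specific fecundities
    L[np.arange(1, n), np.arange(n - 1)] = survival      # sub-diagonal survival
    return np.max(np.real(np.linalg.eigvals(L)))

rng = np.random.default_rng(3)
ages = 18                                                # hypothetical maximum age
fecundity = np.r_[np.zeros(9), np.full(9, 1.5)]          # toy maturity at age 9
lambda_obs, lambda_sd = 1.05, 0.02                       # assumed empirical growth rate

# SIR: sample natural mortality M from a prior, weight each draw by how well the
# implied lambda matches the empirical estimate, then resample in proportion
M_prior = rng.uniform(0.05, 0.40, 10000)
weights = np.empty_like(M_prior)
for i, M in enumerate(M_prior):
    lam = leslie_lambda(fecundity, np.full(ages - 1, np.exp(-M)))
    weights[i] = np.exp(-0.5 * ((lam - lambda_obs) / lambda_sd) ** 2)

posterior_M = rng.choice(M_prior, size=5000, p=weights / weights.sum())
print(f"posterior natural mortality M: mean {posterior_M.mean():.3f}, "
      f"95% interval {np.percentile(posterior_M, 2.5):.3f}-"
      f"{np.percentile(posterior_M, 97.5):.3f}")
```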

  12. Update to the Fissile Materials Disposition program SST/SGT transportation estimation

    International Nuclear Information System (INIS)

    John Didlake

    1999-01-01

    This report is an update to "Fissile Materials Disposition Program SST/SGT Transportation Estimation," SAND98-8244, June 1998. The Department of Energy Office of Fissile Materials Disposition requested this update as a basis for providing the public with an updated estimation of the number of transportation loads, load miles, and costs associated with the preferred alternative in the Surplus Plutonium Disposition Final Environmental Impact Statement (EIS).

  13. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    Directory of Open Access Journals (Sweden)

    Vjekoslav Klarić

    2011-03-01

    Full Text Available This paper gives a quick overview of the approaches that have been used in research on the shadow economy, starting with definitions of the terms “shadow economy” and “non-observed economy”, with an emphasis on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the non-observed economy. After a glance at the theory behind it, the MIMIC model is then applied to the Croatian economy. Considering the described characteristics of the different methods, a previous estimate of the size of the non-observed economy in Croatia is chosen to provide benchmark values for the MIMIC model. Using these, estimates of the size of the non-observed economy in Croatia during the period 1998-2009 are obtained.

  14. A simple approach to estimate soil organic carbon and soil CO2 emission

    International Nuclear Information System (INIS)

    Abbas, F.

    2013-01-01

    SOC (Soil Organic Carbon) and soil CO2 (Carbon Dioxide) emission are among the indicators of carbon sequestration and hence of global climate change. Researchers in developed countries benefit from advanced technologies to estimate C (Carbon) sequestration. However, access to the latest technologies for conducting such estimates has always been challenging in developing countries. This paper presents a simple and comprehensive approach for estimating SOC and soil CO2 emission from arable and forest soils. The approach includes various protocols that can be followed in the laboratories of research organizations or academic institutions equipped with basic research instruments and technology. The protocols involve soil sampling, sample analysis for selected properties, and the use of the worldwide-tested Rothamsted carbon turnover model. With this approach, it is possible to quantify SOC and soil CO2 emission on a short- and long-term basis for global climate change assessment studies. (author)

  15. A Fuzzy Logic-Based Approach for Estimation of Dwelling Times of Panama Metro Stations

    Directory of Open Access Journals (Sweden)

    Aranzazu Berbey Alvarez

    2015-04-01

    Full Text Available Passenger flow modeling and station dwelling time estimation are significant elements for railway mass transit planning, but system operators usually have limited information to model the passenger flow. In this paper, an artificial-intelligence technique known as fuzzy logic is applied for the estimation of the elements of the origin-destination matrix and the dwelling time of stations in a railway transport system. The fuzzy inference engine used in the algorithm is based on the principle of maximum entropy. The approach considers passengers’ preferences to assign a level of congestion to each car of the train as a function of the properties of the station platforms. This approach is implemented to estimate the passenger flow and dwelling times of the recently opened Line 1 of the Panama Metro. The dwelling times obtained from the simulation are compared to real measurements to validate the approach.

  16. Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes.

    Science.gov (United States)

    Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang

    2016-10-03

    Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.

  17. Top-down and bottom-up approaches for cost estimating new reactor designs

    International Nuclear Information System (INIS)

    Berbey, P.; Gautier, G.M.; Duflo, D.; Rouyer, J.L.

    2007-01-01

    For several years, Generation-4 designs will be 'pre-conceptual' for the less mature concepts and 'preliminary' for the more mature concepts. In this situation, appropriate data for some of the plant systems may be lacking to develop a bottom-up cost estimate. Therefore, a more global approach, the Top-Down Approach (TDA), is needed to help the designers and decision makers in comparing design options. It uses relatively simple models for cost estimating the different parts of a design. The TDA cost estimating effort applies to a whole functional element whose cost is approximated from similar estimates based on existing data, ratios and models, for a given range of variation of the parameters. Modeling is used when direct analogy is not possible. There are two types of models, global and specific ones. Global models are applied to cost modules related to the Code Of Account. Exponential formulae such as Ci = Ai + (Bi x Pi^n) are used when cost data exist for comparable modules in nuclear or other industries. Specific cost models are developed for major specific components of the plant: - process equipment such as the reactor vessel, steam generators or large heat exchangers. - buildings, with formulae estimating the construction cost from a base cost per m3 of building volume. - systems, when unit costs, cost ratios and models are used, depending on the level of detail of the design. The Bottom-Up Approach (BUA), which is based on unit prices coming from similar equipment or from manufacturer consultation, is very valuable and gives better cost estimates than TDA when it can be applied, that is, at a rather late stage of the design. Both approaches are complementary when some parts of the design are detailed enough to be estimated by BUA, and BUA results are used to check TDA results and to improve TDA models. This methodology is applied to the HTR (High Temperature Reactor) concept and to an advanced PWR design.
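
    The following sketch (Python) simply evaluates the exponential scaling form Ci = Ai + Bi x Pi^n quoted above for a few hypothetical cost modules; the module names, coefficients and exponents are illustrative placeholders, not values from the paper.

    ```python
    # Minimal sketch of a top-down-approach (TDA) global cost module of the form
    # Ci = Ai + Bi * Pi**n. All coefficients below are illustrative placeholders.

    def module_cost(a_i: float, b_i: float, p_i: float, n: float) -> float:
        """Cost of one functional module as a function of its sizing parameter p_i."""
        return a_i + b_i * p_i ** n

    # Hypothetical modules: (fixed cost, scale coefficient, sizing parameter, exponent)
    modules = {
        "reactor_vessel":   (5.0e6, 2.0e4, 350.0, 0.8),   # p_i = vessel mass [t]
        "steam_generators": (3.0e6, 1.5e4, 500.0, 0.7),   # p_i = heat duty [MWth]
        "reactor_building": (1.0e6, 4.0e2, 8.0e4, 0.9),   # p_i = building volume [m3]
    }

    total = 0.0
    for name, (a, b, p, n) in modules.items():
        cost = module_cost(a, b, p, n)
        total += cost
        print(f"{name:18s} {cost:12.0f}")
    print(f"{'total':18s} {total:12.0f}")
    ```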

  18. Software documentation and user's manual for fish-impingement sampling design and estimation method computer programs

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-11-01

    This report contains a description of three computer programs that implement the theory of sampling designs and the methods for estimating fish-impingement at the cooling-water intakes of nuclear power plants as described in companion report ANL/ES-60. Complete FORTRAN listings of these programs, named SAMPLE, ESTIMA, and SIZECO, are given and augmented with examples of how they are used

  19. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision making units are of great importance in applied production economic works. This paper estimates technical efficiency from the stochastic frontier model using Jondrow, and Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  20. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    Science.gov (United States)

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  1. Seismic Safety Margins Research Program (Phase I). Project VII. Systems analysis specification of computational approach

    International Nuclear Information System (INIS)

    Wall, I.B.; Kaul, M.K.; Post, R.I.; Tagart, S.W. Jr.; Vinson, T.J.

    1979-02-01

    An initial specification is presented of a computational approach for a probabilistic risk assessment model for use in the Seismic Safety Margins Research Program. This model encompasses the whole seismic calculational chain, from seismic input through soil-structure interaction and transfer functions to the probability of component failure, and the integration of these failures into a system model to estimate the probability of a release of radioactive material to the environment. It is intended that the primary use of this model will be in sensitivity studies to assess the potential conservatism of different modeling elements in the chain and to provide guidance on priorities for research in seismic design of nuclear power plants.

  2. Estimating the size of non-observed economy in Croatia using the MIMIC approach

    OpenAIRE

    Vjekoslav Klaric

    2011-01-01

    This paper gives a quick overview of the approaches that have been used in the research of shadow economy, starting with the definitions of the terms “shadow economy” and “non-observed economy”, with the accent on the ISTAT/Eurostat framework. Several methods for estimating the size of the shadow economy and the non-observed economy are then presented. The emphasis is placed on the MIMIC approach, one of the methods used to estimate the size of the nonobserved economy. After a glance at the ...

  3. A coherent structure approach for parameter estimation in Lagrangian Data Assimilation

    Science.gov (United States)

    Maclean, John; Santitissadeekorn, Naratip; Jones, Christopher K. R. T.

    2017-12-01

    We introduce a data assimilation method to estimate model parameters with observations of passive tracers by directly assimilating Lagrangian Coherent Structures. Our approach differs from the usual Lagrangian Data Assimilation approach, where parameters are estimated based on tracer trajectories. We employ the Approximate Bayesian Computation (ABC) framework to avoid computing the likelihood function of the coherent structure, which is usually unavailable. We solve the ABC by a Sequential Monte Carlo (SMC) method, and use Principal Component Analysis (PCA) to identify the coherent patterns from tracer trajectory data. Our new method shows remarkably improved results compared to the bootstrap particle filter when the physical model exhibits chaotic advection.

  4. Principal component approach in variance component estimation for international sire evaluation

    Directory of Open Access Journals (Sweden)

    Jakobsen Jette

    2011-05-01

    Full Text Available Abstract Background The dairy cattle breeding industry is a highly globalized business, which needs internationally comparable and reliable breeding values of sires. The international Bull Evaluation Service, Interbull, was established in 1983 to respond to this need. Currently, Interbull performs multiple-trait across country evaluations (MACE) for several traits and breeds in dairy cattle and provides international breeding values to its member countries. Estimating parameters for MACE is challenging since the structure of datasets and conventional use of multiple-trait models easily result in over-parameterized genetic covariance matrices. The number of parameters to be estimated can be reduced by taking into account only the leading principal components of the traits considered. For MACE, this is readily implemented in a random regression model. Methods This article compares two principal component approaches to estimate variance components for MACE using real datasets. The methods tested were a REML approach that directly estimates the genetic principal components (direct PC) and the so-called bottom-up REML approach (bottom-up PC), in which traits are sequentially added to the analysis and the statistically significant genetic principal components are retained. Furthermore, this article evaluates the utility of the bottom-up PC approach to determine the appropriate rank of the (co)variance matrix. Results Our study demonstrates the usefulness of both approaches and shows that they can be applied to large multi-country models considering all concerned countries simultaneously. These strategies can thus replace the current practice of estimating the covariance components required through a series of analyses involving selected subsets of traits. Our results support the importance of using the appropriate rank in the genetic (co)variance matrix. Using too low a rank resulted in biased parameter estimates, whereas too high a rank did not result in

  5. A non-traditional multinational approach to construction inspection program

    International Nuclear Information System (INIS)

    Ram, Srinivasan; Smith, M.E.; Walker, T.F.

    2007-01-01

    The next generation of nuclear plants would be fabricated, constructed and licensed in markedly different ways than the present light water reactors. Non-traditional commercial nuclear industry suppliers, shipyards in the USA and international fabricators, would be a source of supply for major components and subsystems. The codes of construction may vary depending upon the prevailing codes and standards used by the respective supplier. Such codes and standards need to be reconciled with the applicable regulations (e.g., 10 CFR 52). A Construction Inspection Program is an integral part of the Quality Assurance Measures required during the Construction Phase of the power plant. In order to achieve the stated cost and schedule goals of the new-build plants, a non-traditional multinational approach would be required. In lieu of the traditional approach of an individual utility inspecting the quality of fabrication and construction, a multi-utility team approach is a method that will be discussed. Likewise, a multinational cooperative licensing approach is suggested, taking advantage of inspectors of the regulatory authority where the component would be built. The multinational approach proposed here is based on the principle of forming teaming agreements between the utilities, vendors and the regulators. For instance, rather than sending Country A's inspectors all over the world, inspectors of the regulator in Country B, where a particular component is being fabricated, would in fact perform the required inspections for Country A's regulator. Similarly, teaming arrangements could be set up between utilities and vendors in different countries. The required oversight for the utility or the vendor could be performed by their counterparts in the country where a particular item is being fabricated.

  6. H∞ Channel Estimation for DS-CDMA Systems: A Partial Difference Equation Approach

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2013-01-01

    Full Text Available In the communications literature, a number of different algorithms have been proposed for channel estimation problems in which the statistics of the channel noise and observation noise are exactly known. In practical systems, however, the channel parameters are often estimated using training sequences, which makes the statistics of the channel noise difficult to obtain. Moreover, the received signals are corrupted not only by ambient noise but also by multiple-access interference, so the statistics of the observation noise are also difficult to obtain. In this paper, we investigate the H∞ channel estimation problem for direct-sequence code-division multiple-access (DS-CDMA) communication systems with time-varying multipath fading channels. The channel estimator is designed by applying a partial difference equation approach together with innovation analysis theory. This method gives a sufficient and necessary condition for the existence of an H∞ channel estimator.

  7. Estimating construction and demolition debris generation using a materials flow analysis approach.

    Science.gov (United States)

    Cochran, K M; Townsend, T G

    2010-11-01

    The magnitude and composition of a region's construction and demolition (C&D) debris should be understood when developing rules, policies and strategies for managing this segment of the solid waste stream. In the US, several national estimates have been conducted using a weight-per-construction-area approximation; national estimates using alternative procedures, such as those used for other segments of the solid waste stream, have not been reported for C&D debris. This paper presents an evaluation of a materials flow analysis (MFA) approach for estimating C&D debris generation and composition for a large region (the US). The consumption of construction materials in the US and typical waste factors used in construction materials purchasing were used to estimate the mass of solid waste generated as a result of construction activities. Debris from demolition activities was predicted from historical construction materials consumption data and estimates of the average service lives of the materials. The MFA approach estimated that approximately 610-780 × 10^6 Mg of C&D debris was generated in 2002. This predicted mass exceeds previous estimates using other C&D debris predictive methodologies and reflects the large waste stream that exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
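
    A minimal sketch of the materials-flow logic described above, assuming construction debris is consumption times a purchasing waste factor and demolition debris is consumption lagged by an average service life; all quantities and factors below are invented for illustration.

    ```python
    # Illustrative materials-flow bookkeeping for C&D debris. All numbers are
    # placeholders, not the paper's data.

    consumption_by_year = {      # material consumption, 10^6 Mg (hypothetical)
        1950: {"concrete": 100.0, "wood": 30.0},
        2002: {"concrete": 400.0, "wood": 60.0},
    }
    waste_factor = {"concrete": 0.05, "wood": 0.10}   # fraction wasted during construction
    service_life = {"concrete": 52, "wood": 52}       # assumed average service life, years

    def cd_debris(year: int) -> float:
        """Total C&D debris generated in `year` (10^6 Mg) under the sketch assumptions."""
        construction = sum(mass * waste_factor[m]
                           for m, mass in consumption_by_year[year].items())
        demolition = 0.0
        for m, life in service_life.items():
            placed = consumption_by_year.get(year - life, {})
            demolition += placed.get(m, 0.0)          # material placed `life` years ago retires now
        return construction + demolition

    print(f"Estimated C&D debris in 2002: {cd_debris(2002):.1f} x 10^6 Mg")
    ```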

  8. Key Aspects of the Federal Direct Loan Program's Cost Estimates: Department of Education. Report to Congressional Requesters.

    Science.gov (United States)

    Calbom, Linda M.; Ashby, Cornelia M.

    Because of concerns about the Department of Education's reliance on estimates to project costs of the William D. Ford Federal Direct Loan Program (FDLP) and a lack of historical information on which to base those estimates, Congress asked the General Accounting Office (GAO) to review how the department develops its cost estimates for the program,…

  9. Budget estimates: Fiscal year 1994. Volume 3: Research and program management

    Science.gov (United States)

    1994-01-01

    The research and program management (R&PM) appropriation provides the salaries, other personnel and related costs, and travel support for NASA's civil service workforce. This FY 1994 budget funds costs associated with 23,623 full-time equivalent (FTE) work years. Budget estimates are provided for all NASA centers by categories such as space station and new technology investments, space flight programs, space science, life and microgravity sciences, advanced concepts and technology, center management and operations support, launch services, mission to planet earth, tracking and data programs, aeronautical research and technology, and safety, reliability, and quality assurance.

  10. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 status. • Area under the ROC curve of the proposed estimation method reached 0.86.
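
    The leave-one-out loop described above could be sketched as follows; the univariate feature selector and logistic-regression classifier stand in for the paper's genetic-algorithm selection and classifier, and the feature matrix and labels are random placeholders.

    ```python
    # Sketch of a LOOCV evaluation over a radiomics feature matrix (patients x features).
    # Selector and classifier are illustrative substitutes, not the paper's pipeline.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(110, 671))          # placeholder for 671 extracted features
    y = rng.integers(0, 2, size=110)         # placeholder for Sanger-sequencing IDH1 labels

    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=110),
                          LogisticRegression(max_iter=1000))

    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])              # refit selection + classifier per fold
        correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    print(f"LOOCV accuracy: {correct / len(y):.2f}")
    ```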

  11. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory. (paper)
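
    A compact sketch of the calibration idea, assuming a toy stand-in for the expensive model η(θ): a Gaussian-process emulator is fitted to an ensemble of model runs and a random-walk Metropolis sampler is then run against the emulator instead of the simulator.

    ```python
    # Emulator-based calibration sketch: everything here (toy model, prior range,
    # noise level, step size) is illustrative, not the paper's DFT application.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def eta(theta):                       # stand-in for the expensive simulator
        return np.sin(3.0 * theta) + 0.5 * theta

    # Ensemble of model runs (the design) and a noisy physical measurement y
    design = np.linspace(0.0, 2.0, 15).reshape(-1, 1)
    runs = eta(design).ravel()
    emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(design, runs)

    theta_true, sigma = 1.2, 0.05
    y_obs = eta(theta_true) + 0.05

    def log_post(theta):
        if not 0.0 <= theta <= 2.0:       # uniform prior on [0, 2]
            return -np.inf
        pred = emulator.predict(np.array([[theta]]))[0]
        return -0.5 * ((y_obs - pred) / sigma) ** 2

    # Random-walk Metropolis on the emulator instead of eta itself
    rng = np.random.default_rng(1)
    theta, chain = 1.0, []
    lp = log_post(theta)
    for _ in range(5000):
        prop = theta + 0.1 * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print(f"posterior mean of theta ~ {np.mean(chain[1000:]):.2f}")
    ```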

  12. Development of a package program for estimating ground level concentrations of radioactive gases

    International Nuclear Information System (INIS)

    Nilkamhang, W.

    1986-01-01

    A package program for estimating the ground level concentration of radioactive gas from an elevated release was developed for use on an IBM PC microcomputer. The main program, GAMMA PLUME NT10, is based on the well known VALLEY MODEL, which is a Fortran computer code intended for mainframe computers. Two other options were added, namely, calculation of the radioactive gas ground level concentration in Ci/m3 and of the dose equivalent rate in mrem/hr. In addition, a menu program and an editor program were developed to render the package easier to use, since options can be readily selected and the input data easily modified as required through the keyboard. The accuracy and reliability of the program are almost identical to those of the mainframe version. The ground level concentration of radioactive radon gas due to ore processing in the nuclear chemistry laboratory of the Department of Nuclear Technology was estimated. In processing radioactive ore at a rate of 2 kg/day, about 35 pCi/s of radioactive gas was released from a 14 m stack. When meteorological data for Don Muang (averaged over the 5 years 1978-1982) were used, the maximum ground level concentration and dose equivalent rate were found to be 0.00094 pCi/m3 and 5.0 x 10^-10 mrem/hr, respectively. The processing time required for the above problem was about 7 minutes for any source case on the IBM PC, which is acceptable for a computer of this class.
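
    The calculation performed by such a package can be illustrated with the textbook Gaussian-plume expression for the ground-level centreline concentration from an elevated release; this is not the VALLEY code itself, and the dispersion-coefficient power laws below are rough placeholders for a single stability class.

    ```python
    # Generic Gaussian-plume sketch: C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 sigma_z^2))
    # for the ground-level centreline concentration. Coefficients are illustrative only.
    import math

    def sigma_y(x_m):           # horizontal dispersion coefficient [m], placeholder fit
        return 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)

    def sigma_z(x_m):           # vertical dispersion coefficient [m], placeholder fit
        return 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)

    def ground_level_conc(q_ci_per_s, u_m_s, h_m, x_m):
        """Ground-level centreline concentration [Ci/m3] at downwind distance x."""
        sy, sz = sigma_y(x_m), sigma_z(x_m)
        return (q_ci_per_s / (math.pi * u_m_s * sy * sz)) * math.exp(-h_m ** 2 / (2.0 * sz ** 2))

    # Hypothetical release: 35 pCi/s from a 14 m stack in a 3 m/s wind
    q = 35e-12                  # Ci/s
    for x in (100.0, 500.0, 1000.0):
        print(f"x = {x:6.0f} m  C = {ground_level_conc(q, 3.0, 14.0, x):.3e} Ci/m3")
    ```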

  13. Empirical estimates in stochastic programs with probability and second order stochastic dominance constraints

    Czech Academy of Sciences Publication Activity Database

    Omelchenko, Vadym; Kaňková, Vlasta

    2015-01-01

    Roč. 84, č. 2 (2015), s. 267-281 ISSN 0862-9544 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords : Stochastic programming problems * empirical estimates * light and heavy tailed distributions * quantiles Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/E/omelchenko-0454495.pdf

  14. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    Full Text Available A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data is to be fed into the program from an external disk file, which should be in Microsoft Excel format. The output contains the sample size, tolerance limit, a list of initial as well as final estimates of the parameters, standard errors, values of the Gauss-Normal equations GN1, GN2 and GN3, the number of iterations, the variance (σ2), the Durbin-Watson statistic, goodness-of-fit measures such as R2 and the D value, the covariance matrix and residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
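
    A Gauss-Newton fit of the Gompertz curve y = a·exp(-b·exp(-c·t)) can be sketched as below (in Python rather than Scilab); the parameterisation, starting values and synthetic data are illustrative and do not follow Reddy (1985) in detail.

    ```python
    # Gauss-Newton least squares for y = a * exp(-b * exp(-c * t)); toy data.
    import numpy as np

    def gompertz(t, a, b, c):
        return a * np.exp(-b * np.exp(-c * t))

    def jacobian(t, a, b, c):
        e = np.exp(-c * t)
        g = np.exp(-b * e)
        # Partial derivatives with respect to a, b, c
        return np.column_stack([g, -a * e * g, a * b * t * e * g])

    def gauss_newton(t, y, beta, tol=1e-8, max_iter=100):
        for _ in range(max_iter):
            r = y - gompertz(t, *beta)                 # residuals
            J = jacobian(t, *beta)
            step, *_ = np.linalg.lstsq(J, r, rcond=None)   # solves (J^T J) step = J^T r
            beta = beta + step
            if np.linalg.norm(step) < tol:
                break
        return beta

    t = np.arange(10, dtype=float)
    y = gompertz(t, 100.0, 4.0, 0.6) + np.random.default_rng(2).normal(0, 1.0, t.size)
    print("estimates (a, b, c):", gauss_newton(t, y, np.array([80.0, 3.0, 0.5])))
    ```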

  16. PROFIT-PC: a program for estimating maximum net revenue from multiproduct harvests in Appalachian hardwoods

    Science.gov (United States)

    Chris B. LeDoux; John E. Baumgras; R. Bryan Selbe

    1989-01-01

    PROFIT-PC is a menu-driven, interactive PC (personal computer) program that estimates the optimum product mix and maximum net harvesting revenue based on projected product yields and stump-to-mill timber harvesting costs. Required inputs include the number of trees per acre by species and 2-inch diameter-at-breast-height class, delivered product prices by species and product...

  17. A genetic programming approach for Burkholderia Pseudomallei diagnostic pattern discovery

    Science.gov (United States)

    Yang, Zheng Rong; Lertmemongkolchai, Ganjana; Tan, Gladys; Felgner, Philip L.; Titball, Richard

    2009-01-01

    Motivation: Finding diagnostic patterns for fighting diseases like Burkholderia pseudomallei infection using biomarkers involves two key issues. First, exhausting all subsets of testable biomarkers (antigens in this context) to find the best one is computationally infeasible. Therefore, a proper optimization approach such as evolutionary computation should be investigated. Second, a properly selected function of the antigens as the diagnostic pattern, which is commonly unknown, is key to diagnostic accuracy and diagnostic effectiveness in clinical use. Results: A conversion function is proposed to convert serum tests of antigens on patients to binary values, based on which Boolean functions are developed as the diagnostic patterns. A genetic programming approach is designed for optimizing the diagnostic patterns in terms of their accuracy and effectiveness. During optimization, the aim is to maximize the coverage (the rate of positive response to antigens) in the infected patients and minimize the coverage in the non-infected patients, while keeping the number of testable antigens used in the Boolean functions as small as possible. The final coverage in the infected patients is 96.55% using 17 of 215 (7.4%) antigens, with zero coverage in the non-infected patients. Among these 17 antigens, BPSL2697 is the most frequently selected one for the diagnosis of Burkholderia pseudomallei. The approach has been evaluated using both cross-validation and jackknife simulation methods, with prediction accuracies of 93% and 92%, respectively. A novel approach is also proposed in this study to evaluate a model with binary data using ROC analysis. Contact: z.r.yang@ex.ac.uk PMID:19561021

  18. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-satellitE Retrievals for GPM, the Global Precipitation Measurement mission) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has high spatial and temporal resolution, it overestimates or underestimates due to some limitations. PERSIANN-CCS estimates rainfall based on information extracted from IR channels at three different temperature threshold levels (220, 235, and 253 K). The algorithm relies only on infrared data, estimating rainfall indirectly from this channel, which causes it to miss rainfall from warm clouds and to produce false estimates for non-precipitating cold clouds. In this research the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels, has been investigated. By using multiple sensors, precipitation can be estimated based on the information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  19. Balancing uncertainty of context in ERP project estimation: an approach and a case study

    NARCIS (Netherlands)

    Daneva, Maia

    2010-01-01

    The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -

  20. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research

    NARCIS (Netherlands)

    Golino, H.F.; Epskamp, S.

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), the Kaiser-Guttman eigenvalue-greater-than-one rule, the minimum average partial procedure (MAP), and the maximum-likelihood approaches that use

  1. ONE OF APPROACHES TO THE ESTIMATION OF FIRMNESS OF TRAFFIC CONTROL SYSTEMS OF MOTOR TRANSPORT

    Directory of Open Access Journals (Sweden)

    D. Labenko

    2009-01-01

    Full Text Available A control system for moving transport objects is considered and described. An approach to estimating the basic index of such control systems – the probability that the system functions with the specified quality under various influences on its elements – is proposed.

  2. A super-resolution approach for uncertainty estimation of PIV measurements

    NARCIS (Netherlands)

    Sciacchitano, A.; Wieneke, B.; Scarano, F.

    2012-01-01

    A super-resolution approach is proposed for the a posteriori uncertainty estimation of PIV measurements. The measured velocity field is employed to determine the displacement of individual particle images. A disparity set is built from the residual distance between paired particle images of

  3. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure, defined as the number of rows in a given decision table minus the number of rows labeled with the most common decision for this table, divided by the total number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes partitioning a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
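
    The uncertainty measure and stopping rule described above translate directly into code; the decision values below are a made-up subtable used only to show the arithmetic.

    ```python
    # Uncertainty of a (sub)table: share of rows NOT labelled with its most common decision.
    from collections import Counter

    def uncertainty(decisions) -> float:
        """(N - N_most_common) / N for the list of row decisions of a (sub)table."""
        n = len(decisions)
        most_common = Counter(decisions).most_common(1)[0][1]
        return (n - most_common) / n

    def is_terminal(decisions, gamma: float) -> bool:
        """Stop partitioning this subtable when its uncertainty is at most gamma."""
        return uncertainty(decisions) <= gamma

    rows = ["yes", "yes", "yes", "no", "maybe"]   # decisions of a hypothetical subtable
    print(uncertainty(rows))                      # 0.4
    print(is_terminal(rows, gamma=0.5))           # True: no further partitioning needed
    ```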

  4. Approach of the estimation for the highest energy of the gamma rays

    International Nuclear Information System (INIS)

    Dumitrescu, Gheorghe

    2004-01-01

    In the last decade the composition of ultra-high-energy cosmic rays has been under debate, and some authors have suggested that a light composition is a related issue. There has also been debate concerning the limit of the energy of gamma rays. Bottom-up approaches suggest a limit at 10^15 eV. Some top-down approaches raise this limit to about 10^20 eV or above. The present paper provides an approach to estimating the limit of the energy of gamma rays using the recent paper of Claus W. Turtur. (author)

  5. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
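
    A sketch of the two-loop least-squares idea for a flat channel: the outer loop searches a grid of Doppler scaling factors and the inner step solves the complex channel gain in closed form for each candidate. The training waveform, noise level and search grid are invented, and this is not the authors' exact algorithm (which iterates rather than grid-searches).

    ```python
    # Data-aided Doppler and gain estimation sketch for a flat channel: rx ~ a * s((1+delta) t).
    import numpy as np

    fs, f0, n = 8000.0, 1000.0, 1024
    t = np.arange(n) / fs
    def resample(delta):
        """Known training waveform time-scaled by (1 + delta)."""
        return np.exp(2j * np.pi * f0 * (1.0 + delta) * t)

    # Simulated reception: unknown gain and Doppler coefficient plus noise
    a_true, delta_true = 0.8 * np.exp(1j * 0.3), 2.0e-4
    rng = np.random.default_rng(3)
    rx = a_true * resample(delta_true) + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

    best = (np.inf, None, None)
    for delta in np.linspace(-5e-4, 5e-4, 201):          # outer loop: Doppler coefficient
        s = resample(delta)
        a_hat = np.vdot(s, rx) / np.vdot(s, s)           # inner step: least-squares channel gain
        err = np.linalg.norm(rx - a_hat * s) ** 2
        if err < best[0]:
            best = (err, delta, a_hat)
    print(f"estimated Doppler coefficient: {best[1]:.1e}, gain magnitude: {abs(best[2]):.2f}")
    ```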

  6. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  7. An approach to the estimation of the value of agricultural residues used as biofuels

    International Nuclear Information System (INIS)

    Kumar, A.; Purohit, P.; Rana, S.; Kandpal, T.C.

    2002-01-01

    A simple demand-side approach for estimating the monetary value of agricultural residues used as biofuels is proposed. Some of the important issues involved in the use of biomass feedstocks in coal-fired boilers are briefly discussed along with their implications for the maximum acceptable price estimates for the agricultural residues. Results of some typical calculations are analysed along with the estimates obtained on the basis of a supply-side approach (based on production cost) developed earlier. The prevailing market prices of some agricultural residues used as feedstocks for briquetting are also indicated. The results obtained can be used as preliminary indicators for identifying niche areas for immediate and short-term utilization of agricultural residues in boilers for process heating and power generation. (author)

  8. ANN Based Approach for Estimation of Construction Costs of Sports Fields

    Directory of Open Access Journals (Sweden)

    Michał Juszczyk

    2018-01-01

    Full Text Available Cost estimates are essential for the success of construction projects. Neural networks, as tools of artificial intelligence, offer significant potential in this field. Applying neural networks, however, requires dedicated studies due to the specifics of different kinds of facilities. This paper presents a proposed approach to the estimation of construction costs of sports fields based on neural networks. The general applicability of artificial neural networks to the formulated cost estimation problem is investigated. The applicability of multilayer perceptron networks is confirmed by the results of initial training of a set of various artificial neural networks. Moreover, one network was tailored to map the relationship between the total cost of construction works and selected cost predictors which are characteristic of sports fields. Its prediction quality and accuracy were assessed positively. The research results support the proposed approach.
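
    A minimal multilayer-perceptron sketch of the cost mapping described above; the predictors, network size and synthetic data are placeholders rather than the paper's model.

    ```python
    # MLP regression sketch: sports-field predictors -> total construction cost (synthetic data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    # Hypothetical predictors: field area [m2], surface type code, drainage length [m]
    X = np.column_stack([rng.uniform(800, 8000, 200),
                         rng.integers(0, 3, 200),
                         rng.uniform(50, 500, 200)])
    cost = 40.0 * X[:, 0] + 15000.0 * X[:, 1] + 120.0 * X[:, 2] + rng.normal(0, 5e3, 200)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
    model.fit(X[:150], cost[:150])                      # train on 150 fields
    print("R^2 on held-out fields:", round(model.score(X[150:], cost[150:]), 2))
    ```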

  9. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  10. A mathematical programming approach for sequential clustering of dynamic networks

    Science.gov (United States)

    Silva, Jonathan C.; Bennett, Laura; Papageorgiou, Lazaros G.; Tsoka, Sophia

    2016-02-01

    A common analysis performed on dynamic networks is community structure detection, a challenging problem that aims to track the temporal evolution of network modules. An emerging area in this field is evolutionary clustering, where the community structure of a network snapshot is identified by taking into account both its current state as well as previous time points. Based on this concept, we have developed a mixed integer non-linear programming (MINLP) model, SeqMod, that sequentially clusters each snapshot of a dynamic network. The modularity metric is used to determine the quality of community structure of the current snapshot, and the historical cost is accounted for by optimising the number of node pairs co-clustered at the previous time point that remain so in the current snapshot partition. Our method is tested on social networks of interactions among high school students, college students and members of the Brazilian Congress. We show that, for an adequate parameter setting, our algorithm detects the classes to which these students belong more accurately than partitioning each time step individually or partitioning the aggregated snapshots. Our method also detects drastic discontinuities in interaction patterns across network snapshots. Finally, we present comparative results with similar community detection methods for time-dependent networks from the literature. Overall, we illustrate the applicability of mathematical programming as a flexible, adaptable and systematic approach for these community detection problems. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme.
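
    The two ingredients SeqMod trades off can be illustrated as below: the modularity of the current snapshot partition and a historical cost counted as the number of node pairs co-clustered at the previous time point that remain co-clustered. The MINLP optimisation itself is not reproduced, and the graph and partitions are toy examples.

    ```python
    # Modularity of the current partition plus a simple "historical cost" count.
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import modularity

    G_t = nx.karate_club_graph()                       # stand-in for snapshot t
    partition_t = [set(range(0, 17)), set(range(17, 34))]      # candidate partition at t
    partition_prev = [set(range(0, 15)), set(range(15, 34))]   # partition found at t-1

    quality = modularity(G_t, partition_t)

    def co_clustered_pairs(partition):
        """All node pairs that share a community in the given partition."""
        return {pair for block in partition for pair in combinations(sorted(block), 2)}

    history = len(co_clustered_pairs(partition_t) & co_clustered_pairs(partition_prev))
    print(f"modularity at t: {quality:.3f}, preserved co-clustered pairs: {history}")
    ```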

  11. Specific Cell (Re-)Programming: Approaches and Perspectives.

    Science.gov (United States)

    Hausburg, Frauke; Jung, Julia Jeannine; David, Robert

    2018-01-01

    Many disorders are manifested by dysfunction of key cell types or their disturbed integration in complex organs. Thereby, adult organ systems often bear restricted self-renewal potential and are incapable of achieving functional regeneration. This underlies the need for novel strategies in the field of cell (re-)programming-based regenerative medicine as well as for drug development in vitro. The regenerative field has been hampered by restricted availability of adult stem cells and the potentially hazardous features of pluripotent embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs). Moreover, ethical concerns and legal restrictions regarding the generation and use of ESCs still exist. The establishment of direct reprogramming protocols for various therapeutically valuable somatic cell types has overcome some of these limitations. Meanwhile, new perspectives for safe and efficient generation of different specified somatic cell types have emerged from numerous approaches relying on exogenous expression of lineage-specific transcription factors, coding and noncoding RNAs, and chemical compounds.It should be of highest priority to develop protocols for the production of mature and physiologically functional cells with properties ideally matching those of their endogenous counterparts. Their availability can bring together basic research, drug screening, safety testing, and ultimately clinical trials. Here, we highlight the remarkable successes in cellular (re-)programming, which have greatly advanced the field of regenerative medicine in recent years. In particular, we review recent progress on the generation of cardiomyocyte subtypes, with a focus on cardiac pacemaker cells. Graphical Abstract.

  12. Impacts of the proposed program approach on waste stream characteristics

    International Nuclear Information System (INIS)

    King, J.F.; Fleming, M.E.

    1995-01-01

    The evolution of the U.S. Department of Energy's Civilian Radioactive Waste Management System (CRWMS) over the past few years has led to significant changes in key system scenario assumptions. This paper describes the effects of two recent changes on waste stream characteristics, focusing primarily on repository impacts. First, the multi-purpose canister (MPC) concept has been included in the Program baseline. The change from a bare fuel system to one including an MPC-based system forces the fuel assemblies initially loaded together in MPCs to remain together throughout the system. Second, current system analyses also assume a system without a monitored retrievable storage (MRS) facility, with the understanding that an MRS would be reincorporated if a site becomes available. Together these two changes have significant impacts on waste stream characteristics. These two changes create a class of scenarios referred to generally as Program Approach (PA) scenarios. Scenarios based on the previously assumed system, bare fuel with an MRS, are referred to here as the Previous Reference (PR) system scenarios. The analysis compares scenarios with otherwise consistent assumptions and presents summary comparisons. The number of disposal containers and the waste heat output are determined for eight PA and PR scenarios.

  13. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    Science.gov (United States)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have been carried out on this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, namely a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
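
    A sketch of the lognormal bookkeeping described above: posit a lognormal pre-entrainment PSD, split it at a threshold size of incipient motion, and report the entrained mass fraction; the parameters are illustrative and the mixed-size correction of the threshold is omitted.

    ```python
    # Lognormal PSD split at a threshold grain size of incipient motion (toy parameters).
    from scipy import stats

    # Pre-entrainment PSD: lognormal in grain size d [mm]
    median_d, sigma_ln = 2.0, 0.8
    psd = stats.lognorm(s=sigma_ln, scale=median_d)

    d_threshold = 3.0                                   # threshold size of incipient motion [mm]
    mobile_fraction = psd.cdf(d_threshold)              # grains finer than the threshold are entrained
    print(f"size-dependent mobility (mass fraction entrained): {mobile_fraction:.2f}")

    # Medians of the entrained load and of the post-entrainment bed (truncated pieces of the PSD)
    median_entrained = psd.ppf(0.5 * mobile_fraction)
    median_residual = psd.ppf(mobile_fraction + 0.5 * (1.0 - mobile_fraction))
    print(f"median entrained size: {median_entrained:.2f} mm, "
          f"median residual bed size: {median_residual:.2f} mm")
    ```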

  14. A type-driven approach to concrete meta programming.

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen)

    2005-01-01

    Applications that manipulate programs as data are called meta programs. Examples of meta programs are compilers, source-to-source translators and code generators. Meta programming can be supported by the ability to represent program fragments in concrete syntax instead of abstract

  15. A combined segmenting and non-segmenting approach to signal quality estimation for ambulatory photoplethysmography

    International Nuclear Information System (INIS)

    Wander, J D; Morris, D

    2014-01-01

    Continuous cardiac monitoring of healthy and unhealthy patients can help us understand the progression of heart disease and enable early treatment. Optical pulse sensing is an excellent candidate for continuous mobile monitoring of cardiovascular health indicators, but optical pulse signals are susceptible to corruption from a number of noise sources, including motion artifact. Therefore, before higher-level health indicators can be reliably computed, corrupted data must be separated from valid data. This is an especially difficult task in the presence of artifact caused by ambulation (e.g. walking or jogging), which shares significant spectral energy with the true pulsatile signal. In this manuscript, we present a machine-learning-based system for automated estimation of signal quality of optical pulse signals that performs well in the presence of periodic artifact. We hypothesized that signal processing methods that identified individual heart beats (segmenting approaches) would be more error-prone than methods that did not (non-segmenting approaches) when applied to data contaminated by periodic artifact. We further hypothesized that a fusion of segmenting and non-segmenting approaches would outperform either approach alone. Therefore, we developed a novel non-segmenting approach to signal quality estimation that we then utilized in combination with a traditional segmenting approach. Using this system we were able to robustly detect differences in signal quality as labeled by expert human raters (Pearson’s r = 0.9263). We then validated our original hypotheses by demonstrating that our non-segmenting approach outperformed the segmenting approach in the presence of contaminated signal, and that the combined system outperformed either individually. Lastly, as an example, we demonstrated the utility of our signal quality estimation system in evaluating the trustworthiness of heart rate measurements derived from optical pulse signals. (paper)

  16. Development of a Portfolio Management Approach with Case Study of the NASA Airspace Systems Program

    Science.gov (United States)

    Neitzke, Kurt W.; Hartman, Christopher L.

    2012-01-01

    A portfolio management approach was developed for the National Aeronautics and Space Administration's (NASA's) Airspace Systems Program (ASP). The purpose was to help inform ASP leadership regarding future investment decisions related to its existing portfolio of advanced technology concepts and capabilities (C/Cs) currently under development and to potentially identify new opportunities. The portfolio management approach is general in form and is extensible to other advanced technology development programs. It focuses on individual C/Cs and consists of three parts: 1) concept of operations (con-ops) development, 2) safety impact assessment, and 3) benefit-cost-risk (B-C-R) assessment. The first two parts are recommendations to ASP leaders and will be discussed only briefly, while the B-C-R part relates to the development of an assessment capability and will be discussed in greater detail. The B-C-R assessment capability enables estimation of the relative value of each C/C as compared with all other C/Cs in the ASP portfolio. Value is expressed in terms of a composite weighted utility function (WUF) rating, based on estimated benefits, costs, and risks. Benefit utility is estimated relative to achieving key NAS performance objectives, which are outlined in the ASP Strategic Plan [1]. Risk utility focuses on C/C development and implementation risk, while cost utility focuses on the development and implementation portions of overall C/C life-cycle costs. Initial composite ratings of the ASP C/Cs were successfully generated; however, the limited availability of B-C-R information, which is used as input to the WUF model, reduced the meaningfulness of these initial investment ratings. Development of this approach, however, defined specific information-generation requirements for ASP C/C developers that will increase the meaningfulness of future B-C-R ratings.
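
    A toy version of the composite weighted-utility-function (WUF) rating: each concept/capability gets benefit, cost and risk utilities, and a weighted sum ranks the portfolio. The weights and utility scores below are invented purely to show the arithmetic, not ASP values.

    ```python
    # Composite weighted-utility-function rating for a small hypothetical portfolio.
    weights = {"benefit": 0.5, "cost": 0.25, "risk": 0.25}   # must sum to 1

    candidates = {   # utility scores per C/C on [0, 1]; higher is better for all three
        "concept_A": {"benefit": 0.8, "cost": 0.4, "risk": 0.6},
        "concept_B": {"benefit": 0.6, "cost": 0.7, "risk": 0.7},
        "concept_C": {"benefit": 0.9, "cost": 0.3, "risk": 0.4},
    }

    def wuf(scores: dict) -> float:
        """Composite rating as a weighted sum of the three utilities."""
        return sum(weights[k] * scores[k] for k in weights)

    for name, scores in sorted(candidates.items(), key=lambda kv: wuf(kv[1]), reverse=True):
        print(f"{name}: composite WUF rating = {wuf(scores):.2f}")
    ```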

  17. Unified approach for estimating the probabilistic design S-N curves of three commonly used fatigue stress-life models

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Wang Jinnuo; Gao Qing

    2001-01-01

    A unified approach, referred to as the general maximum likelihood method, is presented for estimating the probabilistic design S-N curves and their confidence bounds for the three commonly used fatigue stress-life models, namely the three-parameter, Langer and Basquin models. The curves are described by a general form of the mean and standard deviation S-N curves of the logarithm of fatigue life. Unlike existing methods, i.e. the conventional method and the classical maximum likelihood method, the present approach considers the statistical characteristics of the whole set of test data. The parameters of the mean curve are first estimated by the least squares method, and then the parameters of the standard deviation curve are evaluated by a mathematical programming method so as to agree with the maximum likelihood principle. The fit of the curves is assessed by the fitted correlation coefficient, the total fitted standard error and the confidence bounds. Application to the virtual stress amplitude-crack initiation life data of a nuclear engineering material, Chinese 1Cr18Ni9Ti stainless steel pipe-weld metal, has indicated the validity of the approach for S-N data where both S and N behave as random variables. Application to the two sets of S-N data for Chinese 45 carbon steel notched specimens (k_t = 2.0) has indicated the validity of the present approach for test results obtained from group fatigue tests and from maximum likelihood fatigue tests, respectively. In these applications, it was found that in general the fit is best for the three-parameter model, slightly inferior for the Langer relation and poor for the Basquin equation. Relative to the existing methods, the present approach gives a better fit. In addition, the possible non-conservative predictions of the existing methods, which result from the influence of local statistical characteristics of the data, are also overcome by the present approach.
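
    A minimal illustration of the two-step idea, using the simplest of the three models (Basquin, log N = A + B log S): the mean curve comes from least squares and the log-life standard deviation is then estimated from the residuals; taking that standard deviation as constant is a simplification of the paper's general formulation, and the data are invented.

    ```python
    # Mean S-N curve by least squares plus a constant log-life scatter estimate (toy data).
    import numpy as np

    S = np.array([400., 350., 300., 260., 230., 200.])        # stress amplitude [MPa], illustrative
    N = np.array([2.1e4, 5.3e4, 1.4e5, 3.9e5, 9.1e5, 2.6e6])  # cycles to failure, illustrative

    x, y = np.log10(S), np.log10(N)
    B, A = np.polyfit(x, y, 1)                # mean curve: log10 N = A + B log10 S
    resid = y - (A + B * x)
    sigma = np.sqrt(np.mean(resid ** 2))      # maximum-likelihood estimate of the log-life std dev

    def design_life(stress, k_sigma=2.0):
        """Design curve at mean minus k_sigma standard deviations in log-life."""
        return 10 ** (A + B * np.log10(stress) - k_sigma * sigma)

    print(f"A = {A:.2f}, B = {B:.2f}, sigma = {sigma:.3f}")
    print(f"design life at 300 MPa (mean - 2 sigma): {design_life(300.0):.2e} cycles")
    ```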

  18. Electric generating capacity planning: A nonlinear programming approach

    Energy Technology Data Exchange (ETDEWEB)

    Yakin, M.Z.; McFarland, J.W.

    1987-02-01

    This paper presents a nonlinear programming approach for long-range generating capacity expansion planning in electrical power systems. The objective in the model is the minimization of total cost consisting of investment cost plus generation cost for a multi-year planning horizon. Reliability constraints are imposed by using standard and practical reserve margin requirements. State equations representing the dynamic aspect of the problem are included. The electricity demand (load) and plant availabilities are treated as random variables, and the method of cumulants is used to calculate the expected energy generated by each plant in each year of the planning horizon. The resulting model has a (highly) nonlinear objective function and linear constraints. The planning model is solved over the multiyear planning horizon instead of decomposing it into one-year period problems. This approach helps the utility decision maker to carry out extensive sensitivity analysis easily. A case study example is provided using EPRI test data. Relationships among the reserve margin, total cost and surplus energy generating capacity over the planning horizon are explored by analyzing the model.

  19. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    Science.gov (United States)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined which utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  20. FRACTURE MECHANICS APPROACH TO ESTIMATE FATIGUE LIVES OF WELDED LAP-SHEAR SPECIMENS

    Energy Technology Data Exchange (ETDEWEB)

    Lam, P.; Michigan, J.

    2014-04-25

    A full range of stress intensity factor solutions for a kinked crack is developed as a function of weld width and the sheet thickness. When used with the associated main crack solutions (global stress intensity factors) in terms of the applied load and specimen geometry, the fatigue lives can be estimated for the laser-welded lap-shear specimens. The estimations are in good agreement with the experimental data. A classical solution for an infinitesimal kink is also employed in the approach. However, the life predictions tend to overestimate the actual fatigue lives. The traditional life estimations with the structural stress along with the experimental stress-fatigue life data (S-N curve) are also provided. In this case, the estimations only agree with the experimental data under higher load conditions.

  1. A brute-force spectral approach for wave estimation using measured vessel motions

    DEFF Research Database (Denmark)

    Nielsen, Ulrik D.; Brodtkorb, Astrid H.; Sørensen, Asgeir J.

    2018-01-01

    The article introduces a spectral procedure for sea state estimation based on measurements of motion responses of a ship in a short-crested seaway. The procedure relies fundamentally on the wave buoy analogy, but the wave spectrum estimate is obtained in a direct - brute-force - approach, and the procedure is simple in its mathematical formulation. The actual formulation extends another recent work by including vessel advance speed and short-crested seas. Due to its simplicity, the procedure is computationally efficient, providing wave spectrum estimates in the order of a few seconds, and the estimation procedure will therefore be appealing to applications related to real-time, onboard control and decision support systems for safe and efficient marine operations. The procedure's performance is evaluated by use of numerical simulation of motion measurements, and it is shown that accurate wave spectrum estimates can be obtained.

  2. Lift/cruise fan V/STOL technology aircraft design definition study. Volume 3: Development program and budgetary estimates

    Science.gov (United States)

    Obrien, W. J.

    1976-01-01

    The aircraft development program, budgetary estimates in CY 1976 dollars, and cost reduction program variants are presented. Detailed cost matrices are also provided for the mechanical transmission system, turbotip transmission system, and the thrust vector hoods and yaw doors.

  3. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and

  4. Training programs for the systems approach to nuclear security

    International Nuclear Information System (INIS)

    Ellis, D.

    2005-01-01

    Full text: In support of United States Government (USG) and International Atomic Energy Agency (IAEA) nuclear security programs, Sandia National Laboratories (SNL) has advocated and practiced a risk-based, systematic approach to nuclear security. The risk equation has been developed and implemented as the basis for a performance-based methodology for the design and evaluation of physical protection systems against a design basis threat (DBT) for theft and sabotage of nuclear and/or radiological materials. Integrated systems must include technology, people, and the man-machine interface. A critical aspect of the human element is training on the systems-approach for all the stakeholders in nuclear security. Current training courses and workshops have been very beneficial but are still rather limited in scope. SNL has developed two primary international classes - the international training course on the physical protection of nuclear facilities and materials, and the design basis threat methodology workshop. SNL is also completing the development of three new courses that will be offered and presented in the near term. They are vital area identification methodology focused on nuclear power plants to aid in their protection against radiological sabotage, insider threat analysis methodology and protection schemes, and security foundations for competent authority and facility operator stakeholders who are not security professionals. In the long term, we envision a comprehensive nuclear security curriculum that spans policy and technology, regulators and operators, introductory and expert levels, classroom and laboratory/field, and local and offsite training options. This training curriculum will be developed in concert with a nuclear security series of guidance documents that is expected to be forthcoming from the IAEA. It is important to note that while appropriate implementation of systems based on such training and documentation can improve the risk reduction, such a

  5. Hankin and Reeves' approach to estimating fish abundance in small streams: limitations and potential options; TOPICAL

    International Nuclear Information System (INIS)

    Thompson, William L.

    2000-01-01

    Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62%-86% less than nominal when they did not, except where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20%-41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, the average coefficient of variation was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (mean CV = 0.27 [SE = 0.0004]) than when they did (mean CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish in the removal model or another mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark to evaluate bias and precision of population estimates.
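
    For readers unfamiliar with the underlying estimator, the sketch below implements a generic two-pass removal estimate and checks its behaviour by simulation at high and low detection probabilities. The estimator form and the simulation settings are illustrative assumptions, not the author's simulation design.

```python
# Minimal sketch of a two-pass removal estimator and a simulation check of its
# behaviour when capture probability varies; a generic illustration only.
import numpy as np

rng = np.random.default_rng(0)

def removal_estimate(c1, c2):
    """Two-pass removal estimator: N_hat = c1^2 / (c1 - c2), valid when c1 > c2."""
    return c1 ** 2 / (c1 - c2) if c1 > c2 else float("nan")

def simulate(N=200, p=0.6, n_rep=5000):
    est = []
    for _ in range(n_rep):
        c1 = rng.binomial(N, p)          # first-pass removals
        c2 = rng.binomial(N - c1, p)     # second-pass removals
        est.append(removal_estimate(c1, c2))
    est = np.array(est)
    return np.nanmean(est), np.nanstd(est)

for p in (0.85, 0.5):                    # high vs. low detection probability
    mean, sd = simulate(p=p)
    print(f"p={p}: mean estimate {mean:.1f} (true 200), sd {sd:.1f}")
```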

  6. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the

  7. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the true IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)
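
    The evaluation loop described (feature selection followed by leave-one-out classification and accuracy/sensitivity/specificity reporting) can be sketched as below. Synthetic features stand in for the radiomics features, and simple univariate selection replaces the improved genetic algorithm used in the paper.

```python
# Sketch of a select-then-classify pipeline evaluated with leave-one-out
# cross-validation; synthetic data, illustrative feature selection only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=110, n_features=671, n_informative=20,
                           random_state=0)     # 110 patients, 671 features (toy)

clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=110),
                    SVC(kernel="linear"))

pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):      # refit selection inside each fold
    clf.fit(X[train], y[train])
    pred[test] = clf.predict(X[test])

tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
print("accuracy", (tp + tn) / y.size,
      "sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
```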

  8. Estimating data from figures with a Web-based program: Considerations for a systematic review.

    Science.gov (United States)

    Burda, Brittany U; O'Connor, Elizabeth A; Webber, Elizabeth M; Redmond, Nadia; Perdue, Leslie A

    2017-09-01

    Systematic reviewers often encounter incomplete or missing data, and the information desired may be difficult to obtain from a study author. Thus, systematic reviewers may have to resort to estimating data from figures with little or no raw data in a study's corresponding text or tables. We discuss a case study in which participants used a publicly available Web-based program, called webplotdigitizer, to estimate data from 2 figures. We evaluated the intraclass coefficient and the accuracy of the estimates relative to the true data to inform considerations when using estimated data from figures in systematic reviews. The estimates for both figures were consistent, although the distribution of estimates in the figure of a continuous outcome was slightly higher. For the continuous outcome, the percent difference ranged from 0.23% to 30.35%, while the percent difference of the event rate ranged from 0.22% to 8.92%. For both figures, the intraclass coefficient was excellent (>0.95). Systematic reviewers should be transparent when estimating data from figures where the information cannot be obtained from study authors, and should perform sensitivity analyses of pooled results to reduce bias. Copyright © 2017 John Wiley & Sons, Ltd.
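
    A minimal sketch of the two agreement summaries mentioned, assuming made-up readings from three raters on three figure data points: percent difference from the true values and a one-way random-effects ICC(1,1). The toy data and the specific ICC variant are assumptions; the paper does not spell out the form used.

```python
# Sketch: percent difference from the true values and a one-way random-effects
# ICC(1,1) for repeated figure readings. All numbers are made up.
import numpy as np

true_values = np.array([12.4, 11.8, 13.0])        # true data points behind the figure
estimates = np.array([
    [12.3, 12.6, 12.5],                           # three raters' readings, point 1
    [11.9, 12.1, 12.0],                           # point 2
    [12.8, 12.7, 12.9],                           # point 3
])

percent_diff = 100.0 * np.abs(estimates - true_values[:, None]) / true_values[:, None]
print("max percent difference:", percent_diff.max().round(2))

n, k = estimates.shape                            # targets, raters
grand = estimates.mean()
ms_between = k * ((estimates.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((estimates - estimates.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print("ICC(1,1):", round(icc_1_1, 3))
```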

  9. Conditional random slope: A new approach for estimating individual child growth velocity in epidemiological research.

    Science.gov (United States)

    Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E

    2017-09-10

    Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
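
    The sketch below fits a linear mixed-effects model with a baseline-by-time interaction and a child-specific random slope, which is the general structure described for the CRS model; the synthetic data, column names, and exact formula are assumptions rather than the authors' specification.

```python
# Sketch of a mixed model with a baseline-by-time interaction and a random
# slope, in the spirit of the conditional random slope (CRS) idea. Synthetic
# data; the paper's exact specification may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_children, n_visits = 200, 4
child = np.repeat(np.arange(n_children), n_visits)
time = np.tile(np.linspace(0, 12, n_visits), n_children)        # age in months
baseline = np.repeat(rng.normal(0, 1, n_children), n_visits)    # baseline z-score
slope = np.repeat(rng.normal(0.1, 0.05, n_children), n_visits)  # child-specific velocity
y = baseline + (slope - 0.02 * baseline) * time + rng.normal(0, 0.2, child.size)

data = pd.DataFrame({"child": child, "time": time, "baseline": baseline, "y": y})

# Fixed effects: time and baseline:time (the RTM adjustment); random slope on time.
model = smf.mixedlm("y ~ time + baseline:time", data,
                    groups=data["child"], re_formula="~time")
fit = model.fit()
print(fit.params[["time", "baseline:time"]])
```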

  10. Automatic Sky View Factor Estimation from Street View Photographs—A Big Data Approach

    Directory of Open Access Journals (Sweden)

    Jianming Liang

    2017-04-01

    Full Text Available Hemispherical (fisheye) photography is a well-established approach for estimating the sky view factor (SVF). High-resolution urban models from LiDAR and oblique airborne photogrammetry can provide continuous SVF estimates over a large urban area, but such data are not always available and are difficult to acquire. Street view panoramas have become widely available in urban areas worldwide: Google Street View (GSV) maintains a global network of panoramas excluding China and several other countries; Baidu Street View (BSV) and Tencent Street View (TSV) focus their panorama acquisition efforts within China, and have covered hundreds of cities therein. In this paper, we approach this issue from a big data perspective by presenting and validating a method for automatic estimation of SVF from massive amounts of street view photographs. Comparisons were made with SVF estimates derived from two independent sources: a LiDAR-based Digital Surface Model (DSM) and an oblique airborne photogrammetry-based 3D city model (OAP3D), resulting in a correlation coefficient of 0.863 and 0.987, respectively. The comparisons demonstrated the capacity of the proposed method to provide reliable SVF estimates. Additionally, we present an application of the proposed method with about 12,000 GSV panoramas to characterize the spatial distribution of SVF over Manhattan Island in New York City. Although this is a proof-of-concept study, it has shown the potential of the proposed approach to assist urban climate and urban planning research. However, further development is needed before this approach can be finally delivered to the urban climate and urban planning communities for practical applications.
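
    As a simplified illustration, the sketch below computes an SVF value from a binary sky mask of a hemispherical image by weighting each pixel's contribution with cos(zenith)·sin(zenith); the equiangular-projection assumption and the toy mask stand in for the paper's panorama reprojection and sky-classification pipeline.

```python
# Minimal sketch: sky view factor from a binary sky mask of an equiangular
# (equidistant) fisheye image. The projection assumption and the toy mask are
# placeholders for the paper's panorama processing and sky classification.
import numpy as np

def sky_view_factor(sky_mask):
    """sky_mask: square boolean array, True where the pixel shows sky."""
    n = sky_mask.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    cx = cy = (n - 1) / 2.0
    r = np.hypot(xx - cx, yy - cy) / (n / 2.0)     # 0 at zenith, 1 at the horizon
    inside = r <= 1.0
    zenith = r * (np.pi / 2.0)                     # equiangular projection
    # View-factor weight cos(z)*sin(z), divided by r to correct for the fisheye
    # pixel density when summing over image pixels instead of (zenith, azimuth).
    weight = np.cos(zenith) * np.sin(zenith) / np.maximum(r, 1e-9)
    return (weight * sky_mask)[inside].sum() / weight[inside].sum()

# Toy example: half of the hemisphere is sky, half is obstructed.
mask = np.zeros((512, 512), dtype=bool)
mask[:256, :] = True
print(round(sky_view_factor(mask), 3))             # ~0.5 for this symmetric case
```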

  11. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling the soil water retention curve (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with pedologically similar soils to those used for its estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles, three under forest and three under pasture. The parameters of the nearby site were accepted when

  12. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, they must be estimated by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation in which different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely

  13. RELAP5 simulation of surge line break accident using combined and best estimate plus uncertainty approaches

    International Nuclear Information System (INIS)

    Kristof, Marian; Kliment, Tomas; Petruzzi, Alessandro; Lipka, Jozef

    2009-01-01

    Licensing calculations in a majority of countries worldwide still rely on a combined approach: a best-estimate computer code is applied without evaluation of the code model uncertainties, together with conservative assumptions on initial and boundary conditions, on the availability of systems and components, and additional conservative assumptions. However, the best estimate plus uncertainty (BEPU) approach, representing the state of the art in safety analysis, has a clear potential to replace the currently used combined approach. There are several applications of the BEPU approach in licensing calculations, but some questions remain under discussion, notably from the regulatory point of view. In order to find a proper solution to these questions and to support the BEPU approach in becoming a standard approach for licensing calculations, a broad comparison of both approaches for various transients is necessary. Results of one such comparison, on the example of the VVER-440/213 NPP pressurizer surge line break event, are described in this paper. A Kv-scaled simulation based on the PH4-SLB experiment from the PMK-2 integral test facility, applying its volume and power scaling factor, is performed for qualitative assessment of the RELAP5 computer code calculation using the VVER-440/213 plant model. Existing hardware differences are identified and explained. The CIAU method is adopted for performing the uncertainty evaluation. Results using the combined and BEPU approaches are in agreement with the experimental values from the PMK-2 facility. Only a minimal difference between the combined and BEPU approaches has been observed in the evaluation of the safety margins for the peak cladding temperature. Benefits of the CIAU uncertainty method are highlighted.

  14. Estimating a planetary magnetic field with time-dependent global MHD simulations using an adjoint approach

    Directory of Open Access Journals (Sweden)

    C. Nabert

    2017-05-01

    Full Text Available The interaction of the solar wind with a planetary magnetic field causes electrical currents that modify the magnetic field distribution around the planet. We present an approach to estimating the planetary magnetic field from in situ spacecraft data using magnetohydrodynamic (MHD) simulations. The method is developed with respect to the upcoming BepiColombo mission to planet Mercury, which is aimed at determining the planet's magnetic field and its interior electrical conductivity distribution. In contrast to the widely used empirical models, global MHD simulations allow the calculation of the strongly time-dependent interaction process of the solar wind with the planet. As a first approach, we use a simple MHD simulation code that includes time-dependent solar wind and magnetic field parameters. The planetary parameters are estimated by minimizing the misfit of spacecraft data and simulation results with a gradient-based optimization. As the calculation of gradients with respect to many parameters is usually very time-consuming, we investigate the application of an adjoint MHD model. This adjoint MHD model is generated by an automatic differentiation tool to compute the gradients efficiently. The computational cost for determining the gradient with an adjoint approach is nearly independent of the number of parameters. Our method is validated by application to THEMIS (Time History of Events and Macroscale Interactions during Substorms) magnetosheath data to estimate Earth's dipole moment.

  15. Aspects of using a best-estimate approach for VVER safety analysis in reactivity initiated accidents

    Energy Technology Data Exchange (ETDEWEB)

    Ovdiienko, Iurii; Bilodid, Yevgen; Ieremenko, Maksym [State Scientific and Technical Centre on Nuclear and Radiation, Safety (SSTC N and RS), Kyiv (Ukraine); Loetsch, Thomas [TUEV SUED Industrie Service GmbH, Energie und Systeme, Muenchen (Germany)

    2016-09-15

    At present, Ukraine faces the problem of small margins to the acceptance criteria in connection with the implementation of a conservative approach for safety evaluations. The problem is particularly topical when conducting feasibility analyses of power up-rating for Ukrainian nuclear power plants. This situation requires the implementation of a best-estimate approach on the basis of an uncertainty analysis. For some kinds of accidents, such as loss-of-coolant accidents (LOCA), the best-estimate approach is more or less developed and established. However, for reactivity-initiated accident (RIA) analysis, the application of a best-estimate method can be problematic. A regulatory document in Ukraine defines a nomenclature of neutronics calculations and so-called "generic safety parameters" which should be used as boundary conditions for all VVER-1000 (V-320) reactors in RIA analysis. In this paper, ideas for the uncertainty evaluation of generic safety parameters in RIA analysis in connection with the use of the 3D neutron kinetic code DYN3D and the GRS SUSA approach are presented.

  16. Consolidated Fuel Reprocessing Program. Operating experience with pulsed-column holdup estimators

    International Nuclear Information System (INIS)

    Ehinger, M.H.

    1986-01-01

    Methods for estimating pulsed-column holdup are being investigated as part of the Safeguards Assessment task of the Consolidated Fuel Reprocessing Program (CFRP) at the Oak Ridge National Laboratory. The CFRP was a major sponsor of test runs at the Barnwell Nuclear Fuel plant (BNFP) in 1980 and 1981. During these tests, considerable measurement data were collected for pulsed columns in the plutonium purification portion of the plant. These data have been used to evaluate and compare three available methods of holdup estimation

  17. The role of efficiency estimates in regulatory price reviews: Ofgem's approach to benchmarking electricity networks

    International Nuclear Information System (INIS)

    Pollitt, Michael

    2005-01-01

    Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting. (author)

  18. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    International Nuclear Information System (INIS)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji

    2010-01-01

    This paper formulates and estimates petroleum products demand functions in Nigeria at both aggregative and product level for the period 1977 to 2006 using multivariate cointegration approach. The estimated short and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)
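
    A simplified, single-equation illustration of long-run elasticity estimation is sketched below: a log-log cointegrating regression estimated by OLS with an ADF test on the residuals (Engle-Granger style). The paper uses a multivariate cointegration framework, so this is only a schematic analogue on synthetic data.

```python
# Simplified illustration of long-run elasticities from a log-log cointegrating
# regression with an ADF test on the residuals. Synthetic data; the paper uses
# a multivariate cointegration approach.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T = 30                                                   # annual data, 1977-2006
log_income = np.cumsum(rng.normal(0.03, 0.02, T))        # trending log GDP
log_price = np.cumsum(rng.normal(0.05, 0.10, T))         # trending log real price
log_demand = 0.8 * log_income - 0.3 * log_price + rng.normal(0, 0.05, T)

X = sm.add_constant(pd.DataFrame({"log_price": log_price, "log_income": log_income}))
ols = sm.OLS(log_demand, X).fit()
print(ols.params)                                        # long-run price and income elasticities

adf_stat, pvalue, *_ = adfuller(ols.resid)               # stationary residuals -> cointegration
print(f"ADF on residuals: stat={adf_stat:.2f}, p={pvalue:.3f}")
```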

  19. Estimating petroleum products demand elasticities in Nigeria. A multivariate cointegration approach

    Energy Technology Data Exchange (ETDEWEB)

    Iwayemi, Akin; Adenikinju, Adeola; Babatunde, M. Adetunji [Department of Economics, University of Ibadan, Ibadan (Nigeria)

    2010-01-15

    This paper formulates and estimates petroleum products demand functions in Nigeria at both aggregative and product level for the period 1977 to 2006 using multivariate cointegration approach. The estimated short and long-run price and income elasticities confirm conventional wisdom that energy consumption responds positively to changes in GDP and negatively to changes in energy price. However, the price and income elasticities of demand varied according to product type. Kerosene and gasoline have relatively high short-run income and price elasticities compared to diesel. Overall, the results show petroleum products to be price and income inelastic. (author)

  20. Estimation and Comparison of Underground Economy in Croatia and European Union Countries: Fuzzy Logic Approach

    Directory of Open Access Journals (Sweden)

    Kristina Marsic

    2016-06-01

    The purpose of this paper is to address the estimation of the underground economy in three ways. First, we review existing estimates of the size of the underground economy. Second, we apply a novel calculation method for estimation: fuzzy logic. Third, we calculate and compare an underground economy index for 25 European Union countries, with a special focus on the Croatian underground economy index. The results indicate that Croatia has the thirteenth largest underground economy among the measured members of the European Union. This study is the first of its kind to use recent data to measure the size of the underground economy in European Union countries by employing a fuzzy logic approach.

  1. A New Approach to Programming Language Education for Beginners with Top-Down Learning

    Directory of Open Access Journals (Sweden)

    Daisuke Saito

    2013-12-01

    Full Text Available There are two basic approaches to learning a new programming language: a bottom-up approach and a top-down approach. It has been said that if a learner has already acquired one language, the top-down approach is more efficient for learning another, while for a person who has absolutely no knowledge of any programming language, the bottom-up approach is preferable. The major problem with the bottom-up approach is that it requires a longer period to acquire the language. For quicker learning, this paper applies a top-down approach for beginners who have not yet acquired any programming language.

  2. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
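
    As a small illustration of what such a repository of reliability equations looks like in practice, the sketch below evaluates two textbook redundancy models (a simplex unit and triple modular redundancy) at supplied failure-rate and mission-time values; the equations are standard forms, not CARE's actual model library.

```python
# Small illustration of the idea behind CARE: evaluate closed-form reliability
# models for redundancy schemes at supplied "ground instances" of their
# variables. Textbook equations, not CARE's actual repository.
import math

def r_simplex(lam, t):
    """Single (non-redundant) unit with constant failure rate lam."""
    return math.exp(-lam * t)

def r_tmr(lam, t):
    """Triple modular redundancy with a perfect voter: 3R^2 - 2R^3."""
    r = math.exp(-lam * t)
    return 3 * r**2 - 2 * r**3

lam, t = 1e-4, 1000.0          # failures per hour, mission time in hours
print(f"simplex: {r_simplex(lam, t):.4f}, TMR: {r_tmr(lam, t):.4f}")
```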

  3. The Expected Loss in the Discretization of Multistage Stochastic Programming Problems - Estimation and Convergence Rate

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2009-01-01

    Roč. 165, č. 1 (2009), s. 29-45 ISSN 0254-5330 R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : multistage stochastic programming problems * approximation * discretization * Monte Carlo Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.961, year: 2009 http://library.utia.cas.cz/separaty/2008/E/smid-the expected loss in the discretization of multistage stochastic programming problems - estimation and convergence rate.pdf

  4. Demonstrating a small utility approach to demand-side program implementation

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-01

    The US DOE awarded a grant to the Burlington Electric Department (B.E.D.) to test a demand-side management (DSM) demonstration program designed to quickly save a significant amount of power with little disruption to the utility's customers or its normal operations. B.E.D. is a small municipal utility located in northern Vermont, with a lengthy history of successful DSM involvement. In our grant application, we proposed to develop a replicable program and approach to DSM that might be useful to other small utilities and to write a report to enable such replication. We believe that this DSM program and/or individual program components are replicable. This report is designed to allow other utilities interested in DSM to replicate this program or specific program design features to meet their DSM goals. We also wanted to use the opportunity of this grant to test the waters of residential heating fuel-switching. We hoped to test the application of one fuel-switching technology, and to benefit from the lessons learned in developing a full-scale DSM program for this end-use. To this end the pilot effort has been very successful. In the pilot we installed direct-vent gas-fired space heaters sized as supplemental heating units in 44 residences heated solely by electric resistance heat. We installed the gas space heating units at no cost to the owners or residents. We surveyed participating customers. The results of those surveys are included in this report and preliminary estimates of winter peak capacity load reductions are also noted in this report.

  5. Demonstrating a small utility approach to demand-side program implementation

    International Nuclear Information System (INIS)

    1991-01-01

    The US DOE awarded a grant to the Burlington Electric Department (B.E.D.) to test a demand-side management (DSM) demonstration program designed to quickly save a significant amount of power with little disruption to the utility's customers or its normal operations. B.E.D. is a small municipal utility located in northern Vermont, with a lengthy history of successful DSM involvement. In our grant application, we proposed to develop a replicable program and approach to DSM that might be useful to other small utilities and to write a report to enable such replication. We believe that this DSM program and/or individual program components are replicable. This report is designed to allow other utilities interested in DSM to replicate this program or specific program design features to meet their DSM goals. We also wanted to use the opportunity of this grant to test the waters of residential heating fuel-switching. We hoped to test the application of one fuel-switching technology, and to benefit from the lessons learned in developing a full-scale DSM program for this end-use. To this end the pilot effort has been very successful. In the pilot we installed direct-vent gas-fired space heaters sized as supplemental heating units in 44 residences heated solely by electric resistance heat. We installed the gas space heating units at no cost to the owners or residents. We surveyed participating customers. The results of those surveys are included in this report and preliminary estimates of winter peak capacity load reductions are also noted in this report.

  6. A fuzzy compromise programming approach for the Black-Litterman portfolio selection model

    Directory of Open Access Journals (Sweden)

    Mohsen Gharakhani

    2013-01-01

    Full Text Available In this paper, we examine an advanced optimization approach for the portfolio problem introduced by Black and Litterman to address the shortcomings of Markowitz's standard mean-variance optimization. Black and Litterman propose a new approach to estimating asset returns. They present a way to incorporate the investor's views into the asset pricing process. Since the investor's view about future asset returns is always subjective and imprecise, we can represent it by using fuzzy numbers, and the resulting model is a multi-objective linear program. Therefore, the proposed model is analyzed through a fuzzy compromise programming approach using an appropriate membership function. For this purpose, we introduce the fuzzy ideal solution concept based on investor preference and indifference relationships using the canonical representation of the proposed fuzzy numbers by means of their corresponding α-cuts. A real-world numerical example is presented in which the MSCI (Morgan Stanley Capital International) index is chosen as the target index. The results are reported for a portfolio consisting of six national indices. The performance of the proposed models is compared using several financial criteria.

  7. A Bootstrap Approach to Computing Uncertainty in Inferred Oil and Gas Reserve Estimates

    International Nuclear Information System (INIS)

    Attanasi, Emil D.; Coburn, Timothy C.

    2004-01-01

    This study develops confidence intervals for estimates of inferred oil and gas reserves based on bootstrap procedures. Inferred reserves are expected additions to proved reserves in previously discovered conventional oil and gas fields. Estimates of inferred reserves accounted for 65% of the total oil and 34% of the total gas assessed in the U.S. Geological Survey's 1995 National Assessment of oil and gas in US onshore and State offshore areas. When the same computational methods used in the 1995 Assessment are applied to more recent data, the 80-year (from 1997 through 2076) inferred reserve estimates for pre-1997 discoveries located in the lower 48 onshore and state offshore areas amounted to a total of 39.7 billion barrels of oil (BBO) and 293 trillion cubic feet (TCF) of gas. The 90% confidence interval about the oil estimate derived from the bootstrap approach is 22.4 BBO to 69.5 BBO. The comparable 90% confidence interval for the inferred gas reserve estimate is 217 TCF to 413 TCF. The 90% confidence interval describes the uncertainty that should be attached to the estimates. It also provides a basis for developing scenarios to explore the implications for energy policy analysis
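
    The sketch below shows a percentile-bootstrap confidence interval for a total, resampling field-level contributions; the synthetic lognormal data and the 90% level mirror the flavour of the study but none of its actual inputs.

```python
# Sketch of a percentile bootstrap confidence interval for a total estimate,
# resampling field-level inferred-reserve contributions. Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)
field_additions = rng.lognormal(mean=0.0, sigma=1.2, size=500)   # toy field-level data

def bootstrap_ci(x, stat=np.sum, n_boot=10000, alpha=0.10):
    boot = np.array([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return stat(x), lo, hi

total, lo, hi = bootstrap_ci(field_additions)
print(f"point estimate {total:.1f}, 90% CI [{lo:.1f}, {hi:.1f}]")
```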

  8. Advanced Transportation System Studies. Technical Area 3: Alternate Propulsion Subsystems Concepts. Volume 3; Program Cost Estimates

    Science.gov (United States)

    Levack, Daniel J. H.

    2000-01-01

    The objective of this contract was to provide definition of alternate propulsion systems for both earth-to-orbit (ETO) and in-space vehicles (upper stages and space transfer vehicles). For such propulsion systems, technical data to describe performance, weight, dimensions, etc. was provided along with programmatic information such as cost, schedule, needed facilities, etc. Advanced technology and advanced development needs were determined and provided. This volume separately presents the various program cost estimates that were generated under three tasks: the F-1A Restart Task, the J-2S Restart Task, and the SSME Upper Stage Use Task. The conclusions, technical results, and the program cost estimates are described in more detail in Volume I - Executive Summary and in individual Final Task Reports.

  9. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
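
    To make the LP structure concrete, the sketch below writes a single column of the CLIME problem (minimize the L1 norm subject to an infinity-norm constraint on the sample covariance times the column) as a linear program and solves it with SciPy's generic solver. It illustrates why an efficient LP solver matters, but it is not the parametric simplex algorithm implemented in fastclime, and the data and tuning parameter are arbitrary.

```python
# Sketch: one column of the CLIME estimator as a linear program, solved with a
# generic LP solver -- an illustration of the LP structure, not fastclime.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
p, n, lam = 8, 200, 0.2
X = rng.normal(size=(n, p))
S = np.cov(X, rowvar=False)                    # sample covariance

def clime_column(S, j, lam):
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    # Variables: beta = u - v with u, v >= 0; minimize sum(u) + sum(v) = ||beta||_1.
    c = np.ones(2 * p)
    A = np.vstack([np.hstack([S, -S]),         #  S(u - v) <= e + lam
                   np.hstack([-S, S])])        # -S(u - v) <= -e + lam
    b = np.concatenate([e + lam, -e + lam])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

print(np.round(clime_column(S, 0, lam), 3))    # first column of the precision estimate
```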

  10. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    Cabedo, J.D.; Moya, I.

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity model forecasts. The results obtained indicate that the HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification. (author)
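
    A minimal sketch of the HSAF idea as described: fit an ARMA model to price changes, then add a low quantile of its residuals to the one-step forecast to obtain a VaR figure, alongside the plain historical-simulation quantile for comparison. The ARMA order and the synthetic return series are assumptions, not the paper's setup.

```python
# Sketch of the HSAF idea: combine an ARMA forecast of price changes with the
# empirical quantile of its residuals. Synthetic data, assumed ARMA(1,1) order.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
returns = rng.normal(0.0, 0.02, 1000)            # placeholder daily price changes

fit = ARIMA(returns, order=(1, 0, 1)).fit()      # ARMA(1,1) on the changes
forecast = fit.forecast(steps=1)[0]              # expected change for tomorrow
residual_q = np.percentile(fit.resid, 1)         # 1% quantile of model residuals

var_99 = forecast + residual_q                   # HSAF-style 99% one-day VaR
hist_var_99 = np.percentile(returns, 1)          # plain historical-simulation VaR
print(f"HSAF-style VaR: {var_99:.4f}, historical VaR: {hist_var_99:.4f}")
```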

  11. Estimating oil price 'Value at Risk' using the historical simulation approach

    International Nuclear Information System (INIS)

    David Cabedo, J.; Moya, Ismael

    2003-01-01

    In this paper we propose using Value at Risk (VaR) for oil price risk quantification. VaR provides an estimation for the maximum oil price change associated with a likelihood level, and can be used for designing risk management strategies. We analyse three VaR calculation methods: the historical simulation standard approach, the historical simulation with ARMA forecasts (HSAF) approach, developed in this paper, and the variance-covariance method based on autoregressive conditional heteroskedasticity models forecasts. The results obtained indicate that HSAF methodology provides a flexible VaR quantification, which fits the continuous oil price movements well and provides an efficient risk quantification

  12. Two Approaches for Estimating Discharge on Ungauged Basins in Oregon, USA

    Science.gov (United States)

    Wigington, P. J.; Leibowitz, S. G.; Comeleo, R. L.; Ebersole, J. L.; Copeland, E. A.

    2009-12-01

    Detailed information on the hydrologic behavior of streams is available for only a small proportion of all streams. Even in cases where discharge has been monitored, these measurements may not be available for a sufficiently long period to characterize the full behavior of a stream. In this presentation, we discuss two separate approaches for predicting discharge at ungauged locations. The first approach models discharge in the Calapooia Watershed, Oregon based on long-term US Geological Survey gauge stations located in two adjacent watersheds. Since late 2008, we have measured discharge and water level over a range of flow conditions at more than a dozen sites within the Calapooia. Initial results indicate that many of these sites, including the mainstem Calapooia and some of its tributaries, can be predicted by these outside gauge stations and simple landscape factors. This is not a true “ungauged” approach, since measurements are required to characterize the range of flow. However, the approach demonstrates how such measurements and more complete data from similar areas can be used to estimate a detailed record for a longer period. The second approach estimates 30 year average monthly discharge at ungauged locations based on a Hydrologic Landscape Region (HLR) model. We mapped HLR class over the entire state of Oregon using an assessment unit with an average size of 44 km2. We then calculated average statewide moisture surplus values for each HLR class, modified to account for snowpack accumulation and snowmelt. We calculated potential discharge by summing these values for each HLR within a watershed. The resulting monthly hydrograph is then transformed to estimate monthly discharge, based on aquifer and soil permeability and terrain. We hypothesize that these monthly values should provide good estimates of discharge in areas where imports from or exports to the deep groundwater system are not significant. We test the approach by comparing results with

  13. Deemed Savings Estimates for Legacy Air Conditioning and Water Heating Direct Load Control Programs in PJM Region

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles

    2007-03-01

    During 2005 and 2006, the PJM Interconnection (PJM) Load Analysis Subcommittee (LAS) examined ways to reduce the costs and improve the effectiveness of its existing measurement and verification (M&V) protocols for Direct Load Control (DLC) programs. The current M&V protocol requires that a PURPA-compliant Load Research study be conducted every five years for each Load-Serving Entity (LSE). The current M&V protocol is expensive to implement and administer, particularly for mature load control programs, some of which are marginally cost-effective. There was growing evidence that some LSEs were mothballing or dropping their DLC programs rather than incur the expense associated with the M&V. This project had several objectives: (1) examine the potential for developing deemed savings estimates acceptable to PJM for legacy air conditioning and water heating DLC programs, and (2) explore the development of a collaborative, regional, consensus-based approach for conducting monitoring and verification of load reductions for emerging load management technologies for customers that do not have interval metering capability.

  14. A catalytic approach to estimate the redox potential of heme-peroxidases

    International Nuclear Information System (INIS)

    Ayala, Marcela; Roman, Rosa; Vazquez-Duhalt, Rafael

    2007-01-01

    The redox potential of heme-peroxidases varies according to a combination of structural components within the active site and its vicinities. For each peroxidase, this redox potential imposes a thermodynamic threshold to the range of oxidizable substrates. However, the instability of enzymatic intermediates during the catalytic cycle precludes the use of direct voltammetry to measure the redox potential of most peroxidases. Here we describe a novel approach to estimate the redox potential of peroxidases, which directly depends on the catalytic performance of the activated enzyme. Selected p-substituted phenols are used as substrates for the estimations. The results obtained with this catalytic approach correlate well with the oxidative capacity predicted by the redox potential of the Fe(III)/Fe(II) couple

  15. Estimating absolute configurational entropies of macromolecules: the minimally coupled subspace approach.

    Directory of Open Access Journals (Sweden)

    Ulf Hensen

    Full Text Available We develop a general minimally coupled subspace approach (MCSA) to compute absolute entropies of macromolecules, such as proteins, from computer-generated canonical ensembles. Our approach overcomes limitations of current estimates such as the quasi-harmonic approximation which neglects non-linear and higher-order correlations as well as multi-minima characteristics of protein energy landscapes. Here, Full Correlation Analysis, adaptive kernel density estimation, and mutual information expansions are combined and high accuracy is demonstrated for a number of test systems ranging from alkanes to a 14 residue peptide. We further computed the configurational entropy for the full 67-residue cofactor of the TATA box binding protein illustrating that MCSA yields improved results also for large macromolecular systems.

  16. Approaches in estimation of external cost for fuel cycles in the ExternE project

    International Nuclear Information System (INIS)

    Afanas'ev, A.A.; Maksimenko, B.N.

    1998-01-01

    The purposes, content and main results of studies carried out within the framework of the international ExternE project, the first comprehensive attempt to develop a general approach to estimating the external cost of different fuel cycles based on nuclear and fossil fuels as well as on renewable power sources, are discussed. The external cost of a fuel cycle is treated as the social and environmental expenditures which are not taken into account by energy producers and consumers, i.e. expenditures not currently included in the commercial cost. It is concluded that the suggested approach is applicable for estimating population health hazards and environmental impacts associated with the growth of electric power generation (expressed in monetary or other terms).

  17. RiD: A New Approach to Estimate the Insolvency Risk

    Directory of Open Access Journals (Sweden)

    Marco Aurélio dos Santos Sanfins

    2014-10-01

    Full Text Available Given the recent international crises and the increasing number of defaults, several researchers have attempted to develop metrics that calculate the probability of insolvency with higher accuracy. The approaches commonly used, however, consider neither the credit risk nor the severity of the distance between receivables and obligations across different periods. In this paper we mathematically present an approach that allows us to estimate the insolvency risk by considering not only future receivables and obligations, but also the severity of the distance between them and the quality of the respective receivables. Using Monte Carlo simulations and hypothetical examples, we show that our metric is able to estimate the insolvency risk with high accuracy. Moreover, our results suggest that in the absence of a smooth distribution between receivables and obligations, there is a non-null insolvency risk even when the present value of receivables is larger than the present value of the obligations.

  18. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the difference between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone, impedance-tube method, and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess sensitivity of the method to nuisance parameters.

  19. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    Science.gov (United States)

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known clustering algorithm k-means, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
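
    The sketch below mimics the setup on a small scale: candidate numbers of clusters are the alternatives, two validity indices are the criteria, and a simple Borda-style rank aggregation stands in for the MCDM methods used in the paper (which employ more criteria and more elaborate aggregation).

```python
# Sketch: candidate k values as alternatives, validity indices as criteria, and
# a Borda-style rank aggregation as a simple stand-in for an MCDM method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, calinski_harabasz_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = range(2, 9)
scores = []
for k in ks:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores.append((silhouette_score(X, labels), calinski_harabasz_score(X, labels)))
scores = np.array(scores)

# Rank each criterion (higher is better) and sum the ranks across criteria.
ranks = scores.argsort(axis=0).argsort(axis=0)
best_k = list(ks)[int(ranks.sum(axis=1).argmax())]
print("selected number of clusters:", best_k)
```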

  20. A novel approach based on preference-based index for interval bilevel linear programming problem.

    Science.gov (United States)

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation ⪯mw. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  1. A novel approach based on preference-based index for interval bilevel linear programming problem

    Directory of Open Access Journals (Sweden)

    Aihong Ren

    2017-05-01

    Full Text Available This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation $\preceq_{mw}$. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  2. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    Science.gov (United States)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with

  3. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T), which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. The consideration of irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
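
    The uncertainty measure R(T) defined above is easy to compute from the decision column alone; the following minimal sketch counts the unordered pairs of rows with different decisions via decision counts (the toy decision column is made up).

        from collections import Counter

        def uncertainty_R(decisions):
            # R(T): number of unordered pairs of rows of T with different decisions,
            # computed as all pairs minus same-decision pairs.
            n = len(decisions)
            same = sum(c * (c - 1) // 2 for c in Counter(decisions).values())
            return n * (n - 1) // 2 - same

        # Example decision column of a small table T: 2 "yes" rows and 3 "no" rows.
        print(uncertainty_R(["yes", "yes", "no", "no", "no"]))  # 2 * 3 = 6 mixed pairs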

  4. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
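
    For comparison with R(T) above, the measure J(T) used here depends only on the frequency of the most common decision; a minimal sketch (with a made-up decision column) follows.

        from collections import Counter

        def uncertainty_J(decisions):
            # J(T): number of rows of T minus the number of rows with the most
            # common decision for T.
            if not decisions:
                return 0
            return len(decisions) - Counter(decisions).most_common(1)[0][1]

        # A gamma-decision rule may stop partitioning a subtable T' once J(T') <= gamma.
        print(uncertainty_J(["no", "no", "no", "yes"]))  # 4 - 3 = 1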

  5. Discovery of Boolean metabolic networks: integer linear programming based approach.

    Science.gov (United States)

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy are produced when unintended targets are affected in metabolic networks. Thus, identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify the drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on the Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at " http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip ".
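
    To make the flavour of such an Integer Linear Programming formulation concrete, here is a toy binary ILP in the same spirit: choose enzymes to inhibit so that a target compound is disabled while the total "damage" to other compounds is minimized. The damage scores, the coverage vector and the use of scipy's milp solver are all assumptions for illustration; they are not the authors' model or data.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        damage = np.array([5.0, 2.0, 3.0, 4.0])   # hypothetical damage score per candidate enzyme
        cover = np.array([1, 0, 1, 1])            # 1 if inhibiting that enzyme disables the target

        # Require at least one covering enzyme to be selected: cover @ x >= 1.
        constraints = LinearConstraint(cover.reshape(1, -1), lb=1, ub=np.inf)

        res = milp(c=damage,                      # minimize total collateral damage
                   integrality=np.ones(4),        # all decision variables are integer
                   bounds=Bounds(0, 1),           # 0/1 bounds make them binary
                   constraints=constraints)
        print("selected enzymes:", np.flatnonzero(res.x > 0.5), "damage:", res.fun)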

  6. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to length or coverage. We introduce an uncertainty measure J(T), which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". The algorithm stops partitioning a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  7. Replacement model of city bus: A dynamic programming approach

    Science.gov (United States)

    Arifin, Dadang; Yusuf, Edhi

    2017-06-01

    This paper aims to develop a replacement model for the city bus fleet operated in Bandung City. The study is driven by real cases encountered by the Damri Company in its efforts to improve services to the public. The replacement model considers two policy alternatives: first, to keep the vehicles, and second, to replace them with new ones, taking into account operating costs, revenue, salvage value, and the acquisition cost of a new vehicle. A deterministic dynamic programming approach is used to solve the model. The optimization process was executed heuristically using empirical data from Perum Damri. The output of the model is the replacement schedule and the best policy once a vehicle has passed its economic life. Based on the results, the technical life of a bus is approximately 20 years, while the economic life averages 9 (nine) years. This means that after a bus has been operated for 9 (nine) years, managers should consider rejuvenating the fleet.
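
    A minimal sketch of such a keep-versus-replace dynamic program is shown below; the horizon, the technical life, and the operating cost, revenue and salvage functions are hypothetical placeholders, not Perum Damri figures.

        from functools import lru_cache

        HORIZON = 20          # planning years
        MAX_AGE = 20          # technical life of a bus (years)
        ACQUISITION = 100.0   # acquisition cost of a new bus (arbitrary units)

        def operating_cost(age): return 10.0 + 2.5 * age
        def revenue(age):        return 40.0 - 1.0 * age
        def salvage(age):        return max(ACQUISITION - 8.0 * age, 5.0)

        @lru_cache(maxsize=None)
        def value(year, age):
            # Best total net benefit from `year` on, holding a bus of the given age.
            if year == HORIZON:
                return salvage(age)
            replace = salvage(age) - ACQUISITION + revenue(0) - operating_cost(0) + value(year + 1, 1)
            if age >= MAX_AGE:                      # beyond the technical life: must replace
                return replace
            keep = revenue(age) - operating_cost(age) + value(year + 1, age + 1)
            return max(keep, replace)

        def decision(year, age):
            replace = salvage(age) - ACQUISITION + revenue(0) - operating_cost(0) + value(year + 1, 1)
            if age >= MAX_AGE:
                return "replace"
            keep = revenue(age) - operating_cost(age) + value(year + 1, age + 1)
            return "keep" if keep >= replace else "replace"

        # Replacement schedule for a bus that is one year old at the start of the plan.
        age = 1
        for year in range(6):
            act = decision(year, age)
            print(f"year {year}, age {age}: {act}")
            age = 1 if act == "replace" else age + 1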

  8. 2003 status report: Savings estimates for the ENERGY STAR(R) voluntary labeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla

    2004-11-09

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2002, what we expect in 2003, and provide savings forecasts for two market penetration scenarios for the period 2003 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  9. 2002 status report: Savings estimates for the ENERGY STAR(R) voluntary labeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan

    2003-03-03

    ENERGY STAR [registered trademark] is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2001, what we expect in 2002, and provide savings forecasts for two market penetration scenarios for the period 2002 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  10. Savings estimates for the ENERGY STAR (registered trademark) voluntary labeling program: 2001 status report

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; Mahajan, Akshay; Koomey, Jonathan G.

    2002-02-15

    ENERGY STAR(Registered Trademark) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2000, what we expect in 2001, and provide savings forecasts for two market penetration scenarios for the period 2001 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  11. Savings estimates for the ENERGY STAR (registered trademark) voluntary labeling program: 2001 status report

    International Nuclear Information System (INIS)

    Webber, Carrie A.; Brown, Richard E.; Mahajan, Akshay; Koomey, Jonathan G.

    2002-01-01

    ENERGY STAR(Registered Trademark) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR program activities, focused primarily on labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2000, what we expect in 2001, and provide savings forecasts for two market penetration scenarios for the period 2001 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period

  12. Sensitivity of Hurst parameter estimation to periodic signals in time series and filtering approaches

    Science.gov (United States)

    Marković, D.; Koch, M.

    2005-09-01

    The influence of periodic signals in time series on the Hurst parameter estimate is investigated with temporal, spectral and time-scale methods. The Hurst parameter estimates of simulated periodic time series with a white noise background show a high sensitivity to the signal-to-noise ratio and, for some methods, also to the data length used. The analysis is then extended to the investigation of extreme monthly river flows of the Elbe River (Dresden) and of the Rhine River (Kaub). Effects of removing the periodic components employing different filtering approaches are discussed, and it is shown that such procedures are a prerequisite for an unbiased estimation of H. In summary, our results imply that the first step in a time series long-correlation study should be the separation of the deterministic components from the stochastic ones. Otherwise, wrong conclusions concerning possible memory effects may be drawn.
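
    As a small, self-contained illustration of why the periodic component should be removed first, the sketch below deseasonalizes a synthetic monthly series by subtracting monthly means and then estimates H with the aggregated-variance method (the variance of block means scales as m^(2H-2)); the data, the deseasonalization step and the choice of estimator are assumptions for the example, not the authors' exact procedure.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "monthly flow": an annual cycle plus white noise (H should be ~0.5).
        n = 1200
        t = np.arange(n)
        series = 100 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, n)

        # Step 1: remove the deterministic periodic component (monthly means).
        monthly_mean = np.array([series[t % 12 == m].mean() for m in range(12)])
        residual = series - monthly_mean[t % 12]

        # Step 2: aggregated-variance estimate of H.
        block_sizes = np.array([2, 4, 8, 16, 32, 64])
        variances = [residual[: (n // m) * m].reshape(-1, m).mean(axis=1).var()
                     for m in block_sizes]
        slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
        print(f"estimated Hurst parameter H = {1 + slope / 2:.2f}")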

  13. Postmortem interval estimation: a novel approach utilizing gas chromatography/mass spectrometry-based biochemical profiling.

    Science.gov (United States)

    Kaszynski, Richard H; Nishiumi, Shin; Azuma, Takeshi; Yoshida, Masaru; Kondo, Takeshi; Takahashi, Motonori; Asano, Migiwa; Ueno, Yasuhiro

    2016-05-01

    While the molecular mechanisms underlying postmortem change have been exhaustively investigated, the establishment of an objective and reliable means of estimating postmortem interval (PMI) remains elusive. In the present study, we exploit low molecular weight metabolites to estimate postmortem interval in mice. After sacrifice, serum and muscle samples were procured from C57BL/6J mice (n = 52) at seven predetermined postmortem intervals (0, 1, 3, 6, 12, 24, and 48 h). After extraction and isolation, low molecular weight metabolites were measured via gas chromatography/mass spectrometry (GC/MS) and examined via semi-quantification studies. PMI prediction models were then generated for each of the 175 and 163 metabolites identified in muscle and serum, respectively, using a non-linear least squares curve fitting program. A PMI estimation panel for muscle and serum was then constructed, consisting of 17 (9.7%) and 14 (8.5%) of the best PMI biomarkers identified in the muscle and serum profiles, respectively, that demonstrated statistically significant correlations between metabolite quantity and PMI. Using a single-blinded assessment, we carried out validation studies on the PMI estimation panels. Mean ± standard deviation for accuracy of the muscle and serum PMI prediction panels was -0.27 ± 2.88 and -0.89 ± 2.31 h, respectively. Ultimately, these studies elucidate the utility of metabolomic profiling in PMI estimation and pave the way toward biochemical profiling studies involving human samples.
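
    A minimal sketch of the curve-fitting step for a single metabolite is given below; the saturating-exponential model form, the example data and the inversion used to predict PMI are assumptions for illustration only, not the authors' fitted models.

        import numpy as np
        from scipy.optimize import curve_fit

        # Sampling times (h) and semi-quantified level of one hypothetical metabolite.
        pmi = np.array([0, 1, 3, 6, 12, 24, 48], dtype=float)
        level = np.array([1.0, 1.2, 1.8, 2.9, 4.6, 6.8, 8.1])

        # Assumed saturating model: level = a * (1 - exp(-k * t)) + c.
        def model(t, a, k, c):
            return a * (1.0 - np.exp(-k * t)) + c

        (a, k, c), _ = curve_fit(model, pmi, level, p0=(8.0, 0.05, 1.0))

        def predict_pmi(measured_level):
            # Invert the fitted curve to read PMI off a new measurement.
            return -np.log(1.0 - (measured_level - c) / a) / k

        print(f"a={a:.2f}, k={k:.3f}, c={c:.2f}")
        print("predicted PMI for level 5.0:", round(predict_pmi(5.0), 1), "h")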

  14. Photogrammetric Resection Approach Using Straight Line Features for Estimation of Cartosat-1 Platform Parameters

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2008-08-01

    Full Text Available The classical calibration or space resection is a fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. Several available methods using lines consider the determination of the exterior orientation parameters, with no mention of the simultaneous determination of the interior orientation parameters. Normal space resection methods solve the problem using control points, whose coordinates are known in both the image and object reference systems. The non-linearity of the model and the problems in point location in digital images are the main drawbacks of the classical approaches. The line-based approach overcomes these problems by increasing the number of observations that can be provided, which significantly improves the overall system redundancy. This paper addresses a mathematical model relating both the image and object reference systems for solving the space resection problem, which is generally used for updating the exterior orientation parameters. In order to solve for the dynamic camera calibration parameters, a sequential estimator (Kalman filtering) is applied to the image in an iterative process. For the dynamic case, e.g. an image sequence of moving objects, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates can be computed from these predicted estimates using the Kalman filtering approach and a basic physical sensor model for each instant of time. The proposed approach is tested with three real data sets, and the results suggest that highly accurate space resection parameters can be obtained with or without the use of control points, together with a progressive reduction in processing time.

  15. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    Science.gov (United States)

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimate the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors is two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.
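
    The core of EGA as described (a regularized partial-correlation network followed by community detection) can be sketched in a few lines. In the sketch below the regularization is chosen by cross-validation rather than EBIC, and a modularity-based community algorithm from networkx stands in for the walktrap algorithm used in the paper; the simulated two-factor data are also an assumption of the example.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities
        from sklearn.covariance import GraphicalLassoCV

        rng = np.random.default_rng(1)

        # Simulate a 2-factor structure: items 0-4 load on factor 1, items 5-9 on factor 2.
        n, k = 500, 5
        factors = rng.normal(size=(n, 2))
        loadings = np.zeros((2, 2 * k))
        loadings[0, :k] = 0.8
        loadings[1, k:] = 0.8
        X = factors @ loadings + rng.normal(scale=0.6, size=(n, 2 * k))

        # Sparse inverse covariance (graphical lasso).
        precision = GraphicalLassoCV().fit(X).precision_

        # Weighted network of absolute partial correlations, then community detection.
        d = np.sqrt(np.diag(precision))
        partial_corr = -precision / np.outer(d, d)
        np.fill_diagonal(partial_corr, 0.0)
        G = nx.from_numpy_array(np.abs(partial_corr))
        communities = greedy_modularity_communities(G, weight="weight")
        print("estimated number of dimensions:", len(communities))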

  16. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    Science.gov (United States)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required means that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date have mostly relied on static parameter fitting against observations and have made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.

  17. A combined vision-inertial fusion approach for 6-DoF object pose estimation

    Science.gov (United States)

    Li, Juan; Bernardos, Ana M.; Tarrío, Paula; Casar, José R.

    2015-02-01

    The estimation of the 3D position and orientation of moving objects (`pose' estimation) is a critical process for many applications in robotics, computer vision or mobile services. Although major research efforts have been carried out to design accurate, fast and robust indoor pose estimation systems, it remains as an open challenge to provide a low-cost, easy to deploy and reliable solution. Addressing this issue, this paper describes a hybrid approach for 6 degrees of freedom (6-DoF) pose estimation that fuses acceleration data and stereo vision to overcome the respective weaknesses of single technology approaches. The system relies on COTS technologies (standard webcams, accelerometers) and printable colored markers. It uses a set of infrastructure cameras, located to have the object to be tracked visible most of the operation time; the target object has to include an embedded accelerometer and be tagged with a fiducial marker. This simple marker has been designed for easy detection and segmentation and it may be adapted to different service scenarios (in shape and colors). Experimental results show that the proposed system provides high accuracy, while satisfactorily dealing with the real-time constraints.

  18. A novel approach for estimating sugar and alcohol concentrations in wines using refractometer and hydrometer.

    Science.gov (United States)

    Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H

    2009-03-01

    To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
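
    One simple way to derive a planar equation of the kind described is ordinary least squares on paired readings; in the sketch below the calibration data and the resulting coefficients are purely illustrative and not the authors' empirical equations.

        import numpy as np

        # Illustrative calibration data: apparent refractometer Brix (A.R.), apparent
        # hydrometer Brix (A.H.) and alcohol content (A, % vol) measured after distillation.
        AR = np.array([8.5, 10.2, 12.0, 13.8, 15.5, 17.1])
        AH = np.array([2.1, 3.4, 4.8, 6.3, 7.9, 9.2])
        A = np.array([11.9, 11.0, 10.1, 9.0, 7.8, 6.9])

        # Fit a planar equation A = b0 + b1*A.R. + b2*A.H. by least squares.
        design = np.column_stack([np.ones_like(AR), AR, AH])
        (b0, b1, b2), *_ = np.linalg.lstsq(design, A, rcond=None)

        # Estimate the alcohol content of a new must/wine from its two apparent readings.
        new_AR, new_AH = 11.0, 4.0
        print(f"A = {b0:.2f} + {b1:.2f}*A.R. + {b2:.2f}*A.H.")
        print("estimated alcohol:", round(b0 + b1 * new_AR + b2 * new_AH, 1), "% vol")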

  19. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    Directory of Open Access Journals (Sweden)

    AMAD HAMZA

    2016-10-01

    Full Text Available In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation, a very well-known method is used: MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper comes from the fact that when the reverberant speech RT falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods.
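
    The MLE for the Rayleigh scale parameter has a closed form, which is the core numerical step of the approach described; the sketch below applies it to synthetic decay rates (the free-decay spotting step and the mapping from the fitted scale to an RT value are not reproduced here and are only noted as hypothetical).

        import numpy as np

        rng = np.random.default_rng(7)

        # Stand-in for decay rates extracted from detected free-decay regions
        # of a reverberant signal.
        decay_rates = rng.rayleigh(scale=2.5, size=400)

        # Rayleigh scale MLE: sigma_hat = sqrt( sum(x_i^2) / (2N) ).
        sigma_hat = np.sqrt(np.mean(decay_rates ** 2) / 2.0)
        print(f"Rayleigh scale MLE: {sigma_hat:.2f}")

        # A hypothetical calibration would then map sigma_hat to an RT estimate,
        # e.g. via a monotone relation fitted on signals with known reverberation time.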

  20. A Novel Approach for Blind Estimation of Reverberation Time using Rayleigh Distribution Model

    International Nuclear Information System (INIS)

    Hamza, A.; Jan, T.; Ali, A.

    2016-01-01

    In this paper a blind estimation approach is proposed which directly utilizes the reverberant signal for estimating the RT (Reverberation Time). For the estimation, a very well-known method is used: MLE (Maximum Likelihood Estimation). The distribution of the decay rate is the core of the proposed method and can be obtained from the analysis of the decay curve of the energy of the sound or from the enclosure impulse response. In a pre-existing state-of-the-art method, the Laplace distribution is used to model reverberation decay. The method proposed in this paper makes use of the Rayleigh distribution and a spotting approach for modelling the decay rate and identifying regions of free decay in the reverberant signal, respectively. The motivation for the paper comes from the fact that when the reverberant speech RT falls in a specific range, the signal's decay rate follows a Rayleigh distribution. On the basis of the results of experiments carried out for numerous reverberant signals, it is clear that the performance and accuracy of the proposed method are better than those of other pre-existing methods. (author)

  1. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrate that the proposed method is able to substantially outperform estimates based on joint or conditional class transition probabilities, which rely on training samples.

  2. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

    Full Text Available Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding time, but improving the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the field of fractal-based gray/color image and video compression; of different block matching motion estimation approaches for finding the motion vectors in a frame, based on inter-frame coding and intra-frame coding (i.e. individual frame coding); and of automata theory based coding approaches for representing an image or a sequence of images. Though different review papers exist related to fractal coding, this paper is different in several respects. One can develop a new shape pattern for motion estimation and modify the existing block matching motion estimation with automata coding to explore the fractal compression technique, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
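
    For readers new to the area, a basic full-search block-matching motion estimator using the sum of absolute differences (SAD) is sketched below; the frame sizes, block size and search range are arbitrary choices for the example, and the sketch is background for the reviewed approaches rather than a fractal coder.

        import numpy as np

        def full_search_sad(ref, cur, block=8, search=4):
            # For each block of `cur`, find the displacement within +/- `search`
            # pixels in `ref` that minimizes the sum of absolute differences.
            h, w = cur.shape
            vectors = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    target = cur[by:by + block, bx:bx + block].astype(int)
                    best = (np.inf, 0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > h or x + block > w:
                                continue
                            sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
                            if sad < best[0]:
                                best = (sad, dy, dx)
                    vectors[by // block, bx // block] = best[1:]
            return vectors

        # Toy frames: the current frame is the reference shifted down-right by 2 pixels.
        rng = np.random.default_rng(3)
        ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, 2), axis=(0, 1))
        print(full_search_sad(ref, cur)[2, 2])   # interior block: motion vector [-2 -2]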

  3. Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.

    Science.gov (United States)

    Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin

    2017-08-10

    Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.

  4. Development of a matrix approach to estimate soil clean-up levels for BTEX compounds

    International Nuclear Information System (INIS)

    Erbas-White, I.; San Juan, C.

    1993-01-01

    A draft state-of-the-art matrix approach has been developed for the State of Washington to estimate clean-up levels for benzene, toluene, ethylbenzene and xylene (BTEX) in deep soils, based on an endangerment approach to groundwater. Derived soil clean-up levels are estimated using a combination of two computer models, MULTIMED and VLEACH. The matrix uses a simple scoring system to assign a score at a given site based on parameters such as depth to groundwater, mean annual precipitation, type of soil, distance to the potential groundwater receptor and the volume of contaminated soil. The total score is then used to obtain a soil clean-up level from a table. The general approach involves the use of computer models to back-calculate soil contaminant levels in the vadose zone that would create a particular contaminant concentration in groundwater at a given receptor. This usually takes a few iterations of trial runs to estimate the clean-up levels, since the models use the soil clean-up levels as "input" and the groundwater levels as "output." The selected contaminant levels in groundwater are Model Toxics Control Act (MTCA) values used in the State of Washington.

  5. P3T+: A Performance Estimator for Distributed and Parallel Programs

    Directory of Open Access Journals (Sweden)

    T. Fahringer

    2000-01-01

    Full Text Available Developing distributed and parallel programs on today's multiprocessor architectures is still a challenging task. Particularly distressing is the lack of effective performance tools that support the programmer in evaluating changes in code, problem and machine sizes, and target architectures. In this paper we introduce P3T+, a performance estimator for mostly regular HPF (High Performance Fortran) programs that also partially covers message passing programs (MPI). P3T+ is unique in modeling programs, compiler code transformations, and parallel and distributed architectures. It computes at compile time a variety of performance parameters including work distribution, number of transfers, amount of data transferred, transfer times, computation times, and number of cache misses. Several novel technologies are employed to compute these parameters: loop iteration spaces, array access patterns, and data distributions are modeled by employing highly effective symbolic analysis. Communication is estimated by simulating the behavior of the communication library used by the underlying compiler. Computation times are predicted through pre-measured kernels on every target architecture of interest. We carefully model the most critical architecture-specific factors such as cache line sizes, number of cache lines available, startup times, message transfer time per byte, etc. P3T+ has been implemented and is closely integrated with the Vienna High Performance Compiler (VFC) to support programmers in developing parallel and distributed applications. Experimental results for realistic kernel codes taken from real-world applications are presented to demonstrate both the accuracy and usefulness of P3T+.

  6. Cost of employee assistance programs: comparison of national estimates from 1993 and 1995.

    Science.gov (United States)

    French, M T; Zarkin, G A; Bray, J W; Hartwell, T D

    1999-02-01

    The cost and financing of mental health services is gaining increasing importance with the spread of managed care and cost-cutting measures throughout the health care system. The delivery of mental health services through structured employee assistance programs (EAPs) could be undermined by revised health insurance contracts and cutbacks in employer-provided benefits at the workplace. This study uses two recently completed national surveys of EAPs to estimate the costs of providing EAP services during 1993 and 1995. EAP costs are determined by program type, worksite size, industry, and region. In addition, information on program services is reported to determine the most common types and categories of services and whether service delivery changes have occurred between 1993 and 1995. The results of this study will be useful to EAP managers, mental health administrators, and mental health services researchers who are interested in the delivery and costs of EAP services.

  7. Erlang Programming A Concurrent Approach to Software Development

    CERN Document Server

    Cesarini, Francesco

    2009-01-01

    This book offers you an in-depth explanation of Erlang, a programming language ideal for any situation where concurrency, fault-tolerance, and fast response is essential. You'll learn how to write complex concurrent programs in this language, regardless of your programming background or experience. Erlang Programming focuses on the language's syntax and semantics, and explains pattern matching, proper lists, recursion, debugging, networking, and concurrency, with exercises at the end of each chapter.

  8. Turtle Graphics implementation using a graphical dataflow programming approach

    OpenAIRE

    Lovejoy, Robert Steven

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis expands the concepts of object-oriented programming to implement a visual dataflow programming language. The main thrust of this research is to develop a functional prototype language, based upon the Turtle Graphics tool provided by the LOGO programming language, for children to develop both their problem solving skills as well as their general programming skills. The language developed for this thesis was implemented in the...

  9. A Fuzzy Linear Programming Approach for Aggregate Production Planning

    DEFF Research Database (Denmark)

    Iris, Cagatay; Cevikcan, Emre

    2014-01-01

    a mathematical programming framework for the aggregate production planning problem under an imprecise data environment. After providing background information about the APP problem, together with fuzzy linear programming, the fuzzy linear programming model of APP is solved on an illustrative example for different a...

  10. A fixed recourse integer programming approach towards a ...

    African Journals Online (AJOL)

    Regardless of the success that linear programming and integer linear programming have had in applications in engineering, business and economics, one has to challenge the assumed reality that these optimization models represent. In this paper the certainty assumptions of an integer linear program application is ...

  11. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    Science.gov (United States)

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
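
    A minimal sketch of the modified Poisson approach on simulated clustered data is given below, using statsmodels' GEE with a Poisson family and an exchangeable working correlation (robust standard errors are the GEE default); the simulated data, cluster sizes and true relative risk are assumptions of the example, not the article's data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(11)

        # Simulated clustered prospective data: 100 clusters of 10 subjects,
        # binary treatment, binary outcome, true relative risk about 1.5.
        n_clusters, m = 100, 10
        cluster = np.repeat(np.arange(n_clusters), m)
        treat = rng.integers(0, 2, size=n_clusters * m)
        cluster_effect = np.repeat(rng.normal(0, 0.2, n_clusters), m)
        p = np.clip(0.2 * np.exp(np.log(1.5) * treat + cluster_effect), 0, 1)
        y = rng.binomial(1, p)

        df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})
        X = sm.add_constant(df[["treat"]])

        # Modified Poisson regression with clustering handled by GEE.
        model = sm.GEE(df["y"], X, groups=df["cluster"],
                       family=sm.families.Poisson(),
                       cov_struct=sm.cov_struct.Exchangeable())
        result = model.fit()
        print(f"estimated relative risk: {np.exp(result.params['treat']):.2f}")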

  12. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^{-1}). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  13. A logic programming approach to medical errors in imaging.

    Science.gov (United States)

    Rodrigues, Susana; Brandão, Paulo; Nelas, Luís; Neves, José; Alves, Victor

    2011-09-01

    In 2000, the Institute of Medicine reported disturbing numbers on the scope and impact of medical error in the process of health delivery. Nevertheless, a solution to this problem may lie in the adoption of adverse event reporting and learning systems that can help to identify hazards and risks. It is crucial to apply models that identify the root causes of adverse events and enhance the sharing of knowledge and experience. Progress in the efforts to improve patient safety has been frustratingly slow. Some of this lack of progress may be attributed to the absence of systems that take into account the characteristics of information about the real world. In our daily lives, we formulate most of our decisions based on incomplete, uncertain and even forbidden or contradictory information. One's knowledge is less based on exact facts and more on hypotheses, perceptions or indications. From the data collected in our adverse event treatment and learning system on medical imaging, and through the use of Extended Logic Programming for knowledge representation and reasoning, and the exploitation of new methodologies for problem solving, namely those based on the notion of an agent and/or multi-agent systems, we intend to generate reports that identify the most relevant causes of error and define improvement strategies, drawing conclusions about the impact, place of occurrence, and form or type of event recorded in the healthcare institutions. The Eindhoven Classification Model was extended and adapted to the medical imaging field and used to classify the root causes of adverse events. Extended Logic Programming was used for knowledge representation with defective information, allowing for the modelling of the universe of discourse in terms of data and knowledge defaults. A systematization of the evolution of the body of knowledge about Quality of Information embedded in the Root Cause Analysis was accomplished. An adverse event reporting and learning system

  14. A New Approach to Commercialization of NASA's Human Research Program Technologies, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This Phase I SBIR proposal describes, "A New Approach to Commercialization of NASA's Human Research Program Technologies." NASA has a powerful research program that...

  15. A general approach for the estimation of loss of life due to natural and technological disasters

    International Nuclear Information System (INIS)

    Jonkman, S.N.; Lentz, A.; Vrijling, J.K.

    2010-01-01

    In assessing the safety of engineering systems in the context of quantitative risk analysis one of the most important consequence types concerns the loss of life due to accidents and disasters. In this paper, a general approach for loss of life estimation is proposed which includes three elements: (1) the assessment of physical effects associated with the event; (2) determination of the number of exposed persons (taking into account warning and evacuation); and (3) determination of mortality amongst the population exposed. The typical characteristics of and modelling approaches for these three elements are discussed. This paper focuses on 'small probability-large consequences' events within the engineering domain. It is demonstrated how the proposed approach can be applied to various case studies, such as tunnel fires, earthquakes and flood events.
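
    The three elements combine multiplicatively per affected zone, which can be written down directly; in the sketch below the zone populations, evacuation fractions and the depth-mortality relation are hypothetical numbers chosen only to illustrate the structure of the calculation.

        def expected_loss_of_life(zones):
            # Combine the three elements: (1) physical effect intensity,
            # (2) exposed population after warning and evacuation,
            # (3) mortality among those exposed.
            total = 0.0
            for zone in zones:
                exposed = zone["population"] * (1.0 - zone["evacuated_fraction"])
                total += exposed * zone["mortality"](zone["effect_intensity"])
            return total

        # Hypothetical flood example: mortality rises with water depth (illustrative only).
        depth_mortality = lambda depth_m: min(0.001 * depth_m ** 2, 1.0)
        zones = [
            {"population": 20000, "evacuated_fraction": 0.80,
             "effect_intensity": 3.0, "mortality": depth_mortality},
            {"population": 5000, "evacuated_fraction": 0.50,
             "effect_intensity": 1.0, "mortality": depth_mortality},
        ]
        print(round(expected_loss_of_life(zones), 1))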

  16. Technical Approach and Plan for Transitioning Spent Nuclear Fuel (SNF) Project Facilities to the Environmental Restoration Program

    International Nuclear Information System (INIS)

    SKELLY, W.A.

    1999-01-01

    This document describes the approach and process by which the 100-K Area Facilities are to be deactivated and transitioned to the Environmental Restoration Program after spent nuclear fuel has been removed from the K Basins. It describes the Transition Project's scope and objectives, work breakdown structure, activity planning, estimated cost, and schedule. This report will be utilized as a planning document for project management and control and to communicate details of project content and integration.

  17. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    Science.gov (United States)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
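
    The core of the super-ensemble idea (year-specific GEV parameters, Monte Carlo draws of whole design lifespans, design values read from percentiles of the lifetime maxima) can be sketched compactly; the parameter drifts, lifespan and sample sizes below are made-up stand-ins for the year-by-year GEV fits from the VIC simulations, not results from the study.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(2024)

        lifespan = 50            # design lifespan in years
        n_real = 10000           # Monte Carlo realizations of the lifespan

        # Hypothetical non-stationary GEV parameters drifting over the lifespan.
        years = np.arange(lifespan)
        loc = 100.0 + 0.5 * years
        scale = 20.0 + 0.1 * years
        shape = -0.1             # scipy's shape parameter c, held constant here

        # Super ensemble: one annual maximum per year per realization.
        annual_maxima = genextreme.rvs(shape, loc=loc, scale=scale,
                                       size=(n_real, lifespan), random_state=rng)
        lifetime_max = annual_maxima.max(axis=1)

        # Compare a stationary "50-year event" under year-0 climate with the
        # distribution of maxima actually experienced over the design lifespan.
        stationary_50yr = genextreme.ppf(0.98, shape, loc=loc[0], scale=scale[0])
        print("stationary 50-yr event (year-0 climate):", round(stationary_50yr, 1))
        print("median lifetime maximum (non-stationary):", round(float(np.median(lifetime_max)), 1))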

  18. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    Directory of Open Access Journals (Sweden)

    Jan Horbowy

    Full Text Available Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery dependent data are available for periods before surveys were conducted. Methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (from when analytical biomass estimates are available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
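
    One way to run a Schaefer-type surplus production model backwards is to solve the forward recursion for the earlier biomass numerically, year by year; in the sketch below the growth rate r, carrying capacity K, catch series and starting biomass are invented numbers, and the plain Schaefer form is only an assumption standing in for the modified model used in the paper.

        import numpy as np
        from scipy.optimize import brentq

        # Schaefer surplus production: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]
        r, K = 0.4, 1000.0

        def backward_step(b_next, catch):
            # Solve the forward equation for B[t] given B[t+1] and the catch C[t].
            f = lambda b: b + r * b * (1.0 - b / K) - catch - b_next
            return brentq(f, 1e-6, K)          # biomass assumed to lie between 0 and K

        catches = [60.0, 80.0, 120.0, 90.0, 70.0]   # pre-assessment catches, oldest to newest
        b_assessed = 550.0                          # first analytically assessed biomass

        # Reconstruct biomass backwards through the pre-assessment years.
        biomass = [b_assessed]
        for c in reversed(catches):
            biomass.append(backward_step(biomass[-1], c))
        biomass.reverse()
        print(np.round(biomass, 1))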

  19. Estimating shaking-induced casualties and building damage for global earthquake events: a proposed modelling approach

    Science.gov (United States)

    So, Emily; Spence, Robin

    2013-01-01

    Recent earthquakes such as the Haiti earthquake of 12 January 2010 and the Qinghai earthquake on 14 April 2010 have highlighted the importance of rapid estimation of casualties after the event for humanitarian response. Both of these events resulted in surprisingly high death tolls, casualties and survivors made homeless. In the Mw = 7.0 Haiti earthquake, over 200,000 people perished with more than 300,000 reported injuries and 2 million made homeless. The Mw = 6.9 earthquake in Qinghai resulted in over 2,000 deaths with a further 11,000 people with serious or moderate injuries and 100,000 people left homeless in this mountainous region of China. In such events relief efforts can benefit significantly from the availability of rapid estimation and mapping of expected casualties. This paper contributes to ongoing global efforts to estimate probable earthquake casualties very rapidly after an earthquake has taken place. The analysis uses the assembled empirical damage and casualty data in the Cambridge Earthquake Impacts Database (CEQID) and explores data by event and across events to test the relationships of building and fatality distributions to the main explanatory variables of building type, building damage level and earthquake intensity. The prototype global casualty estimation model described here uses a semi-empirical approach that estimates damage rates for different classes of buildings present in the local building stock, and then relates fatality rates to the damage rates of each class of buildings. This approach accounts for the effect of the very different types of buildings (by climatic zone, urban or rural location, culture, income level etc) on casualties. The resulting casualty parameters were tested against the overall casualty data from several historical earthquakes in CEQID; a reasonable fit was found.

  20. Estimating the financial resources needed for local public health departments in Minnesota: a multimethod approach.

    Science.gov (United States)

    Riley, William; Briggs, Jill; McCullough, Mac

    2011-01-01

    This study presents a model for determining total funding needed for individual local health departments. The aim is to determine the financial resources needed to provide services for statewide local public health departments in Minnesota based on a gaps analysis done to estimate the funding needs. We used a multimethod analysis consisting of 3 approaches to estimate gaps in local public health funding consisting of (1) interviews of selected local public health leaders, (2) a Delphi panel, and (3) a Nominal Group Technique. On the basis of these 3 approaches, a consensus estimate of funding gaps was generated for statewide projections. The study includes an analysis of cost, performance, and outcomes from 2005 to 2007 for all 87 local governmental health departments in Minnesota. For each of the methods, we selected a panel to represent a profile of Minnesota health departments. The 2 main outcome measures were local-level gaps in financial resources and total resources needed to provide public health services at the local level. The total public health expenditure in Minnesota for local governmental public health departments was $302 million in 2007 ($58.92 per person). The consensus estimate of the financial gaps in local public health departments indicates that an additional $32.5 million (a 10.7% increase or $6.32 per person) is needed to adequately serve public health needs in the local communities. It is possible to make informed estimates of funding gaps for public health activities on the basis of a combination of quantitative methods. There is a wide variation in public health expenditure at the local levels, and methods are needed to establish minimum baseline expenditure levels to adequately treat a population. The gaps analysis can be used by stakeholders to inform policy makers of the need for improved funding of the public health system.

  1. An Integrated Approach for Characterization of Uncertainty in Complex Best Estimate Safety Assessment

    International Nuclear Information System (INIS)

    Pourgol-Mohamad, Mohammad; Modarres, Mohammad; Mosleh, Ali

    2013-01-01

    This paper discusses an approach called Integrated Methodology for Thermal-Hydraulics Uncertainty Analysis (IMTHUA) to characterize and integrate a wide range of uncertainties associated with the best estimate models and complex system codes used for nuclear power plant safety analyses. Examples of applications include complex thermal hydraulic and fire analysis codes. In identifying and assessing uncertainties, the proposed methodology treats the complex code as a 'white box', thus explicitly treating internal sub-model uncertainties in addition to the uncertainties related to the inputs to the code. The methodology accounts for uncertainties related to experimental data used to develop such sub-models, and efficiently propagates all uncertainties during best estimate calculations. Uncertainties are formally analyzed and probabilistically treated using a Bayesian inference framework. This comprehensive approach presents the results in a form usable in most other safety analyses such as the probabilistic safety assessment. The code output results are further updated through additional Bayesian inference using any available experimental data, for example from thermal hydraulic integral test facilities. The approach includes provisions to account for uncertainties associated with user-specified options, for example for choices among alternative sub-models, or among several different correlations. Complex time-dependent best-estimate calculations are computationally intense. The paper presents approaches to minimize computational intensity during the uncertainty propagation. Finally, the paper will report effectiveness and practicality of the methodology with two applications to a complex thermal-hydraulics system code as well as a complex fire simulation code. In case of multiple alternative models, several techniques, including dynamic model switching, user-controlled model selection, and model mixing, are discussed. (authors)

  2. On the validity of the incremental approach to estimate the impact of cities on air quality

    Science.gov (United States)

    Thunis, Philippe

    2018-01-01

    The question of how much cities are the sources of their own air pollution is not only theoretical as it is critical to the design of effective strategies for urban air quality planning. In this work, we assess the validity of the commonly used incremental approach to estimate the likely impact of cities on their air pollution. With the incremental approach, the city impact (i.e. the concentration change generated by the city emissions) is estimated as the concentration difference between a rural background and an urban background location, also known as the urban increment. We show that the city impact is in reality made up of the urban increment and two additional components and consequently two assumptions need to be fulfilled for the urban increment to be representative of the urban impact. The first assumption is that the rural background location is not influenced by emissions from within the city whereas the second requires that background concentration levels, obtained with zero city emissions, are equal at both locations. Because the urban impact is not measurable, the SHERPA modelling approach, based on a full air quality modelling system, is used in this work to assess the validity of these assumptions for some European cities. Results indicate that for PM2.5, these two assumptions are far from being fulfilled for many large or medium city sizes. For this type of cities, urban increments are largely underestimating city impacts. Although results are in better agreement for NO2, similar issues are met. In many situations the incremental approach is therefore not an adequate estimate of the urban impact on air pollution. This poses issues in terms of interpretation when these increments are used to define strategic options in terms of air quality planning. We finally illustrate the interest of comparing modelled and measured increments to improve our confidence in the model results.

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. Lord's Wald Test for Detecting DIF in Multidimensional IRT Models: A Comparison of Two Estimation Approaches

    Science.gov (United States)

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
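    For readers unfamiliar with the statistic, the sketch below shows the generic form of Lord's Wald test for a single item; it is only an illustration and does not reproduce the MMLE/MCMC estimation or the MIRT-specific steps compared in the study. The item parameters and covariance matrices are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def lords_wald_test(params_ref, cov_ref, params_focal, cov_focal):
    """Wald statistic for the difference in item-parameter estimates between a
    reference and a focal group, compared to a chi-square distribution."""
    d = np.asarray(params_ref) - np.asarray(params_focal)
    cov = np.asarray(cov_ref) + np.asarray(cov_focal)
    w = float(d @ np.linalg.solve(cov, d))
    p_value = chi2.sf(w, df=d.size)
    return w, p_value

# Hypothetical 2-parameter item (discrimination, difficulty) in the two groups.
w, p = lords_wald_test([1.2, 0.3], np.diag([0.02, 0.03]),
                       [1.0, 0.6], np.diag([0.02, 0.04]))
print(f"Wald statistic = {w:.2f}, p = {p:.3f}")
```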

  5. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    Science.gov (United States)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the landslide colluvium. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, an effective model of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. Finally

  6. Development of a low-maintenance measurement approach to continuously estimate methane emissions: A case study.

    Science.gov (United States)

    Riddick, S N; Hancock, B R; Robinson, A D; Connors, S; Davies, S; Allen, G; Pitt, J; Harris, N R P

    2018-03-01

    The chemical breakdown of organic matter in landfills represents a significant source of methane gas (CH4). Current estimates suggest that landfills are responsible for between 3% and 19% of global anthropogenic emissions. The net CH4 emissions resulting from biogeochemical processes and their modulation by microbes in landfills are poorly constrained by imprecise knowledge of environmental constraints. The uncertainty in absolute CH4 emissions from landfills is therefore considerable. This study investigates a new method to estimate the temporal variability of CH4 emissions using meteorological and CH4 concentration measurements downwind of a landfill site in Suffolk, UK from July to September 2014, taking advantage of the statistics that such a measurement approach offers versus shorter-term, but more complex and instantaneously accurate, flux snapshots. Methane emissions were calculated from CH4 concentrations measured 700 m from the perimeter of the landfill, with observed concentrations ranging from background to 46.4 ppm. Using an atmospheric dispersion model, we estimate a mean emission flux of 709 μg m^-2 s^-1 over this period, with a maximum value of 6.21 mg m^-2 s^-1, reflecting the wide natural variability in biogeochemical and other environmental controls on net site emission. The emissions calculated suggest that meteorological conditions have an influence on the magnitude of CH4 emissions. We also investigate the factors responsible for the large variability observed in the estimated CH4 emissions, and suggest that the largest component arises from uncertainty in the spatial distribution of CH4 emissions within the landfill area. The results determined using the low-maintenance approach discussed in this paper suggest that a network of cheaper, less precise CH4 sensors could be used to measure a continuous CH4 emission time series from a landfill site, something that is not practical using far-field approaches such as tracer release methods.
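    As an illustration of how a downwind concentration enhancement can be inverted to an emission rate, the sketch below uses a ground-level Gaussian plume with Briggs-type open-country dispersion coefficients for neutral stability; this is a generic stand-in, not the dispersion model actually used in the study, and all numbers are hypothetical.

```python
import numpy as np

def briggs_sigmas_class_d(x):
    """Horizontal/vertical dispersion parameters (m) at downwind distance x (m),
    Briggs open-country curves for neutral (class D) stability assumed."""
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return sigma_y, sigma_z

def emission_rate(conc_enhancement_kgm3, wind_speed, x):
    """Point-source emission rate Q (kg/s) from a centreline, ground-level
    concentration enhancement C (kg/m^3) at distance x: C = Q / (pi * u * sy * sz)."""
    sigma_y, sigma_z = briggs_sigmas_class_d(x)
    return conc_enhancement_kgm3 * np.pi * wind_speed * sigma_y * sigma_z

# Example: a 2 ppm CH4 enhancement at 700 m downwind in a 4 m/s wind.
M_CH4 = 0.01604                              # kg/mol
V_MOLAR = 8.314 * 288.15 / 101325.0          # m^3/mol of air at 15 degC, 1 atm
enhancement = 2e-6 * M_CH4 / V_MOLAR         # ppm (mole fraction) -> kg/m^3
print(emission_rate(enhancement, 4.0, 700.0))  # estimated source strength in kg/s
```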

  7. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    Science.gov (United States)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.

  8. 2005 Status Report Savings Estimates for the ENERGY STAR(R)Voluntary Labeling Program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; Sanchez, Marla

    2006-03-07

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), Energy Star labels exist for more than forty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2004, what we expect in 2005, and provide savings forecasts for two market penetration scenarios for the periods 2005 to 2010 and 2005 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  9. 2004 status report: Savings estimates for the Energy Star(R)voluntarylabeling program

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla

    2004-03-09

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2003, what we expect in 2004, and provide savings forecasts for two market penetration scenarios for the periods 2004 to 2010 and 2004 to 2020. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  10. 2007 Status Report: Savings Estimates for the ENERGY STAR(R)VoluntaryLabeling Program

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Marla; Webber, Carrie A.; Brown, Richard E.; Homan,Gregory K.

    2007-03-23

    ENERGY STAR(R) is a voluntary labeling program designed to identify and promote energy-efficient products, buildings and practices. Operated jointly by the Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE), ENERGY STAR labels exist for more than thirty products, spanning office equipment, residential heating and cooling equipment, commercial and residential lighting, home electronics, and major appliances. This report presents savings estimates for a subset of ENERGY STAR labeled products. We present estimates of the energy, dollar and carbon savings achieved by the program in the year 2006, what we expect in 2007, and provide savings forecasts for two market penetration scenarios for the periods 2007 to 2015 and 2007 to 2025. The target market penetration forecast represents our best estimate of future ENERGY STAR savings. It is based on realistic market penetration goals for each of the products. We also provide a forecast under the assumption of 100 percent market penetration; that is, we assume that all purchasers buy ENERGY STAR-compliant products instead of standard efficiency products throughout the analysis period.

  11. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  12. Best estimate approach for the evaluation of critical heat flux phenomenon in the boiling water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kaliatka, Tadas; Kaliatka, Algirdas; Uspuras, Eudenijus; Vaisnoras, Mindaugas [Lithuanian Energy Institute, Kaunas (Lithuania); Mochizuki, Hiroyasu; Rooijen, W.F.G. van [Fukui Univ. (Japan). Research Inst. of Nuclear Engineering

    2017-05-15

    Because of the uncertainties associated with the definition of Critical Heat Flux (CHF), a best estimate approach should be used. In this paper the application of the best-estimate approach to the analysis of the CHF phenomenon in boiling water reactors is presented. First, nodalizations of the RBMK-1500, BWR-5 and ABWR fuel assemblies were developed using the RELAP5 code. Using the developed models, the CHF and Critical Heat Flux Ratio (CHFR) for the different types of reactors were evaluated. The CHF calculation results were compared with well-known experimental data for light water reactors. The uncertainty and sensitivity analysis of the ABWR 8 x 8 fuel assembly CHFR calculation results was performed using the GRS (Germany) methodology with the SUSA tool. Finally, the values of Minimum Critical Power Ratio (MCPR) were calculated for the RBMK-1500, BWR-5 and ABWR fuel assemblies. The paper demonstrates how the results of the sensitivity analysis can be used to obtain MCPR values that cover all uncertainties while remaining best estimate.

  13. Artificial neural network approach to spatial estimation of wind velocity data

    International Nuclear Information System (INIS)

    Oztopal, Ahmet

    2006-01-01

    In any regional wind energy assessment, equal wind velocity or energy lines provide a common basis for meaningful interpretations that furnish essential information for proper design purposes. In order to describe regional variation, there are methods of optimum interpolation with classical weighting functions, or variogram methods within the Kriging methodology. Generally, the weighting functions are logically and geometrically deduced in a deterministic manner, and hence they are only first approximations for regional variability assessments, such as wind velocity. Geometrical weighting functions are necessary for regional estimation of the regional variable at a location with no measurement, referred to as the pivot station, from the measurements of a set of surrounding stations. In this paper, the weighting factors of surrounding stations necessary for the prediction of a pivot station are obtained with an artificial neural network (ANN) technique. The wind speed prediction results are compared with measured values at the pivot station. Daily wind velocity measurements in the Marmara region from 1993 to 1997 are considered for application of the ANN methodology. The model is more appropriate for winter-period daily wind velocities, which are significant for energy generation in the study area. Trigonometric point cumulative semivariogram (TPCSV) approach results are compared with the ANN estimations for the same set of data by considering the correlation coefficient (R). Under- and over-estimation problems in objective analysis can be avoided by the ANN approach.

  14. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    Science.gov (United States)

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, and combination of the individual segmentations by label-fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisition of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
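    Of the label-fusion rules compared in the study, majority voting is the simplest; a minimal sketch is shown below (STAPLE, SBA and SIMPLE are not reproduced). The toy atlas masks are hypothetical.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse binary masks from registered atlases by majority voting.
    atlas_labels: (n_atlases, X, Y, Z) array of 0/1 masks -> fused 0/1 mask."""
    votes = np.sum(atlas_labels, axis=0)
    return (votes > atlas_labels.shape[0] / 2.0).astype(np.uint8)

# Example with three toy 2x2x1 "atlas" skull segmentations.
atlases = np.array([
    [[[1], [0]], [[1], [1]]],
    [[[1], [1]], [[0], [1]]],
    [[[0], [0]], [[1], [1]]],
])
print(majority_vote(atlases))
```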

  15. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  16. An estimation of crude oil import demand in Turkey: Evidence from time-varying parameters approach

    International Nuclear Information System (INIS)

    Ozturk, Ilhan; Arisoy, Ibrahim

    2016-01-01

    The aim of this study is to model crude oil import demand and estimate the price and income elasticities of imported crude oil in Turkey using a time-varying parameters (TVP) approach, with the aim of obtaining more accurate and robust elasticity estimates. This study employs annual time series data on domestic oil consumption, real GDP, and oil price for the period 1966–2012. The empirical results indicate that both the income and price elasticities are in line with theoretical expectations. However, the income elasticity is statistically significant while the price elasticity is statistically insignificant. The relatively high value of the income elasticity (1.182) from this study suggests that crude oil import in Turkey is more responsive to changes in income level. This result indicates that imported crude oil is a normal good and that rising income levels will foster higher consumption of oil-based equipment, vehicles and services by economic agents. The estimated income elasticity of 1.182 suggests that imported crude oil consumption grows at a higher rate than income. This in turn reduces oil intensity over time. Therefore, crude oil import during the estimation period is substantially driven by income. - Highlights: • We estimated the price and income elasticities of imported crude oil in Turkey. • Income elasticity is statistically significant and it is 1.182. • The price elasticity is statistically insignificant. • Crude oil import in Turkey is more responsive to changes in income level. • Crude oil import during the estimation period is substantially driven by income.
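    A common way to implement a TVP regression is to treat the coefficients as a random walk estimated with a Kalman filter; the sketch below illustrates that idea under hypothetical variances and simulated data, and is not the authors' exact specification.

```python
import numpy as np

def tvp_kalman_filter(y, X, obs_var=0.01, state_var=1e-4):
    """Kalman filter for y_t = x_t' beta_t + e_t with beta_t a random walk."""
    n, k = X.shape
    beta = np.zeros(k)                 # coefficient estimate
    P = np.eye(k)                      # coefficient covariance
    Q = state_var * np.eye(k)          # random-walk innovation covariance
    betas = np.zeros((n, k))
    for t in range(n):
        P = P + Q                      # prediction step (random walk)
        x = X[t]
        S = x @ P @ x + obs_var        # innovation variance
        K = P @ x / S                  # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas

# Hypothetical annual data: columns = [1, log price, log income]
rng = np.random.default_rng(0)
n = 47
X = np.column_stack([np.ones(n), rng.normal(size=n), np.linspace(0, 2, n)])
y = X @ np.array([1.0, -0.2, 1.2]) + rng.normal(scale=0.1, size=n)
print(tvp_kalman_filter(y, X)[-1])     # coefficient estimates in the final year
```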

  17. Estimating the Risk of Tropical Cyclone Characteristics Along the United States Gulf of Mexico Coastline Using Different Statistical Approaches

    Science.gov (United States)

    Trepanier, J. C.; Ellis, K.; Jagger, T.; Needham, H.; Yuan, J.

    2017-12-01

    Tropical cyclones, with their high wind speeds, high rainfall totals and deep storm surges, frequently strike the United States Gulf of Mexico coastline, influencing millions of people and disrupting offshore economic activities. Events such as Hurricane Katrina in 2005 and Hurricane Isaac in 2012 can be physically different but still have detrimental effects because of their locations of influence. There are a wide variety of ways to estimate the risk of occurrence of extreme tropical cyclones. Here, the combined risk of tropical cyclone storm surge and nearshore wind speed is estimated using a statistical copula for 22 Gulf of Mexico coastal cities. Of the cities considered, Bay St. Louis, Mississippi has the shortest return period for a tropical cyclone with at least a 50 m s-1 nearshore wind speed and a three meter surge (19.5 years, 17.1-23.5). Additionally, a multivariate regression model is provided estimating the compound effects of tropical cyclone tracks, landfall central pressure, the amount of accumulated precipitation, and storm surge for five locations around Lake Pontchartrain in Louisiana. It is shown that the most intense tropical cyclones typically approach from the south and that a small change in the amount of rainfall or landfall central pressure leads to a large change in the final storm surge depth. Data are used from the National Hurricane Center, U-Surge, SURGEDAT, and the Cooperative Observer Program. The differences between the two statistical approaches are discussed, along with the advantages and limitations of each. The goal of combining the results of the two studies is to gain a better understanding of the most appropriate risk estimation technique for a given area.
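    The copula idea can be illustrated with a simulation-based joint return-period calculation; the sketch below assumes a Gaussian copula and hypothetical marginal distributions and event rates, none of which are taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.6                      # hypothetical Gaussian-copula correlation
annual_rate = 0.5              # hypothetical tropical cyclones per year at the site

# Simulate dependent uniforms from the Gaussian copula.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
u = stats.norm.cdf(z)

# Transform to hypothetical marginal distributions.
wind = stats.genextreme(c=-0.1, loc=35, scale=8).ppf(u[:, 0])   # nearshore wind, m/s
surge = stats.lognorm(s=0.6, scale=1.5).ppf(u[:, 1])            # storm surge, metres

# Probability that a cyclone produces >= 50 m/s wind AND >= 3 m surge.
p_joint = np.mean((wind >= 50) & (surge >= 3))
return_period = 1.0 / (annual_rate * p_joint)
print(f"joint exceedance per event: {p_joint:.4f}, return period: {return_period:.1f} years")
```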

  18. Positive Mathematical Programming Approaches – Recent Developments in Literature and Applied Modelling

    Directory of Open Access Journals (Sweden)

    Thomas Heckelei

    2012-05-01

    This paper reviews and discusses the more recent literature and application of Positive Mathematical Programming in the context of agricultural supply models. Specifically, advances in the empirical foundation of parameter specifications as well as the economic rationalisation of PMP models – both criticized in earlier reviews – are investigated. Moreover, the paper provides an overview of a larger set of models with regular/repeated policy application that apply variants of PMP. Results show that most applications today avoid arbitrary parameter specifications and rely on exogenous information on supply responses to calibrate model parameters. However, only a few approaches use multiple observations to estimate parameters, which is likely due to the still considerable technical challenges associated with it. Equally, we found only limited reflection on the behavioral or technological assumptions that could rationalise the PMP model structure while still keeping the model’s advantages.

  19. An approach for evaluating the market effects of energy efficiency programs

    International Nuclear Information System (INIS)

    Vine, E.; Prahl, R.; Meyers, S.; Turiel, I.

    2010-01-01

    This paper presents work currently being carried out in California on evaluating market effects. We first outline an approach for conducting market effect studies that includes the six key steps that were developed in study plans: (1) a scoping study that characterizes a particular market, reviews relevant market effects studies, develops integrated market and program theories, and identifies market indicators; (2) analysis of market evolution, using existing data sources; (3) analysis of market effects, based on sales data and interviews with key market actors; (4) analysis of attribution; (5) estimation of energy savings; and (6) assessment of sustainability (i.e., the extent to which any observed market effects are likely to persist in the absence or reduction of public intervention, and thus have helped to transform the market). We describe the challenges in conducting this type of analysis: (1) selecting a comparison state(s) to California for a baseline; (2) the availability and quality of data (which limits analyses); (3) inconsistent patterns of results; and (4) conducting market effects evaluations at one point in time, without the benefit of years of accumulated research findings. We then provide some suggestions for future research on the evaluation of market effects. With the promulgation of market transformation programs, the evaluation of market effects will be critical. We envision that these market effects studies will help lay the foundation for the refinement of techniques for measuring the impacts of programs that seek to transform markets for energy efficiency products and practices.

  20. Estimating wetland connectivity to streams in the Prairie Pothole Region: An isotopic and remote sensing approach

    Science.gov (United States)

    Brooks, J. R.; Mushet, David M.; Vanderhoof, Melanie; Leibowitz, Scott G.; Neff, Brian; Christensen, J. R.; Rosenberry, Donald O.; Rugh, W. D.; Alexander, L.C.

    2018-01-01

    Understanding hydrologic connectivity between wetlands and perennial streams is critical to understanding the reliance of stream flow on inputs from wetlands. We used the isotopic evaporation signal in water and remote sensing to examine wetland‐stream hydrologic connectivity within the Pipestem Creek watershed, North Dakota, a watershed dominated by prairie‐pothole wetlands. Pipestem Creek exhibited an evaporated‐water signal that had approximately half the isotopic‐enrichment signal found in most evaporatively enriched prairie‐pothole wetlands. Groundwater adjacent to Pipestem Creek had isotopic values that indicated recharge from winter precipitation and had no significant evaporative enrichment, indicating that enriched surface water did not contribute significantly to groundwater discharging into Pipestem Creek. The estimated surface water area necessary to generate the evaporation signal within Pipestem Creek was highly dynamic, varied primarily with the amount of discharge, and was typically greater than the immediate Pipestem Creek surface water area, indicating that surficial flow from wetlands contributed to stream flow throughout the summer. We propose a dynamic range of spilling thresholds for prairie‐pothole wetlands across the watershed allowing for wetland inputs even during low‐flow periods. Combining Landsat estimates with the isotopic approach allowed determination of potential (Landsat) and actual (isotope) contributing areas in wetland‐dominated systems. This combined approach can give insights into the changes in location and magnitude of surface water and groundwater pathways over time. This approach can be used in other areas where evaporation from wetlands results in a sufficient evaporative isotopic signal.

  1. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    Science.gov (United States)

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.

  2. A statistical approach to estimating soil-to-plant transfer factor of strontium in agricultural fields

    International Nuclear Information System (INIS)

    Ishikawa, Nao; Tagami, Keiko; Uchida, Shigeo

    2009-01-01

    Soil-to-plant transfer factor (TF) is one of the important parameters in radiation dose assessment models for the environmental transfer of radionuclides. Since TFs are affected by several factors, including the radionuclide, plant species and soil properties, development of a method for estimating TF from some soil and plant properties would be useful. In this study, we took a statistical approach to estimating the TF of stable strontium (TF_Sr), used as an analogue of 90Sr, from selected soil properties and element concentrations in plants. We collected the plant and soil samples used for the study from 142 agricultural fields throughout Japan. We applied a multiple linear regression analysis in order to obtain an empirical equation to estimate TF_Sr. TF_Sr could be estimated from the Sr concentration in soil (C_Sr,soil) and the Ca concentration in the crop (C_Ca,crop) using the following equation: log TF_Sr = -0.88·log C_Sr,soil + 0.93·log C_Ca,crop - 2.53. Then, we replaced our data with Ca concentrations in crops from a food composition database compiled by the Japanese government. Finally, we predicted TF_Sr using the Sr concentration in soil from our data and the Ca concentration in crops from the food composition database. (author)
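    The reported regression translates directly into code; the small function below assumes base-10 logarithms and concentrations expressed in the units used by the authors, and the example values are hypothetical.

```python
import math

def estimate_tf_sr(c_sr_soil, c_ca_crop):
    """Soil-to-plant transfer factor of Sr from the reported regression:
    log TF_Sr = -0.88*log(C_Sr,soil) + 0.93*log(C_Ca,crop) - 2.53 (base-10 logs assumed)."""
    log_tf = -0.88 * math.log10(c_sr_soil) + 0.93 * math.log10(c_ca_crop) - 2.53
    return 10 ** log_tf

# Hypothetical example: 100 mg/kg Sr in soil and 500 mg/kg Ca in the crop.
print(estimate_tf_sr(100.0, 500.0))
```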

  3. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
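    A minimal Gauss-Newton iteration of the kind discussed above is sketched below; the exponential model, data and starting values are hypothetical, and a simple step-halving safeguard is added for stability.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=30):
    """Gauss-Newton for nonlinear least squares: solve J'J delta = -J'r each iteration."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        step = 1.0
        # Step halving keeps the iteration stable if the full step overshoots.
        while np.linalg.norm(residual(theta + step * delta)) > np.linalg.norm(r) and step > 1e-4:
            step *= 0.5
        theta = theta + step * delta
    return theta

# Example: fit y = a * exp(b * x) to noisy synthetic data.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + rng.normal(scale=0.05, size=x.size)

residual = lambda th: th[0] * np.exp(th[1] * x) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * x), th[0] * x * np.exp(th[1] * x)])

print(gauss_newton(residual, jacobian, [1.0, 1.0]))   # should approach (2.0, 1.5)
```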

  4. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

    Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the models can be estimated by least-squares, maximum-likelihood, or Yule-Walker method. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior, in most cases
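    As a rough stand-in for the paper's Bayesian order selection, the sketch below approximates each AR(p) model's marginal likelihood with a BIC-style (Laplace-type) penalty under a uniform prior on the order; it illustrates the idea of a posterior over orders but does not reproduce the authors' joint-prior derivation.

```python
import numpy as np

def ar_order_posterior(y, max_order=8):
    """Approximate posterior probabilities over AR order via a BIC-style approximation."""
    y = np.asarray(y, dtype=float)
    n = y.size
    log_ml = []
    for p in range(1, max_order + 1):
        # Conditional least-squares fit of AR(p): rows are [y_{t-1}, ..., y_{t-p}].
        Y = y[p:]
        X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ coef
        sigma2 = np.mean(resid ** 2)
        loglik = -0.5 * Y.size * (np.log(2 * np.pi * sigma2) + 1)
        log_ml.append(loglik - 0.5 * (p + 1) * np.log(Y.size))   # BIC-style penalty
    log_ml = np.array(log_ml)
    post = np.exp(log_ml - log_ml.max())
    return post / post.sum()

# Simulated AR(2) series; the posterior mass should concentrate near order 2.
rng = np.random.default_rng(3)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print(np.round(ar_order_posterior(y), 3))
```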

  5. A hybrid system approach to airspeed, angle of attack and sideslip estimation in Unmanned Aerial Vehicles

    KAUST Repository

    Shaqura, Mohammad

    2015-06-01

    Fixed wing Unmanned Aerial Vehicles (UAVs) are an increasingly common sensing platform, owing to their key advantages: speed, endurance and ability to explore remote areas. While these platforms are highly efficient, they cannot easily be equipped with the air data sensors commonly found on their larger scale manned counterparts. Indeed, such sensors are bulky, expensive and severely reduce the payload capability of the UAVs. In consequence, UAV controllers (humans or autopilots) have little information on the actual mode of operation of the wing (normal, stalled, spin), which can cause catastrophic losses of control when flying in turbulent weather conditions. In this article, we propose an air parameter estimation scheme that can run in real time on commercial, low-power autopilots. The computational method is based on a hybrid decomposition of the modes of operation of the UAV. A Bayesian approach is considered for estimation, in which the estimated airspeed, angle of attack and sideslip are described statistically. An implementation on a UAV is presented, and the performance and computational efficiency of this method are validated using hardware in the loop (HIL) simulation and experimental flight data and compared with classical Extended Kalman Filter estimation. Our benchmark tests show that this method is faster than the EKF by up to two orders of magnitude. © 2015 IEEE.

  6. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetry domains which resemble realistic near-shore features. We investigate the accuracy of the analytical runup formulae with respect to variations in fault source parameters and near-shore bathymetric features. To do this, we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  7. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms

    Directory of Open Access Journals (Sweden)

    Bangyan Zhu

    2016-07-01

    Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS.
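    The core of a phase-based correction is the block-wise phase-elevation ratio; the sketch below estimates it with a generic robust (soft-L1) least-squares fit, leaving out the paper's band-filtering and multi-weighting steps. All inputs are simulated.

```python
import numpy as np
from scipy.optimize import least_squares

def phase_elevation_ratio(phase, height):
    """Robustly fit phase = k * height + offset within one block and return k (rad/m)."""
    def residual(params):
        k, offset = params
        return phase - (k * height + offset)
    fit = least_squares(residual, x0=[0.0, 0.0], loss="soft_l1", f_scale=1.0)
    return fit.x[0]

# Hypothetical block: a stratified delay of 0.02 rad/m plus noise and a few outliers.
rng = np.random.default_rng(4)
h = rng.uniform(0, 800, 500)                      # topographic heights (m) at PS pixels
phi = 0.02 * h + rng.normal(scale=0.5, size=h.size)
phi[:10] += 8.0                                   # outliers (e.g. deformation or unwrapping errors)
print(phase_elevation_ratio(phi, h))
```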

  8. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  9. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results: We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
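    A generic illustration of a metamorphic relation is given below; the toy "alignment scorer" and the reverse-complement relation are hypothetical and are not taken from GNLab or SeqMap.

```python
# Metamorphic relation (MR): for this toy scorer, reverse-complementing both input
# sequences should leave the alignment score unchanged. A violation would signal a fault
# without needing to know the "correct" score for any individual input.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def align_score(a: str, b: str) -> int:
    """Toy stand-in for the program under test: counts positional matches."""
    return sum(x == y for x, y in zip(a, b))

def test_reverse_complement_mr(a: str, b: str) -> bool:
    """Metamorphic test: the follow-up output must equal the source output."""
    return align_score(a, b) == align_score(reverse_complement(a), reverse_complement(b))

print(test_reverse_complement_mr("ACGTACGT", "ACGAACGT"))   # True if the MR holds
```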

  10. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics, such as recurrence interval; and are a function of watershed characteristics, such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maxima, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.

  11. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    Science.gov (United States)

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C(-1) on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach

  12. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    Science.gov (United States)

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The
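    The incubation-constrained trajectories described above can be illustrated with a first-order, three-pool decomposition scaled by a Q10 temperature function; the pool fractions, base rates and Q10 in the sketch below are hypothetical and are not the PInc-PanTher values.

```python
import numpy as np

# Hypothetical pool structure: fast, slow and passive fractions with base decay rates
# at a 5 degC reference temperature, scaled by a Q10 factor once the soil has thawed.
POOL_FRACTIONS = np.array([0.02, 0.18, 0.80])
BASE_RATES_PER_YR = np.array([2.0, 0.1, 0.002])
Q10 = 2.5
T_REF = 5.0

def cumulative_c_loss(c_stock_pg, soil_temp_c, years):
    """Cumulative C loss (Pg) from an initially frozen stock after `years` of thaw."""
    k = BASE_RATES_PER_YR * Q10 ** ((soil_temp_c - T_REF) / 10.0)
    pools = c_stock_pg * POOL_FRACTIONS
    return float(np.sum(pools * (1.0 - np.exp(-k * years))))

# Example: 100 Pg of newly thawed C held at 8 degC for 90 years.
print(cumulative_c_loss(100.0, 8.0, 90.0))
```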

  13. Approaching bathymetry estimation from high resolution multispectral satellite images using a neuro-fuzzy technique

    Science.gov (United States)

    Corucci, Linda; Masini, Andrea; Cococcioni, Marco

    2011-01-01

    This paper addresses bathymetry estimation from high resolution multispectral satellite images by proposing an accurate supervised method, based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using truth data. Performance is studied in different realistic situations of in situ data availability. The method allows to achieve a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean STD of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.

  14. Estimation of macroscopic elastic characteristics for hierarchical anisotropic solids based on probabilistic approach

    Science.gov (United States)

    Smolina, Irina Yu.

    2015-10-01

    The mechanical properties of a cable are of great importance in the design and strength calculation of flexible cables. The problem of determining the elastic properties and rigidity characteristics of a cable modeled by an anisotropic helical elastic rod is considered. These characteristics are calculated indirectly by means of parameters obtained from statistical processing of experimental data. These parameters are considered as random quantities. Taking into account the probabilistic nature of these parameters, formulas for estimating the macroscopic elastic moduli of a cable are obtained. Expressions for calculating the macroscopic flexural rigidity, shear rigidity and torsional rigidity using the macroscopic elastic characteristics obtained before are presented. Statistical estimates of the rigidity characteristics of some cable grades are given. A comparison with the characteristics obtained on the basis of a deterministic approach is also provided.

  15. Estimation of the neuronal activation using fMRI data: An observer-based approach

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2013-06-01

    This paper deals with the estimation of the neuronal activation and some unmeasured physiological information using the Blood Oxygenation Level Dependent (BOLD) signal measured using functional Magnetic Resonance Imaging (fMRI). We propose to use an observer-based approach applied to the balloon hemodynamic model. The latter describes the relation between the neural activity and the BOLD signal. The balloon model can be expressed in a nonlinear state-space representation where the states, the parameters and the input (neuronal activation), are unknown. This study focuses only on the estimation of the hidden states and the neuronal activation. The model is first linearized around the equilibrium and an observer is applied to this linearized version. Numerical results performed on synthetic data are presented.

  16. An EKF-based approach for estimating leg stiffness during walking.

    Science.gov (United States)

    Ochoa-Diaz, Claudia; Menegaz, Henrique M; Bó, Antônio P L; Borges, Geovany A

    2013-01-01

    Spring-like behavior is an inherent condition for human walking and running. Since leg stiffness k(leg) is a parameter that cannot be directly measured, many techniques have been proposed in order to estimate it, most of them using force data. This paper addresses the problem using an Extended Kalman Filter (EKF) based on the Spring-Loaded Inverted Pendulum (SLIP) model. The formulation of the filter uses only the Center of Mass (CoM) position and velocity as measurement information; no a priori information about the stiffness value is assumed. Simulation results show that the EKF-based approach can generate a reliable stiffness estimation for walking.

  17. Parameter estimation of an ARMA model for river flow forecasting using goal programming

    Science.gov (United States)

    Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene

    2006-11-01

    River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the most famous techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. The results, when compared with the usual method of maximum likelihood estimation, were favorable with respect to the newly proposed algorithm.

  18. New Approaches for Very Large-Scale Integer Programming

    Science.gov (United States)

    2016-06-24

    The focus of this project is new computational ... heuristics for integer programs in order to rapidly improve dual bounds. 2. Choosing good branching variables in branch-and-bound algorithms for MIP. 3. ... Keywords: integer programming, algorithms, parallel processing, machine learning, heuristics.

  19. Creating a foundation for a synergistic approach to program management

    Science.gov (United States)

    Knoll, Karyn T.

    1992-01-01

    In order to accelerate the movement of humans into space within reasonable budgetary constraints, NASA must develop an organizational structure that will allow the agency to efficiently use all the resources it has available for the development of any program the nation decides to undertake. This work considers the entire set of tasks involved in the successful development of any program. Areas that hold the greatest promise of accelerating programmatic development and/or increasing the efficiency of the use of available resources by being dealt with in a centralized manner rather than being handled by each program individually are identified. Using this information, an agency organizational structure is developed that will allow NASA to promote interprogram synergisms. In order for NASA to efficiently manage its programs in a manner that will allow programs to benefit from one another and thereby accelerate the movement of humans into space, several steps must be taken. First, NASA must develop an organizational structure that will allow potential interprogram synergisms to be identified and promoted. Key features of the organizational structure are recommended in this paper. Second, NASA must begin to develop the requirements for a program in a manner that will promote overall space program goals rather than achieving only the goals that apply to the program for which the requirements are being developed. Finally, NASA must consider organizing the agency around the functions required to support NASA's goals and objectives rather than around geographic locations.

  20. An Approach to Effortless Construction of Program Animations

    Science.gov (United States)

    Velazquez-Iturbide, J. Angel; Pareja-Flores, Cristobal; Urquiza-Fuentes, Jaime

    2008-01-01

    Program animation systems have not been as widely adopted by computer science educators as we might expect from the firm belief that they can help in enhancing computer science education. One of the most notable obstacles to their adoption is the considerable effort that the production of program animations represents for the instructor. We…

  1. Prevalent Approaches to Professional Development in State 4-H Programs

    Science.gov (United States)

    Smith, Martin H.; Worker, Steven M.; Schmitt-McQuitty, Lynn; Meehan, Cheryl L.; Lewis, Kendra M.; Schoenfelder, Emily; Brian, Kelley

    2017-01-01

    High-quality 4-H programming requires effective professional development of educators. Through a mixed methods study, we explored professional development offered through state 4-H programs. Survey results revealed that both in-person and online delivery modes were used commonly for 4-H staff and adult volunteers; for teen volunteers, in-person…

  2. Winning One Program at a Time: A Systemic Approach

    Science.gov (United States)

    Schultz, Adam; Zimmerman, Kay

    2016-01-01

    Many universities are missing an opportunity to focus student recruitment marketing efforts and budget at the program level, which can offer lower-priced advertising opportunities with higher conversion rates than traditional university-level marketing initiatives. At NC State University, we have begun to deploy a scalable, low-cost, program level…

  3. Estimating the cost of improving quality in electricity distribution: A parametric distance function approach

    International Nuclear Information System (INIS)

    Coelli, Tim J.; Gautier, Axel; Perelman, Sergio; Saplacan-Pop, Roxana

    2013-01-01

    The quality of electricity distribution is being more and more scrutinized by regulatory authorities, with explicit reward and penalty schemes based on quality targets having been introduced in many countries. It is then of prime importance to know the cost of improving quality for a distribution system operator. In this paper, we focus on one dimension of quality, the continuity of supply, and we estimate the cost of preventing power outages. To that end, we make use of the parametric distance function approach, assuming that outages enter the firm's production set as an input, an imperfect substitute for maintenance activities and capital investment. This allows us to identify the sources of technical inefficiency and the underlying trade-off faced by operators between quality and other inputs and costs. For this purpose, we use panel data on 92 electricity distribution units operated by ERDF (Electricité de France - Réseau Distribution) in the 2003–2005 financial years. Assuming a multi-output multi-input translog technology, we estimate that the cost of preventing one interruption is equal to 10.7€ for an average DSO. Furthermore, as one would expect, marginal quality improvements tend to be more expensive as quality itself improves. - Highlights: ► We estimate the implicit cost of outages for the main distribution company in France. ► For this purpose, we make use of a parametric distance function approach. ► Marginal quality improvements tend to be more expensive as quality itself improves. ► The cost of preventing one interruption varies from 1.8 € to 69.2 € (2005 prices). ► We estimate that, on average, it lies 33% above the regulated price of quality.

  4. Estimating a WTP-based value of a QALY: the 'chained' approach.

    Science.gov (United States)

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in development of more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses achieved. Estimates of the value of a QALY varied across questions and were, if anything, on the low side with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
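
    As a purely illustrative piece of arithmetic (the numbers are invented, not EuroVaQ results), the chained logic can be reduced to: elicit the utility of a health state, convert its duration into a QALY loss, and divide the stated WTP to avoid the state by that loss.

```python
# Illustrative "chained" WTP-per-QALY arithmetic; all numbers are made up for the example.
u_state   = 0.70      # utility of the health state elicited by TTO/SG (1 = full health)
duration  = 0.25      # duration of the state in years (e.g. three months)
qaly_loss = (1.0 - u_state) * duration        # QALYs lost by experiencing the state
wtp_avoid = 1500.0    # stated willingness to pay to avoid that state (euros)

wtp_per_qaly = wtp_avoid / qaly_loss          # risk-based variants also divide by the risk reduction
print(f"QALY loss = {qaly_loss:.3f}, implied WTP per QALY = {wtp_per_qaly:,.0f}")
```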

  5. Fusion material development program in the broader approach activities

    Energy Technology Data Exchange (ETDEWEB)

    Nishitani, T. [Directorates of Fusion Energy Research, Japan Atomic Energy Agency, Naka, Ibaraki (Japan); Tanigawa, H.; Jitsukawa, S. [Japan Atomic Energy Agency, Tokai-mura, Naka-gun, Ibaraki-ken (Japan); Hayashi, K.; Takatsu, H. [Fusion Research and Development Directorate, Japan Atomic Energy Agency, Ibaraki-ken (Japan); Yamanishi, T. [Tritium Process Laboratory, Japan Atomic Energy Research Institute, Tokai-mura, Ibaraki-ken (Japan); Tsuchiya, K. [Directorates of Fusion Energy Research, JAEA, Higashi-ibaraki-gun, Ibaraki-ken (Japan); Moeslang, A. [Forschungszentrum Karlsruhe GmbH, FZK, Karlsruhe (Germany); Baluc, N. [EPFL - Ecole Polytechnique Federale de Lausanne, Association Euratom-Confederation Suisse, CRPP, PPB, Lausanne (Switzerland); Pizzuto, A. [ENEA CR Frascati, Frascati (Italy); Hodgson, E.R. [CIEMAT - Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Association Euratom-CIEMAT, Madrid (Spain); Lasser, R.; Gasparotto, M. [EFDA CSU Garching (Germany)

    2007-07-01

    Full text of publication follows: The world fusion community is now launching construction of ITER, the first nuclear-grade fusion machine in the world. In parallel with the ITER program, Broader Approach (BA) activities have been initiated by the EU and Japan, mainly at the Rokkasho BA site in Japan. The BA activities include the International Fusion Materials Irradiation Facility-Engineering Validation and Engineering Design Activities (IFMIF-EVEDA), the International Fusion Energy Research Center (IFERC), and the Satellite Tokamak. IFERC consists of three sub-projects: a DEMO Design and R&D Coordination Center, a Computational Simulation Center, and an ITER Remote Experimentation Center. Technical R&D, mainly on fusion materials, will be implemented as part of the DEMO Design and R&D Coordination Center. Based on the common interest of each party toward DEMO, R&D topics on (a) reduced-activation ferritic/martensitic (RAFM) steels as a DEMO blanket structural material, (b) SiCf/SiC composites, (c) advanced tritium breeders and neutron multipliers for DEMO blankets, and (d) tritium technology were selected and assessed by European and Japanese experts. In the R&D on the RAFM steels, the fabrication technology, techniques to incorporate the fracture/rupture properties of the irradiated materials, and methods to predict the deformation and fracture behavior of structures under irradiation will be investigated. For SiCf/SiC composites, standard methods to evaluate high-temperature and lifetime properties will be developed. Not only for SiCf/SiC but also for related ceramics, physical and chemical properties such as He and H permeability and absorption will be investigated under irradiation. As the advanced tritium breeder R&D, Japan and the EU plan to establish production techniques for advanced breeder pebbles of Li2TiO3 and Li4SiO4, respectively. Also, physical, chemical, and mechanical properties of the produced breeder pebbles will be investigated. For the

  6. Estimate of the area occupied by reforestation programs in Rio de Janeiro state

    Directory of Open Access Journals (Sweden)

    Hugo Barbosa Amorim

    2012-03-01

    Full Text Available This study was based on a preliminary survey and inventory of existing reforestation programs in Rio de Janeiro state, using geoprocessing techniques and field data collection. The reforested area was found to occupy 18,426.96 ha, which amounts to 0.42% of the territory of the state. Most of the reforested area consists of eucalyptus (98%), followed by pine plantations (0.8%), and the remainder is distributed among 10 other species. The Médio Paraíba region contributes the most to the reforested area of the state (46.6%). The estimated volume of eucalyptus timber was nearly two million cubic meters. This study helped crystallize the perception, already held by those working in the forestry sector of Rio de Janeiro state, that the planted area and stock of reforestation timber in the state are still incipient.

  7. TRAC-PF1: an advanced best-estimate computer program for pressurized water reactor analysis

    International Nuclear Information System (INIS)

    Liles, D.R.; Mahaffy, J.H.

    1984-02-01

    The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos National Laboratory to provide advanced best-estimate predictions of postulated accidents in light water reactors. The TRAC-PF1 program provides this capability for pressurized water reactors and for many thermal-hydraulic experimental facilities. The code features either a one-dimensional or a three-dimensional treatment of the pressure vessel and its associated internals; a two-phase, two-fluid nonequilibrium hydrodynamics model with a noncondensable gas field; flow-regime-dependent constitutive equation treatment; optional reflood tracking capability for both bottom flood and falling-film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. This report describes the thermal-hydraulic models and the numerical solution methods used in the code. Detailed programming and user information also are provided

  8. Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors.

    Science.gov (United States)

    Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin

    2018-04-03

    Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search for parameters, which improves the robustness of the method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons.
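
    The adaptive parameter search and semantic-edge costs of the paper are not described in enough detail in the abstract to reproduce, so the sketch below shows only the classic scanline dynamic-programming backbone that such methods build on: a per-pixel matching cost plus a fixed smoothness penalty on disparity changes, minimized by DP along one image row. The cost function, penalty weight and toy data are assumptions.

```python
import numpy as np

# Minimal scanline dynamic-programming disparity sketch (fixed smoothness penalty lam).
def scanline_disparity(left_row, right_row, max_disp=16, lam=0.1):
    W = len(left_row)
    D = max_disp + 1
    # unary matching cost: absolute intensity difference (invalid shifts get a big cost)
    cost = np.full((W, D), 1e3)
    for d in range(D):
        cost[d:, d] = np.abs(left_row[d:] - right_row[:W - d])

    # DP accumulation with a penalty on disparity changes between neighbouring pixels
    acc = np.zeros_like(cost)
    back = np.zeros((W, D), dtype=int)
    acc[0] = cost[0]
    penalty = lam * np.abs(np.arange(D)[:, None] - np.arange(D)[None, :])
    for x in range(1, W):
        total = acc[x - 1][None, :] + penalty          # total[d, d'] for transition d' -> d
        back[x] = np.argmin(total, axis=1)
        acc[x] = cost[x] + total[np.arange(D), back[x]]

    # backtrack the minimum-cost disparity path
    disp = np.zeros(W, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(W - 2, -1, -1):
        disp[x] = back[x + 1][disp[x + 1]]
    return disp

# toy example: the right row is the left row shifted by 3 pixels
left = np.sin(np.linspace(0, 6 * np.pi, 200)) * 100 + 100
right = np.roll(left, -3)
print(scanline_disparity(left, right)[10:20])   # close to 3 away from the borders
```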

  9. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  10. EFFAIR: a computer program for estimating the dispersion of atmospheric emissions from a nuclear site

    International Nuclear Information System (INIS)

    Dormuth, K.W.; Lyon, R.B.

    1978-11-01

    Analysis of the transport of material through the turbulent atmospheric boundary layer is an important part of environmental impact assessments for nuclear plants. Although this is a complex phenomenon, practical estimates of ground level concentrations downwind of release are usually obtained using a simple Gaussian formula whose coefficients are obtained from empirical correlations. Based on this formula, the computer program EFFAIR has been written to provide a flexible tool for atmospheric dispersion calculations. It is considered appropriate for calculating dilution factors at distances of 10^2 to 10^4 metres from an effluent source if reflection from the inversion lid is negligible in that range. (author)
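
    A minimal sketch of the kind of Gaussian formula EFFAIR is built around: ground-level concentration on the plume axis with full reflection from the ground, evaluated over the 10^2 to 10^4 m range mentioned above. The sigma_y/sigma_z power laws and all numbers are generic illustrative choices, not the empirical correlations used in EFFAIR.

```python
import numpy as np

def ground_level_concentration(Q, u, x, y=0.0, H=30.0):
    """Gaussian plume, ground level with full ground reflection.
    Q release rate [g/s], u wind speed [m/s], x downwind and y crosswind distance [m],
    H effective release height [m].  Returns concentration [g/m^3]."""
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5      # assumed dispersion coefficients
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

for x in (100.0, 1000.0, 10000.0):                     # the 10^2 to 10^4 m range
    chi_over_Q = ground_level_concentration(Q=1.0, u=3.0, x=x)   # equals C/Q for Q = 1 g/s
    print(f"x = {x:7.0f} m   chi/Q = {chi_over_Q:.3e} s/m^3")
```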

  11. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  12. On the plausibility of socioeconomic mortality estimates derived from linked data: a demographic approach.

    Science.gov (United States)

    Lerch, Mathias; Spoerri, Adrian; Jasilionis, Domantas; Viciana Fernandèz, Francisco

    2017-07-14

    Reliable estimates of mortality according to socioeconomic status play a crucial role in informing the policy debate about social inequality, social cohesion, and exclusion as well as about the reform of pension systems. Linked mortality data have become a gold standard for monitoring socioeconomic differentials in survival. Several approaches have been proposed to assess the quality of the linkage, in order to avoid the misclassification of deaths according to socioeconomic status. However, the plausibility of mortality estimates has never been scrutinized from a demographic perspective, and the potential problems with the quality of the data on the at-risk populations have been overlooked. Using indirect demographic estimation (i.e., the synthetic extinct generation method), we analyze the plausibility of old-age mortality estimates according to educational attainment in four European data contexts with different quality issues: deterministic and probabilistic linkage of deaths, as well as differences in the methodology of the collection of educational data. We evaluate whether the at-risk population according to educational attainment is misclassified and/or misestimated, correct these biases, and estimate the education-specific linkage rates of deaths. The results confirm a good linkage of death records within different educational strata, even when probabilistic matching is used. The main biases in mortality estimates concern the classification and estimation of the person-years of exposure according to educational attainment. Changes in the census questions about educational attainment led to inconsistent information over time, which misclassified the at-risk population. Sample censuses also misestimated the at-risk populations according to educational attainment. The synthetic extinct generation method can be recommended for quality assessments of linked data because it is capable not only of quantifying linkage precision, but also of tracking problems in

  13. Use of risk projection models to estimate mortality and incidence from radiation-induced breast cancer in screening programs

    International Nuclear Information System (INIS)

    Ramos, M; Ferrer, S; Villaescusa, J I; Verdu, G; Salas, M D; Cuevas, M D

    2005-01-01

    The authors report on a method to calculate radiological risks, applicable to breast screening programs and other controlled medical exposures to ionizing radiation. In particular, it has been applied to make a risk assessment in the Valencian Breast Cancer Early Detection Program (VBCEDP) in Spain. This method is based on a parametric approach, through Markov processes, of hazard functions for radio-induced breast cancer incidence and mortality, with mean glandular breast dose, attained age and age-at-exposure as covariates. Excess relative risk functions of breast cancer mortality have been obtained from two different case-control studies exposed to ionizing radiation, with different follow-up time: the Canadian Fluoroscopy Cohort Study (1950-1987) and the Life Span Study (1950-1985 and 1950-1990), whereas relative risk functions for incidence have been obtained from the Life Span Study (1958-1993), the Massachusetts tuberculosis cohorts (1926-1985 and 1970-1985), the New York post-partum mastitis patients (1930-1981) and the Swedish benign breast disease cohort (1958-1987). Relative risks from these cohorts have been transported to the target population undergoing screening in the Valencian Community, a region in Spain with about four and a half million inhabitants. The SCREENRISK software has been developed to estimate radiological detriments in breast screening. Some hypotheses corresponding to different screening conditions have been considered in order to estimate the total risk associated with a woman who takes part in all screening rounds. In the case of the VBCEDP, the total radio-induced risk probability for fatal breast cancer is in a range between [5 x 10^-6, 6 x 10^-4] versus the natural rate of dying from breast cancer in the Valencian Community which is 9.2 x 10^-3. The results show that these indicators could be included in quality control tests and could be adequate for making comparisons between several screening programs.

  14. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
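
    The two-phase random sampling algorithm itself is not described in the abstract, so the following is only a plain pseudoprofile-style bootstrap sketch of the underlying idea: resample one plasma and one tissue concentration per time point, form profiles, and take the ratio of their trapezoidal AUCs. The data, number of replicates and interval choice are invented for illustration.

```python
import numpy as np

# Sampling-based tissue-to-plasma ratio from sparse destructive sampling
# (one plasma and one tissue value per animal/time point); data are made up.
rng = np.random.default_rng(1)
times = np.array([0.5, 1, 2, 4, 8, 24])                      # h
# rows: animals sampled at each time point (3 per time point), columns: time points
plasma = np.array([[10.2, 8.1, 6.3, 3.9, 1.8, 0.4],
                   [ 9.5, 7.6, 5.9, 4.2, 1.6, 0.5],
                   [11.0, 8.4, 6.8, 3.5, 2.0, 0.3]])
tissue = plasma * 2.5 * rng.normal(1.0, 0.15, plasma.shape)  # "true" ratio ~2.5

def auc(profile):
    return np.trapz(profile, times)                          # linear trapezoidal AUC

ratios = []
for _ in range(2000):
    # draw one plasma and one tissue concentration independently at every time point
    p_profile = plasma[rng.integers(0, plasma.shape[0], times.size), np.arange(times.size)]
    t_profile = tissue[rng.integers(0, tissue.shape[0], times.size), np.arange(times.size)]
    ratios.append(auc(t_profile) / auc(p_profile))

ratios = np.array(ratios)
lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"tissue-to-plasma ratio: {ratios.mean():.2f} (95% interval {lo:.2f}-{hi:.2f})")
```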

  15. Estimating Typhoon Rainfall over Sea from SSM/I Satellite Data Using an Improved Genetic Programming

    Science.gov (United States)

    Yeh, K.; Wei, H.; Chen, L.; Liu, G.

    2010-12-01

    This paper proposes an improved multi-run genetic programming (GP) and applies it to predict rainfall using meteorological satellite data. GP is a well-known evolutionary programming and data mining method, used to automatically discover the complex relationships among nonlinear systems. The main advantage of GP is to optimize appropriate types of function and their associated coefficients simultaneously. This study makes an improvement to enhance the ability to escape local optima during the optimization procedure. The GP is run several times in succession, replacing the terminal nodes of the next run with the best solution of the current run. The resulting model improves GP, obtaining a highly nonlinear mathematical equation to estimate the rainfall. In the case study, the improved GP is combined with SSM/I satellite data to establish a suitable method for estimating rainfall at the sea surface during typhoon periods. These estimated rainfalls are then verified with data from four rainfall stations located at Peng-Jia-Yu, Don-Gji-Dao, Lan-Yu, and Green Island, which are four small islands around Taiwan. From the results, the improved GP can generate a sophisticated and accurate nonlinear mathematical equation through a two-run learning procedure which outperforms traditional multiple linear regression, empirical equations and a back-propagated network.

  16. Application of Best Estimate Approach for Modelling of QUENCH-03 and QUENCH-06 Experiments

    Directory of Open Access Journals (Sweden)

    Tadas Kaliatka

    2016-04-01

    In this article, the QUENCH-03 and QUENCH-06 experiments are modelled using the ASTEC and RELAP/SCDAPSIM codes. For the uncertainty and sensitivity analysis, the SUSA3.5 and SUNSET tools were used. The article demonstrates that, by applying the best estimate approach, it is possible to develop a basic QUENCH input deck and two sets of input parameters covering the maximal and minimal ranges of the uncertainties. These allow different (but similar in nature) tests to be simulated, with calculation results obtained together with their evaluated range of uncertainties.

  17. Estimation of the gender pay gap in London and the UK - an econometric approach

    OpenAIRE

    Margarethe Theseira; Leticia Veruete-McKay

    2005-01-01

    We estimate the gender pay gap in London and the UK based on Labour Force Survey data for 2002/03. Our approach decomposes the gap in mean wages between men and women into two parts: (a) differences in individual and job characteristics between men and women (such as age, number of children, qualification, ethnicity, region of residence, working in the public or private sector, working part-time or full-time, industry, occupation and size of company) and (b) unequal treatment and/or unexplained factors. S...
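
    The two-part decomposition described above is essentially an Oaxaca-Blinder decomposition; the sketch below shows it on simulated data (the covariates, coefficients and group differences are made up, not Labour Force Survey estimates).

```python
import numpy as np

# Minimal Oaxaca-Blinder-style decomposition of a mean log-wage gap into an
# "explained" part (characteristics) and an "unexplained" part (unequal treatment).
rng = np.random.default_rng(2)

def simulate(n, beta, intercept, p_fulltime):
    X = np.column_stack([np.ones(n),                               # constant
                         rng.normal(40, 10, n),                    # age
                         rng.integers(0, 2, n),                    # degree (0/1)
                         (rng.random(n) < p_fulltime).astype(float)])  # full-time (0/1)
    y = intercept + X[:, 1:] @ beta + rng.normal(0, 0.3, n)        # log hourly wage
    return X, y

X_m, y_m = simulate(3000, np.array([0.015, 0.30, 0.25]), 1.9, p_fulltime=0.9)   # men
X_f, y_f = simulate(3000, np.array([0.012, 0.28, 0.22]), 1.8, p_fulltime=0.6)   # women

b_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)            # OLS within each group
b_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)

gap = y_m.mean() - y_f.mean()
explained = (X_m.mean(axis=0) - X_f.mean(axis=0)) @ b_m    # characteristics at male "prices"
unexplained = X_f.mean(axis=0) @ (b_m - b_f)               # unequal treatment / residual part
print(f"total gap {gap:.3f} = explained {explained:.3f} + unexplained {unexplained:.3f}")
```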

  18. Calculation of weighted averages approach for the estimation of Ping tolerance values

    Science.gov (United States)

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
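
    A minimal sketch of the weighted-averaging idea: each taxon's score is the abundance-weighted mean of a site-level pollution gradient, rescaled to the 0-10 range. For simplicity a single synthetic gradient stands in for the six chemical constituents used in the paper.

```python
import numpy as np

# Abundance-weighted average ("weighted averaging") of a pollution gradient per taxon.
rng = np.random.default_rng(3)
n_sites, n_taxa = 30, 8
gradient = np.sort(rng.random(n_sites))                 # 0 = clean, 1 = polluted
# taxa with higher "true tolerance" are more abundant at polluted sites
true_tol = np.linspace(0.1, 0.9, n_taxa)
abundance = rng.poisson(20 * np.exp(-((gradient[:, None] - true_tol[None, :]) ** 2) / 0.05))

# weighted average of the gradient, weights = abundances (sites x taxa)
wa = (abundance * gradient[:, None]).sum(axis=0) / abundance.sum(axis=0)
tolerance_values = np.round(10 * (wa - wa.min()) / (wa.max() - wa.min()), 1)
print(dict(zip([f"taxon_{i}" for i in range(n_taxa)], tolerance_values)))
```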

  19. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    Science.gov (United States)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  20. Approach to estimation of level of information security at enterprise based on genetic algorithm

    Science.gov (United States)

    Stepanov, L. V.; Parinov, A. V.; Korotkikh, L. P.; Koltsov, A. S.

    2018-05-01

    The article considers a way of formalizing the different types of information security threats and the vulnerabilities of an enterprise information system. Given the complexity of ensuring information security, applying an established concept and set of decisions in the sphere of information security is expedient for any newly organized system. One such approach is the genetic algorithm method. For enterprises in any field of activity, a comprehensive estimation of the level of security of information systems, taking into account the quantitative and qualitative factors that characterize the components of information security, is a relevant problem.

  1. Monitoring multiple species: Estimating state variables and exploring the efficacy of a monitoring program

    Science.gov (United States)

    Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.

    2009-01-01

    Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.

  2. Estimating landholders' probability of participating in a stewardship program, and the implications for spatial conservation priorities.

    Directory of Open Access Journals (Sweden)

    Vanessa M Adams

    Full Text Available The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements--conservation covenants and management agreements--based on payment level and proportion of properties required to be managed. We then spatially predicted landholders' probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation.

  3. A geo-informatics approach for estimating water resources management components and their interrelationships

    KAUST Repository

    Liaqat, Umar Waqas

    2016-09-21

    A remote sensing based geo-informatics approach was developed to estimate water resources management (WRM) components across a large irrigation scheme in the Indus Basin of Pakistan. The approach provides a generalized framework for estimating a range of key water management variables and provides a management tool for the sustainable operation of similar schemes globally. A focus on the use of satellite data allowed for the quantification of relationships across a range of spatial and temporal scales. Variables including actual and crop evapotranspiration, net and gross irrigation, net and gross groundwater use, groundwater recharge and net groundwater recharge were estimated and then their interrelationships explored across the Hakra Canal command area. Spatially distributed remotely sensed estimates of actual evapotranspiration (ETa) rates were determined using the Surface Energy Balance System (SEBS) model and evaluated against ground-based evaporation calculated from the advection-aridity method. Analysis of ETa simulations across two cropping seasons, referred to as Kharif and Rabi, yielded Pearson correlation (R) values of 0.69 and 0.84, Nash-Sutcliffe criterion (NSE) of 0.28 and 0.63, percentage bias of −3.85% and 10.6% and root mean squared error (RMSE) of 10.6 mm and 12.21 mm for each season, respectively. For the period of study between 2008 and 2014, it was estimated that an average of 0.63 mm day^-1 of water was supplied through canal irrigation against a crop water demand of 3.81 mm day^-1. Approximately 1.86 mm day^-1 of groundwater abstraction was estimated in the region, which contributed to fulfil the gap between crop water demand and canal water supply. Importantly, the combined canal, groundwater and rainfall sources of water only met 70% of the crop water requirements. As such, the difference between recharge and discharge showed that groundwater depletion was around −115 mm year^-1 during the six-year study period. Analysis indicated that
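
    The evaluation statistics quoted above (Pearson R, Nash-Sutcliffe efficiency, percent bias, RMSE) are standard and easy to compute; the helper below shows one common set of definitions, applied to made-up ET values (sign conventions for percent bias vary between studies).

```python
import numpy as np

# Standard goodness-of-fit metrics for comparing modelled and observed ET.
def evaluation_metrics(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]                                  # Pearson correlation
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    pbias = 100.0 * np.sum(sim - obs) / np.sum(obs)                  # percent bias (sim minus obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    return {"R": r, "NSE": nse, "PBIAS_%": pbias, "RMSE": rmse}

# toy example with made-up seasonal ET values (mm per 8-day period)
obs = np.array([22.0, 30.5, 41.2, 38.7, 27.9, 18.3])
sim = np.array([24.1, 28.9, 43.0, 36.5, 30.2, 17.1])
print(evaluation_metrics(sim, obs))
```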

  4. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques used for this purpose are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential
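
    The multiplicative-additive bias decomposition of CMA-OAS is not reproduced here; the sketch below only shows the successive-correction core that the scheme is built on: a gridded first guess nudged towards gauges by spreading gauge-minus-analysis residuals with a Gaussian kernel of shrinking radius. The 1-D toy grid, kernel radii and values are assumptions.

```python
import numpy as np

# Minimal successive-correction (objective analysis) sketch: radar field as first
# guess, corrected towards rain gauges over several passes with decreasing radii.
def successive_correction(grid_xy, background, gauge_xy, gauge_val,
                          radii=(20e3, 10e3, 5e3)):
    analysis = background.copy()
    for R in radii:                                     # successively smaller influence radii
        # current analysis value at each gauge = kernel-weighted grid average
        w_g = np.exp(-np.sum((gauge_xy[:, None, :] - grid_xy[None, :, :]) ** 2, -1)
                     / (2 * R ** 2))                    # (n_gauges, n_grid)
        at_gauges = (w_g * analysis[None, :]).sum(1) / w_g.sum(1)
        resid = gauge_val - at_gauges                   # residuals to be spread
        # spread residuals back onto the grid with the same kernel
        analysis = analysis + (w_g * resid[:, None]).sum(0) / w_g.sum(0)
    return analysis

# toy 1-D "grid" of radar rainfall and three gauges that read higher than the radar
grid_xy = np.column_stack([np.linspace(0, 100e3, 101), np.zeros(101)])
background = np.full(101, 2.0)                          # radar first guess, mm/h
gauge_xy = np.array([[25e3, 0.0], [50e3, 0.0], [80e3, 0.0]])
gauge_val = np.array([4.0, 5.0, 3.5])
analysis = successive_correction(grid_xy, background, gauge_xy, gauge_val)
print(analysis[[25, 50, 80]])                           # pulled towards the gauge values
```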

  5. Indirect approach for estimation of forest degradation in non-intact dry forest

    DEFF Research Database (Denmark)

    Dons, Klaus; Bhattarai, Sushma; Meilby, Henrik

    2016-01-01

    Background: Implementation of REDD+ requires measurement and monitoring of carbon emissions from forest degradation in developing countries. Dry forests cover about 40% of the total tropical forest area, are home to large populations, and hence often display high disturbance levels. They are susceptible to gradual but persistent degradation, and monitoring needs to be low cost due to the low potential benefit from carbon accumulation per unit area. Indirect remote sensing approaches may provide estimates of subsistence wood extraction, but sampling of biomass loss produces zero-inflated continuous data that challenges conventional statistical approaches. We introduce the use of Tweedie Compound Poisson distributions from the exponential dispersion family with Generalized Linear Models (CPGLM) to predict biomass loss as a function of distance to nearest settlement in two forest areas in Tanzania...
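
    A sketch of the kind of compound-Poisson (Tweedie) GLM the abstract abbreviates as CPGLM, fitted with statsmodels on simulated data; the variance power of 1.6 is an assumption (any value strictly between 1 and 2 gives a Poisson-gamma mixture with a point mass at zero), as are the simulated distances and losses.

```python
import numpy as np
import statsmodels.api as sm

# Tweedie GLM of biomass loss vs. distance to the nearest settlement (simulated data).
rng = np.random.default_rng(4)
n = 400
distance_km = rng.uniform(0, 15, n)
mu = np.exp(1.5 - 0.25 * distance_km)                     # expected biomass loss (t/ha)
# crude compound Poisson-gamma simulation: many plots far from settlements lose nothing
n_events = rng.poisson(mu / 2.0)
biomass_loss = np.array([rng.gamma(2.0, 1.0, k).sum() for k in n_events])

X = sm.add_constant(distance_km)
model = sm.GLM(biomass_loss, X, family=sm.families.Tweedie(var_power=1.6))
result = model.fit()
print(result.params)        # intercept and (negative) distance effect on the log scale
```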

  6. A Hierarchical Approach to Persistent Scatterer Network Construction and Deformation Time Series Estimation

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2014-12-01

    Full Text Available This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using the time series of high-resolution satellite SAR images. To balance computational efficiency and solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting deformation rates of a study area. The algorithm has been tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin in China for subsidence analysis, and validated by using the ground-based leveling measurements. The experimental results indicate that the hierarchical approach can remarkably reduce computing time and memory requirements, and the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.

  7. An integrated approach to fire penetration seal program management

    International Nuclear Information System (INIS)

    Rispoli, R.D.

    1996-01-01

    This paper discusses the use of a PC-based program to facilitate the management of Entergy Operations' Arkansas Nuclear One (ANO) fire barrier penetration seal program. The computer program was developed as part of a streamlining process to consolidate all aspects of the ANO Penetration Seal Program under one system. The program tracks historical information related to each seal such as maintenance activities, design modifications and evaluations. The program is integrated with approved penetration seal design details which have been substantiated by full scale fire tests. This control feature is intended to prevent the inadvertent utilization of an unacceptable penetration detail in a field application which may exceed the parameters tested. The system is also capable of controlling the scope of the periodic surveillance of penetration seals by randomly selecting the inspection population and generating associated inspection forms. Inputs to the data base are required throughout the modification and maintenance process to ensure configuration control and maintain accurate data base information. These inputs are verified and procedurally controlled by Fire Protection Engineering (FPE) personnel. The implementation of this system has resulted in significant cost savings and has minimized the allocation of resources necessary to ensure long term program viability

  8. Reconnaissance Estimates of Recharge Based on an Elevation-dependent Chloride Mass-balance Approach

    Energy Technology Data Exchange (ETDEWEB)

    Charles E. Russell; Tim Minor

    2002-08-31

    Significant uncertainty is associated with efforts to quantify recharge in arid regions such as southern Nevada. However, accurate estimates of groundwater recharge are necessary for understanding the long-term sustainability of groundwater resources and predictions of groundwater flow rates and directions. Currently, the most widely accepted method for estimating recharge in southern Nevada is the Maxey and Eakin method. This method has been applied to most basins within Nevada and has been independently verified as a reconnaissance-level estimate of recharge through several studies. Recharge estimates derived from the Maxey and Eakin and other recharge methodologies ultimately based upon measures or estimates of groundwater discharge (outflow methods) should be augmented by a tracer-based aquifer-response method. The objective of this study was to improve an existing aquifer-response method that was based on the chloride mass-balance approach. Improvements were designed to incorporate spatial variability within recharge areas (rather than recharge as a lumped parameter), develop a more defendable lower limit of recharge, and differentiate local recharge from recharge emanating as interbasin flux. Seventeen springs, located in the Sheep Range, Spring Mountains, and on the Nevada Test Site were sampled during the course of this study and their discharge was measured. The chloride and bromide concentrations of the springs were determined. Discharge and chloride concentrations from these springs were compared to estimates provided by previously published reports. A literature search yielded previously published estimates of chloride flux to the land surface. 36Cl/Cl ratios and discharge rates of the three largest springs in the Amargosa Springs discharge area were compiled from various sources. This information was utilized to determine an effective chloride concentration for recharging precipitation and its associated uncertainty via Monte Carlo simulations
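
    At its core the chloride mass balance equates the chloride flux delivered by precipitation to the flux leaving in recharge, so R = P * Cl_p / Cl_gw. The numbers below are illustrative only, not values from the report.

```python
# Chloride mass-balance arithmetic: P * Cl_p = R * Cl_gw  =>  R = P * Cl_p / Cl_gw
P     = 300.0      # mean annual precipitation, mm/yr (illustrative)
Cl_p  = 0.4        # effective chloride concentration in precipitation (wet + dry), mg/L
Cl_gw = 12.0       # chloride concentration in groundwater / spring discharge, mg/L

recharge = P * Cl_p / Cl_gw
print(f"estimated recharge: {recharge:.1f} mm/yr "
      f"({100 * recharge / P:.1f}% of precipitation)")
```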

  9. Epidemiological and economic burden of Clostridium difficile in the United States: estimates from a modeling approach.

    Science.gov (United States)

    Desai, Kamal; Gupta, Swati B; Dubberke, Erik R; Prabhu, Vimalanand S; Browne, Chantelle; Mast, T Christopher

    2016-06-18

    Despite a large increase in Clostridium difficile infection (CDI) severity, morbidity and mortality in the US since the early 2000s, CDI burden estimates have had limited generalizability and comparability due to widely varying clinical settings, populations, or study designs. A decision-analytic model incorporating key input parameters important in CDI epidemiology was developed to estimate the annual number of initial and recurrent CDI cases, attributable and all-cause deaths, economic burden in the general population, and specific number of high-risk patients in different healthcare settings and the community in the US. Economic burden was calculated adopting a societal perspective using a bottom-up approach that identified healthcare resources consumed in the management of CDI. Annually, a total of 606,058 (439,237 initial and 166,821 recurrent) episodes of CDI were predicted in 2014: 34.3 % arose from community exposure. Over 44,500 CDI-attributable deaths in 2014 were estimated to occur. High-risk susceptible individuals representing 5 % of the total hospital population accounted for 23 % of hospitalized CDI patients. The economic cost of CDI was $5.4 billion ($4.7 billion (86.7 %) in healthcare settings; $725 million (13.3 %) in the community), mostly due to hospitalization. A modeling framework provides more comprehensive and detailed national-level estimates of CDI cases, recurrences, deaths and cost in different patient groups than currently available from separate individual studies. As new treatments for CDI are developed, this model can provide reliable estimates to better focus healthcare resources to those specific age-groups, risk-groups, and care settings in the US where they are most needed. (Trial Identifier ClinicalTrials.gov: NCT01241552).

  10. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

    Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified on two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Moreover, the genetic algorithm is employed to locate the optimal parameters for the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets.

  11. A Strategic Approach to Implementation of Medical Mentorship Programs.

    Science.gov (United States)

    Caruso, Thomas J; Steinberg, Diane H; Piro, Nancy; Walker, Kimberly; Blankenburg, Rebecca; Rassbach, Caroline; Marquez, Juan L; Katznelson, Laurence; Dohn, Ann

    2016-02-01

    Mentors influence medical trainees' experiences through career enhancement and psychosocial support, yet some trainees never receive benefits from involved mentors. Our goals were to examine the effectiveness of 2 interventions aimed at increasing the number of mentors in training programs, and to assess group differences in mentor effectiveness, the relationship between trainees' satisfaction with their programs given the presence of mentors, and the relationship between the number of trainees with mentors and postgraduate year (PGY). In group 1, a physician adviser funded by the graduate medical education department implemented mentorships in 6 residency programs, while group 2 involved a training program with funded physician mentoring time. The remaining 89 training programs served as controls. Chi-square tests were used to determine differences. Survey responses from group 1, group 2, and controls were 47 of 84 (56%), 34 of 78 (44%), and 471 of 981 (48%, P = .38), respectively. The percentages of trainees reporting a mentor in group 1, group 2, and the control group were 89%, 97%, and 79%, respectively (P = .01). There were no differences in mentor effectiveness between groups. Mentored trainees were more likely to be satisfied with their programs (P = .01) and to report that faculty supported their professional aspirations (P = .001). Across all programs, fewer first-year trainees (59%) identified a mentor compared to PGY-2 through PGY-8 trainees (84%, P program is an effective way to create an educational environment that maximizes trainees' perceptions of mentorship and satisfaction with their training programs.

  12. An approach for estimating toxic releases of H2S-containing natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Jianwen, Zhang, E-mail: zhangjw@mail.buct.edu.cn [Lab of Fluid Flow and Heat Transfer, Beijing University of Chemical Technology, Beijing 100029 (China); Institute of Safety Management, Beijing University of Chemical Technology, Beijing 100029 (China); Da, Lei [Lab of Fluid Flow and Heat Transfer, Beijing University of Chemical Technology, Beijing 100029 (China); College of Mechanical and Electrical Engineering, Beijing University of Chemical Technology, Beijing 100029 (China); Wenxing, Feng [Pipeline Research Center of PetroChina Company Limited, 51 Golden Road, Langfang 065000 (China)

    2014-01-15

    Highlights: • The behavior of H2S-containing natural gas in CFD simulations resembles that of a neutral gas. • The poisoning hazards of H2S from gas pipeline releases are successfully estimated. • An assessment method for available safe egress time is proposed. -- Abstract: China is well known to be rich in sulfurous natural gas, with huge deposits widely distributed all over the country. Due to its toxic nature, the release of hydrogen-sulfide-containing natural gas from pipelines poses serious threats to people, society and the environment around the release sources. A CFD algorithm is adopted to simulate the gas dispersion process, and the results show that a Gaussian plume model is suitable for determining the region affected by a well blowout of hydrogen-sulfide-containing natural gas. Based on the analysis of release scenarios, the present study proposes a new approach for estimating the risk of hydrogen sulfide poisoning hazards caused by hydrogen-sulfide-containing natural gas releases. Historical accident statistics from the EGIG (European Gas Pipeline Incident Data Group) and British Gas Transco are integrated into the approach. Also, the dose-load effect is introduced to describe the hazard in terms of two essential parameters – toxic concentration and exposure time. The approach was applied to three release scenarios on the East-Sichuan Gas Transportation Project, and the individual risk and societal risk are classified and discussed. Results show that societal risk varies significantly with different factors, including population density, distance from the pipeline, operating conditions and so on. Concerning the dispersion process of the hazardous gas, available safe egress time was studied from the perspective of individual fatality risk. The present approach can provide reliable support for the safety management and maintenance of natural gas pipelines as well as evacuations that may occur after

  13. Some useful structures for categorical approach for program behavior

    Directory of Open Access Journals (Sweden)

    Viliam Slodičák

    2011-06-01

    Full Text Available The use of category theory in computer science has grown enormously in the last decade. Categories allow us to express mathematical structures in a unified way. Algebras are used for constructing basic structures used in computer programs. A program can be considered as an element of the initial algebra arising from the programming language used. In our contribution we formulate two ways of expressing algebras in categories. We also construct the codomain functor from the arrow category of algebras into the base category of sets, whose objects are also the carrier sets of the algebras. This functor expresses the relation between algebras and carrier sets.

  14. A new approach to estimate nuclide ratios from measurements with activities close to background

    International Nuclear Information System (INIS)

    Kirchner, G.; Steiner, M.; Zaehringer, M.

    2009-01-01

    Measurements of low-level radioactivity often give results of the order of the detection limit. For many applications, interest is not only in estimating activity concentrations of a single radioactive isotope, but focuses on multi-isotope analyses, which often enable inference on the source of the activity detected (e.g. from activity ratios). Obviously, such conclusions become questionable if the measurement merely gives a detection limit for a specific isotope. This is particularly relevant if the presence of an isotope, which shows a low signal only (e.g. due to a short half-life or a small transition probability), is crucial for gaining the information of interest. This paper discusses a new approach which has the potential to solve these problems. Using Bayesian statistics, a method is presented which allows statistical inference on nuclide ratios taking into account both prior knowledge and all information collected from the measurements. It is shown that our method allows quantitative conclusions to be drawn if counts of single isotopes are low or become even negative after background subtraction. Differences to the traditional statistical approach of specifying decision thresholds or detection limits are highlighted. Application of this new approach is illustrated by a number of examples of environmental low-level radioactivity measurements. The capabilities of our approach for spectrum interpretation and source identification are demonstrated with real spectra from air filters, sewage sludge and soil samples.
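
    A minimal sketch of the kind of inference described above (the paper's exact likelihood and priors are not given in the abstract): with Poisson counting statistics, a known expected background and a flat prior on the non-negative net signal, the posterior of each activity can be evaluated on a grid and the ratio distribution obtained by sampling, even when one peak is barely above background. The counts, backgrounds and prior choice are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

# Bayesian sketch for an activity ratio when one of the peaks is close to background.
def posterior_samples(n_gross, b_expected, s_max=400.0, n_samp=20000, seed=0):
    s_grid = np.linspace(0.0, s_max, 4001)
    like = poisson.pmf(n_gross, b_expected + s_grid)       # Poisson likelihood
    post = like / like.sum()                               # flat prior on s >= 0, grid-normalised
    rng = np.random.default_rng(seed)
    return rng.choice(s_grid, size=n_samp, p=post)

# isotope 1: clear signal; isotope 2: counts barely above background
s1 = posterior_samples(n_gross=250, b_expected=60.0)
s2 = posterior_samples(n_gross=68,  b_expected=60.0, seed=1)

ratio = s2 / np.maximum(s1, 1e-9)                          # posterior of the activity ratio
lo, med, hi = np.percentile(ratio, [5, 50, 95])
print(f"activity ratio s2/s1: median {med:.3f}, 90% credible interval [{lo:.3f}, {hi:.3f}]")
```

    Even when the second isotope would only yield a detection limit in the classical treatment, the posterior still provides a full (if wide) credible interval for the ratio instead of a yes/no statement.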

  15. Estimates of future discharges of the river Rhine using two scenario methodologies: direct versus delta approach

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Simulations with a hydrological model for the river Rhine for the present (1960–1989) and a projected future (2070–2099) climate are discussed. The hydrological model (RhineFlow) is driven by meteorological data from a 90-year (ensemble of three 30-year) simulation with the HadRM3H regional climate model for both present-day and future climate (A2 emission scenario). Simulation of present-day discharges is realistic provided that (1) the HadRM3H temperature and precipitation are corrected for biases, and (2) the potential evapotranspiration is derived from temperature only. Different methods are used to simulate discharges for the future climate: one is based on the direct model output of the future climate run (direct approach), while the other is based on perturbation of the present-day HadRM3H time series (delta approach). Both methods predict a similar response in the mean annual discharge, an increase of 30% in winter and a decrease of 40% in summer. However, predictions of extreme flows differ significantly, with increases of 10% in flows with a return period of 100 years in the direct approach and approximately 30% in the delta approach. A bootstrap method is used to estimate the uncertainties related to the sample size (number of years simulated) in predicting changes in extreme flows.
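
    A toy sketch of the two scenario methods being compared, for a single month of synthetic daily precipitation: the delta approach perturbs the observed series with the climate model's change factor, while the direct approach uses the (bias-corrected) future model output itself, so the two broadly agree on the mean change but can carry different variability into the extremes. All series and factors are invented for illustration.

```python
import numpy as np

# Delta versus direct scenario construction on synthetic daily precipitation (one month,
# 30 years).  None of these numbers come from the Rhine study.
rng = np.random.default_rng(5)
obs_present = rng.gamma(4.0, 1.5, 30 * 31)        # observed daily precipitation, mm
rcm_present = rng.gamma(4.0, 1.7, 30 * 31)        # RCM present-day run (biased wetter)
rcm_future  = rng.gamma(3.0, 2.8, 30 * 31)        # RCM future run: wetter mean, more variable

# delta approach: scale the observations by the relative change simulated by the RCM
change_factor = rcm_future.mean() / rcm_present.mean()
delta_series  = obs_present * change_factor

# direct approach: correct the future run with the present-day bias against observations
bias_factor   = obs_present.mean() / rcm_present.mean()
direct_series = rcm_future * bias_factor

for name, s in [("observed", obs_present), ("delta", delta_series), ("direct", direct_series)]:
    print(f"{name:9s} mean {s.mean():5.2f} mm/day   99th pct {np.percentile(s, 99):6.2f} mm/day")
```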

  16. Myth 8: The "Patch-On" Approach to Programming Is Effective

    Science.gov (United States)

    Tomlinson, Carol Ann

    2009-01-01

    It is not likely that any group of educators of the gifted ever sat around a table and came to the decision that a "patch-on" approach to programming for bright learners represented best practice. Nonetheless, it is as common today as 25 years ago that programming for students identified as gifted often represents such an approach. Patch-on…

  17. 76 FR 55673 - Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...

    Science.gov (United States)

    2011-09-08

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9460-8; Docket ID No. EPA-HQ-ORD-2011-0485] Vulnerability... titled, Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...) and Vulnerability Assessments in Support of the Climate Ready Estuaries Program: A Novel Approach...

  18. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  19. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential.

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A; Jolliet, Olivier; Georgopoulos, Panos G; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A; Vallero, Daniel A

    2013-08-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA's need to develop novel approaches and tools for rapidly prioritizing chemicals, a "Challenge" was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA's effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Comparison of modeling approaches to prioritize chemicals based on estimates of exposure and exposure potential

    Science.gov (United States)

    Mitchell, Jade; Arnot, Jon A.; Jolliet, Olivier; Georgopoulos, Panos G.; Isukapalli, Sastry; Dasgupta, Surajit; Pandian, Muhilan; Wambaugh, John; Egeghy, Peter; Cohen Hubal, Elaine A.; Vallero, Daniel A.

    2014-01-01

    While only limited data are available to characterize the potential toxicity of over 8 million commercially available chemical substances, there is even less information available on the exposure and use-scenarios that are required to link potential toxicity to human and ecological health outcomes. Recent improvements and advances such as high throughput data gathering, high performance computational capabilities, and predictive chemical inherency methodology make this an opportune time to develop an exposure-based prioritization approach that can systematically utilize and link the asymmetrical bodies of knowledge for hazard and exposure. In response to the US EPA’s need to develop novel approaches and tools for rapidly prioritizing chemicals, a “Challenge” was issued to several exposure model developers to aid the understanding of current systems in a broader sense and to assist the US EPA’s effort to develop an approach comparable to other international efforts. A common set of chemicals was prioritized under each current approach. The results are presented herein along with a comparative analysis of the rankings of the chemicals based on metrics of exposure potential or actual exposure estimates. The analysis illustrates the similarities and differences across the domains of information incorporated in each modeling approach. The overall findings indicate a need to reconcile exposures from diffuse, indirect sources (far-field) with exposures from directly applied chemicals in consumer products or resulting from the presence of a chemical in a microenvironment like a home or vehicle. Additionally, the exposure scenario, including the mode of entry into the environment (i.e. through air, water or sediment) appears to be an important determinant of the level of agreement between modeling approaches. PMID:23707726