Generalized Agile Estimation Method
Shilpa Bahlerao
2011-01-01
Agile cost estimation remains an open research area due to the lack of algorithmic approaches for estimating cost, size and duration. The existing algorithmic approach, the Constructive Agile Estimation Algorithm (CAEA), is an iterative estimation method that incorporates various vital factors affecting the estimates of a project. The method has many advantages but also some limitations, arising from factors such as the number of vital factors and the uncertainty involved in agile projects. A generalized agile estimation method, however, may generate realistic estimates and eliminate the need for experts. In this paper, we propose the iterative Generalized Estimation Method (GEM) and present an algorithm based on it for agile projects, with case studies. The GEM-based algorithm covers various project domain classes and vital factors with prioritization levels. Further, it incorporates an uncertainty factor to quantify project risk when estimating cost, size and duration. It also gives project managers the flexibility to decide on the number of vital factors, the uncertainty level and the project domain, thereby maintaining agility.
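The abstract does not give GEM's formulas, but the ingredients it names — prioritized vital factors scaling a base size, plus an uncertainty factor — can be sketched as follows. The factor names, weights and the weighted-mean formula are hypothetical illustrations, not the algorithm from the paper.

```python
# Hypothetical GEM-style estimate: vital factors with prioritization
# weights rescale a baseline size, and an uncertainty fraction inflates
# the result. All names and the formula are illustrative assumptions.

def gem_estimate(base_size, vital_factors, uncertainty=0.0):
    """base_size: raw size in story points.
    vital_factors: dict name -> (rating, priority_weight).
    uncertainty: fraction in [0, 1] quantifying project risk."""
    weighted = sum(rating * weight for rating, weight in vital_factors.values())
    total_weight = sum(weight for _, weight in vital_factors.values())
    scale = weighted / total_weight          # priority-weighted mean rating
    return base_size * scale * (1.0 + uncertainty)

size = gem_estimate(
    100,
    {"team_experience": (1.2, 3), "domain_novelty": (1.5, 2)},
    uncertainty=0.10,
)
```

With these invented numbers the weighted mean rating is 1.32, so the 100-point baseline becomes roughly 145 points after a 10% uncertainty margin.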
Ashis De
2014-01-01
In this paper, a detailed comparison is presented between the estimation results for the unknown inputs of a linear time-invariant system obtained using the projection operator approach and using the method of the generalized matrix inverse. The full-order observer constructed using the projection operator approach has been extended and implemented for this purpose.
Raftery, Adrian E; Bao, Le
2010-12-01
The Joint United Nations Programme on HIV/AIDS (UNAIDS) has decided to use Bayesian melding as the basis for its probabilistic projections of HIV prevalence in countries with generalized epidemics. This combines a mechanistic epidemiological model, prevalence data, and expert opinion. Initially, the posterior distribution was approximated by sampling-importance-resampling, which is simple to implement, easy to interpret, transparent to users, and gave acceptable results for most countries. For some countries, however, this is not computationally efficient because the posterior distribution tends to be concentrated around nonlinear ridges and can also be multimodal. We propose instead incremental mixture importance sampling (IMIS), which iteratively builds up a better importance sampling function. This retains the simplicity and transparency of sampling importance resampling, but is much more efficient computationally. It also leads to a simple estimator of the integrated likelihood that is the basis for Bayesian model comparison and model averaging. In simulation experiments and on real data, it outperformed both sampling importance resampling and three publicly available generic Markov chain Monte Carlo algorithms for this kind of problem.
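The IMIS scheme described above can be sketched on a one-dimensional toy posterior: start from prior draws weighted by the likelihood, then repeatedly add a Gaussian proposal component centred at the current highest-weight point. The toy target (a normal prior for a mean with one unit-noise observation) and all tuning numbers are invented for illustration, not UNAIDS' epidemiological model.

```python
import math, random
random.seed(0)

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

PRIOR_SD = 3.0                       # prior: theta ~ N(0, 3^2)

def likelihood(theta, y=2.0):
    return norm_pdf(y, theta, 1.0)   # one observation y = 2.0, unit noise

components = []                      # (mean, sd) of added mixture parts

def weights(points):
    # importance weight = prior * likelihood / mixture proposal
    def proposal(x):
        parts = [norm_pdf(x, 0.0, PRIOR_SD)] + [norm_pdf(x, m, s) for m, s in components]
        return sum(parts) / len(parts)
    return [norm_pdf(x, 0.0, PRIOR_SD) * likelihood(x) / proposal(x) for x in points]

samples = [random.gauss(0.0, PRIOR_SD) for _ in range(2000)]
for _ in range(3):                   # IMIS refinement steps
    w = weights(samples)
    best = samples[max(range(len(samples)), key=w.__getitem__)]
    components.append((best, 1.0))   # new proposal component at the best point
    samples += [random.gauss(best, 1.0) for _ in range(500)]

w = weights(samples)                 # final weights under the full mixture
post_mean = sum(wi * xi for wi, xi in zip(w, samples)) / sum(w)
```

For this conjugate toy problem the exact posterior mean is 1.8, so the importance-weighted mean should land close to it.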
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields. Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th…
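A minimal numerical sketch of the GEE idea the book develops: alternate between a GLS-style estimating equation for the regression coefficients and a moment estimate of an exchangeable working correlation. The Gaussian identity-link case and the simulation settings below are chosen for brevity and are illustrative only.

```python
import numpy as np
rng = np.random.default_rng(0)

# Simulated clustered data: 200 clusters of size 4 with a shared
# cluster effect, inducing exchangeable within-cluster correlation.
n_clusters, m = 200, 4
beta_true = np.array([1.0, 2.0])
X = rng.normal(size=(n_clusters, m, 2))
b = rng.normal(scale=1.0, size=(n_clusters, 1))          # cluster effect
y = X @ beta_true + b + rng.normal(scale=1.0, size=(n_clusters, m))

beta = np.zeros(2)
rho = 0.0
for _ in range(10):
    # exchangeable working correlation R and the GEE update for beta
    R = (1 - rho) * np.eye(m) + rho * np.ones((m, m))
    Rinv = np.linalg.inv(R)
    A = sum(X[i].T @ Rinv @ X[i] for i in range(n_clusters))
    c = sum(X[i].T @ Rinv @ y[i] for i in range(n_clusters))
    beta = np.linalg.solve(A, c)
    # moment estimate of rho from standardized residuals
    resid = y - X @ beta
    resid = resid / resid.std()
    off = (resid.sum(axis=1) ** 2 - (resid ** 2).sum(axis=1)) / (m * (m - 1))
    rho = float(np.clip(off.mean(), 0.0, 0.95))
```

With a unit-variance cluster effect and unit noise, the true exchangeable correlation is 0.5, and both it and the coefficients should be recovered closely.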
Generalized estimating equations
Hardin, James W
2013-01-01
Generalized Estimating Equations, Second Edition updates the best-selling previous edition, which has been the standard text on the subject since it was published a decade ago. Combining theory and application, the text provides readers with a comprehensive discussion of GEE and related models. Numerous examples are employed throughout the text, along with the software code used to create, run, and evaluate the models being examined. Stata is used as the primary software for running and displaying modeling output; associated R code is also given to allow R users to replicat
Chang, Seungwoo; Graham, Wendy D.; Hwang, Syewoon; Muñoz-Carpena, Rafael
2016-08-01
Projecting water deficit under various possible future climate scenarios depends on the choice of general circulation model (GCM), reference evapotranspiration (ET0) estimation method, and Representative Concentration Pathway (RCP) trajectory. The relative contribution of each of these factors must be evaluated in order to choose an appropriate ensemble of future scenarios for water resources planning. In this study variance-based global sensitivity analysis and Monte Carlo filtering were used to evaluate the relative sensitivity of projected changes in precipitation (P), ET0, and water deficit (defined here as P-ET0) to choice of GCM, ET0 estimation method, and RCP trajectory over the continental United States (US) for two distinct future periods: 2030-2060 (future period 1) and 2070-2100 (future period 2). A total of 9 GCMs, 10 ET0 methods, and 3 RCP trajectories were used to quantify the range of future projections and estimate the relative sensitivity of future projections to each of these factors. In general, for all regions of the continental US, changes in future precipitation are most sensitive to the choice of GCM, while changes in future ET0 are most sensitive to the choice of ET0 estimation method. For changes in future water deficit, the choice of GCM is the most influential factor in the cool season (December-March), and the choice of ET0 estimation method is most important in the warm season (May-October) for all regions except the Southeast US, where GCMs and ET0 have approximately equal influence throughout most of the year. Although the choice of RCP trajectory is generally less important than the choice of GCM or ET0 method, the impact of RCP trajectory increases in future period 2 over future period 1 for all factors. Monte Carlo filtering results indicate that particular GCMs and ET0 methods drive the projection of wetter or drier future conditions much more than RCP trajectory; however, the set of GCMs and ET0 methods that produce wetter or
COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS
Ricardo de Melo e Silva Accioly
2017-03-01
Cost estimating relationships (CERs) are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before a model is fitted. In the early phases they are the primary tool for cost estimating; in later phases they are usually used as an estimation validation tool and sometimes for benchmarking purposes. As in any other modeling methodology, there are a number of important steps in building a model. In this paper, the process of building a CER to estimate the drilling cost of onshore wells is addressed.
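The fitting step of a CER is ordinary multiple regression, which can be sketched in a few lines. The cost drivers (measured depth, rig days) and all coefficients below are synthetic round numbers, not data from the paper.

```python
import numpy as np
rng = np.random.default_rng(1)

# Synthetic "normalized historical data" for 60 onshore wells:
# cost = 200 + 0.8 * depth + 15 * rig_days + noise (invented numbers).
depth = rng.uniform(500, 3000, size=60)        # metres
rig_days = rng.uniform(10, 60, size=60)
cost = 200 + 0.8 * depth + 15 * rig_days + rng.normal(0, 50, size=60)

# Fit the CER by least squares: cost ~ intercept + depth + rig_days.
A = np.column_stack([np.ones_like(depth), depth, rig_days])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
predicted = A @ coef                           # in-sample cost estimates
```

In practice the normalization step the abstract mentions (adjusting historical costs for inflation, location, and scope differences) happens before this regression is run.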
Partial Oblique Projection Learning for Optimal Generalization
LIU Benyong; ZHANG Jing
2004-01-01
In practice, it is necessary to implement incremental and active learning for a learning method. In terms of such implementation, this paper shows that the previously discussed S-L projection learning is inappropriate for constructing a family of projection learning, and proposes a new version called partial oblique projection (POP) learning. In POP learning, a function space is decomposed into two complementary subspaces, so that functions belonging to one of the subspaces can be completely estimated in the noiseless case, while in the noisy case the dispersions are kept smallest. In addition, a general form of POP learning is presented and the results of a simulation are given.
Modeling Uncertainty when Estimating IT Projects Costs
Winter, Michel; Mirbel, Isabelle; Crescenzo, Pierre
2014-01-01
In the current economic context, optimizing projects' cost is an obligation for a company to remain competitive in its market. Introducing statistical uncertainty in cost estimation is a good way to tackle the risk of going too far while minimizing the project budget: it allows the company to determine the best possible trade-off between estimated cost and acceptable risk. In this paper, we present new statistical estimators derived from the way IT companies estimate the projects' costs. In t...
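The trade-off the abstract describes — estimated cost versus acceptable risk — is commonly explored with Monte Carlo simulation over per-task cost distributions. The sketch below uses triangular (optimistic, likely, pessimistic) distributions and invented work-package figures; it illustrates the general idea, not the paper's specific estimators.

```python
import random
random.seed(42)

# Each work package: (optimistic, most likely, pessimistic) person-days.
# The figures are made up for illustration.
packages = [(8, 10, 16), (20, 25, 45), (4, 5, 9)]

def simulate_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(low, high, mode) for low, mode, high in packages)

totals = sorted(simulate_total() for _ in range(20000))
p50 = totals[len(totals) // 2]          # median budget
p80 = totals[int(0.80 * len(totals))]   # budget carrying only 20% overrun risk
```

Quoting the P80 rather than the P50 figure is exactly the "best possible trade-off" decision: a higher budget buys a lower probability of overrun.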
Project Intrex: A General Review
Overhage, Carl F.; Reintjes, J. Francis
1974-01-01
A review of M.I.T.'s INformation TRansfer EXperiments (Project Intrex). The Intrex system includes an augmented catalog stored in an online interactive computer in combination with full-text storage on microfiche. There are discussions of details of the catalog structure, user experiments, economic studies, and information-system networking.
Stagewise generalized estimating equations with grouped variables.
Vaughan, Gregory; Aseltine, Robert; Chen, Kun; Yan, Jun
2017-02-13
Forward stagewise estimation is a revived slow-brewing approach for model building that is particularly attractive in dealing with complex data structures for both its computational efficiency and its intrinsic connections with penalized estimation. Under the framework of generalized estimating equations, we study general stagewise estimation approaches that can handle clustered data and non-Gaussian/non-linear models in the presence of prior variable grouping structure. As the grouping structure is often not ideal in that even the important groups may contain irrelevant variables, the key is to simultaneously conduct group selection and within-group variable selection, that is, bi-level selection. We propose two approaches to address the challenge. The first is a bi-level stagewise estimating equations (BiSEE) approach, which is shown to correspond to the sparse group lasso penalized regression. The second is a hierarchical stagewise estimating equations (HiSEE) approach to handle more general hierarchical grouping structure, in which each stagewise estimation step itself is executed as a hierarchical selection process based on the grouping structure. Simulation studies show that BiSEE and HiSEE yield competitive model selection and predictive performance compared to existing approaches. We apply the proposed approaches to study the association between the suicide-related hospitalization rates of the 15-19 age group and the characteristics of the school districts in the State of Connecticut.
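The base mechanism that BiSEE and HiSEE build on — forward stagewise estimation — is easiest to see in the plain linear-regression case: at each step, nudge the coefficient most correlated with the current residual by a tiny amount. The sketch below shows only this base scheme on independent Gaussian data; the paper's contribution (grouped, bi-level selection under estimating equations) is not reproduced here.

```python
import numpy as np
rng = np.random.default_rng(2)

# Simulated data: only the first two of eight predictors matter.
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0] + [0.0] * (p - 2))
y = X @ beta_true + rng.normal(size=n)

# Forward stagewise: tiny moves along the most-correlated coordinate.
beta = np.zeros(p)
eps = 0.01
for _ in range(2000):
    resid = y - X @ beta
    corr = X.T @ resid                      # correlation with residual
    j = int(np.argmax(np.abs(corr)))
    beta[j] += eps * np.sign(corr[j])       # slow-brewing update
```

The slow updates trace out a regularization path: relevant coefficients grow toward their least-squares values while irrelevant ones stay near zero, which is the connection to penalized estimation the abstract mentions.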
Generalized Line Spectral Estimation via Convex Optimization
Heckel, Reinhard; Soltanolkotabi, Mahdi
2016-01-01
Line spectral estimation is the problem of recovering the frequencies and amplitudes of a mixture of a few sinusoids from equispaced samples. However, in a variety of signal processing problems arising in imaging, radar, and localization we do not have access directly to such equispaced samples. Rather we only observe a severely undersampled version of these observations through linear measurements. This paper is about such generalized line spectral estimation problems. We reformulate these p...
A generalization error estimate for nonlinear systems
Larsen, Jan
1992-01-01
models of linear and simple neural network systems. Within the linear system GEN is compared to the final prediction error criterion and the leave-one-out cross-validation technique. It was found that the GEN estimate of the true generalization error is less biased on the average. It is concluded...
Projection-type estimation for varying coefficient regression models
Lee, Young K; Park, Byeong U (doi:10.3150/10-BEJ331)
2012-01-01
In this paper we introduce new estimators of the coefficient functions in the varying coefficient regression model. The proposed estimators are obtained by projecting the vector of the full-dimensional kernel-weighted local polynomial estimators of the coefficient functions onto a Hilbert space with a suitable norm. We provide a backfitting algorithm to compute the estimators. We show that the algorithm converges at a geometric rate under weak conditions. We derive the asymptotic distributions of the estimators and show that the estimators have the oracle properties. This is done for the general order of local polynomial fitting and for the estimation of the derivatives of the coefficient functions, as well as the coefficient functions themselves. The estimators turn out to have several theoretical and numerical advantages over the marginal integration estimators studied by Yang, Park, Xue and Härdle [J. Amer. Statist. Assoc. 101 (2006) 1212-1227].
Estimation of food consumption. Hanford Environmental Dose Reconstruction Project
Callaway, J.M. Jr.
1992-04-01
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition
Yanai, Haruo; Takane, Yoshio
2011-01-01
Aside from distribution theory, projections and the singular value decomposition (SVD) are the two most important concepts for understanding the basic mechanism of multivariate analysis. The former underlies the least squares estimation in regression analysis, which is essentially a projection of one subspace onto another, and the latter underlies principal component analysis, which seeks to find a subspace that captures the largest variability in the original space. This book is about projections and SVD. A thorough discussion of generalized inverse (g-inverse) matrices is also given because
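The book's two central objects can be shown side by side in a few lines of NumPy: the orthogonal projector onto a column space, built from a generalized (Moore-Penrose) inverse, coincides with the projector built from the SVD's left singular vectors, and applying it to a response vector performs least squares fitting.

```python
import numpy as np

# A small design matrix: intercept plus one regressor x = 0, 1, 2.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])

# Orthogonal projector onto col(A) via the Moore-Penrose inverse
# (pinv also handles rank-deficient A, where (A^T A)^{-1} would fail).
P = A @ np.linalg.pinv(A)

# The same projector from the SVD: U's columns span col(A).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
P_svd = U @ U.T

# Projection = least squares: P @ y gives the fitted values.
y = np.array([0.0, 1.0, 1.0])
y_hat = P @ y
```

The tests below verify the defining properties: idempotence (P² = P), agreement of the two constructions, and that P leaves col(A) fixed.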
The General Hospital Colombo Rehabilitation Project.
Jayasuriya, L
1992-12-01
The General Hospital Colombo (GHC) Rehabilitation Project was to be implemented in 6 phases in about 25 years. The proposed funding was a grant of 100% from Finland for technical assistance and training, and 85% for investments. The development objective was to reinforce the status of the hospital as the apex of the medical care system. In Phase I (1985-1990) an 8 storeyed accident and orthopaedic services building with modern facilities has been commissioned. A water tower and a 'septic' operating theatre have been built. Infection control and maintenance organizations have been started. Phase I cost Rs.960 million. In the Bridging Phase, the existing six storeyed building is being renovated. Phase II has been drastically curtailed. It will concentrate on infrastructure development such as water supply, kitchen, stores and transport, and the construction of four new medical wards. The project will end in 1993.
Laser projection using generalized phase contrast
Glückstad, Jesper; Palima, Darwin; Rodrigo, Peter John
2007-01-01
We demonstrate experimental laser projection of a gray-level photographic image with 74% light efficiency using the generalized phase contrast (GPC) method. In contrast with a previously proposed technique [Alonzo et al., New J. Phys. 9, 132 (2007)], a new approach to image construction via GPC is introduced. An arbitrary phase shift filter eliminates the need for high-frequency modulation and conjugate phase encoding. This lowers device performance requirements and allows practical implementation with currently available dynamic spatial light modulators. (c) 2007 Optical Society of America.
The accuracy of general practitioner workforce projections.
Greuningen, M. van; Batenburg, R.S.; Velden, L.F.J. van der
2013-01-01
Background: Health workforce projections are important instruments to prevent imbalances in the health workforce. For both the tenability and further development of these projections, it is important to evaluate the accuracy of workforce projections. In the Netherlands, health workforce projections
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Lp estimates for the ∂̄-equation on generalized complex ellipsoids
(no author listed)
2000-01-01
The estimate of a holomorphic supporting function for the generalized complex ellipsoid in Cn is given. This domain is not decoupled. Using this estimate, the best possible Lp estimates for the ∂̄-equation and some results in the function theory of generalized complex ellipsoids are proved.
Generalized shrunken type-GM estimator and its application
Ma, C. Z.; Du, Y. L.
2014-03-01
The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, generalized shrunken type-GM estimators, together with methods for computing them, is established by combining the GM estimator with biased estimators, including the ridge estimate, the principal components estimate and the Liu estimate. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome the multicollinearity of the coefficient matrix and the outliers but also control the influence of leverage points.
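The spirit of combining a biased estimator with robust weighting can be sketched as iteratively reweighted ridge regression with Huber weights: the ridge penalty handles the near-collinear design while the weights downweight outliers. This generic combination is an illustration, not the paper's exact estimator.

```python
import numpy as np
rng = np.random.default_rng(3)

# Near-collinear design plus a few gross outliers in the response.
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)           # almost a copy of x1
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)
y[:5] += 10.0                                  # gross outliers

lam, k = 1.0, 1.345                            # ridge penalty, Huber constant
w = np.ones(n)
for _ in range(20):
    # weighted ridge step ...
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(2), X.T @ W @ y)
    # ... then Huber reweighting against a robust (MAD) scale
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-12
    u = np.abs(r) / s
    w = np.where(u <= k, 1.0, k / u)
```

Because the two columns are nearly collinear, only the coefficient sum is well identified; the tests check that it is recovered and that the planted outliers end up heavily downweighted.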
A New Schedule Estimation Technique for Construction Projects
Warburton, Roger D. H.
2014-01-01
Allen studied hundreds of construction projects and developed an accurate, practically useful model of their labor profiles. We combine Allen's labor profile with standard Earned Value Management (EVM) techniques and derive a simple, practical formula that estimates the final schedule from early project data. The schedule estimation formula is exact; it requires no approximations. The estimate is also surprisingly accurate and available early enough in the project for the project manager …
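The standard EVM arithmetic the paper builds on can be sketched in a few lines: the schedule performance index observed early in the project rescales the planned duration. This is the textbook EVM relationship only; Warburton's exact formula, which also uses Allen's labor profile, is not given in the abstract and is not reproduced here.

```python
# Textbook earned-value schedule forecast (not Warburton's formula):
# SPI = EV / PV, and the planned duration is divided by SPI.

def schedule_estimate(planned_duration, earned_value, planned_value):
    spi = earned_value / planned_value        # schedule performance index
    return planned_duration / spi             # naive duration forecast

# Three months in: 80 units of work earned against 100 planned,
# on a project planned to take 12 months -> about 15 months forecast.
forecast = schedule_estimate(12.0, 80.0, 100.0)
```

The point of the paper is precisely that this naive rescaling can be replaced by an exact formula once a realistic labor profile is assumed.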
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
10 CFR 603.560 - Estimate of project expenditures.
2010-01-01
Title 10, Energy, Part 603 (Business Evaluation — Fixed-Support or Expenditure-Based Approach), § 603.560, Estimate of project expenditures: … have confidence in the estimate of the expenditures required to achieve well-defined …
General parity-odd CMB bispectrum estimation
Shiraishi, Maresuke; Fergusson, James R
2014-01-01
We develop a methodology for estimating parity-odd bispectra in the cosmic microwave background (CMB). This is achieved through the extension of the original separable modal methodology to parity-odd bispectrum domains ($\\ell_1 + \\ell_2 + \\ell_3 = {\\rm odd}$). Through numerical tests of the parity-odd modal decomposition with some theoretical bispectrum templates, we verify that the parity-odd modal methodology can successfully reproduce the CMB bispectrum, without numerical instabilities. We also present simulated non-Gaussian maps produced by modal-decomposed parity-odd bispectra, and show the consistency with the exact results. Our new methodology is applicable to all types of parity-odd temperature and polarization bispectra.
Parameter Estimation for a Computable General Equilibrium Model
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Adaptive Error Estimation in Linearized Ocean General Circulation Models
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large …
A generalized carrier frequency offset estimator for uplink OFDMA
Nguyen, Huan Cong; De Carvalho, Elisabeth; Prasad, Ramjee
This paper proposes a generalized carrier frequency offset (CFO) estimator for the uplink of orthogonal frequency division multiple access (OFDMA) wireless systems. Using the maximum likelihood criterion, the estimator estimates CFOs using the phase shift between two observation windows at distance...
On Parameters Estimation of Lomax Distribution under General Progressive Censoring
Bander Al-Zahrani
2013-01-01
We consider the estimation problem of the probability S = P(Y < X) …
The accuracy of General Practitioner workforce projections.
Greuningen, M. van; Batenburg, R.; Velden, L. van der
2013-01-01
Context: Health workforce projections are important to prevent imbalances in the health workforce. Matrix Insight provided an overview of health workforce planning in the EU, which shows that 13 countries are engaged in model-based workforce planning using workforce projections. However, in most cas
Abdollahian Vahed
2016-12-01
Optimum performance of a development project cannot be guaranteed unless the project is planned and organized according to a standard pattern, based on financial forecasts and with the required quality. In this study, based on project management standards, the factors affecting project implementation are enumerated for flagship projects in Ilam, and the implementation of each project is examined against the standard. The research focuses on the preliminary estimates of cost, time and workload and compares them with the actual outcomes. According to the findings, 83 percent of the projects did not follow their preliminary financial estimates, and 100 percent of the projects mispredicted their schedules and did not progress according to the preliminary forecast. In 66 percent of the projects, private-sector investment support was appropriate and the project framework remained unchanged.
Simulator for Software Project Reliability Estimation
Sanjana,
2011-07-01
Several models exist for software development processes, each describing approaches to a variety of tasks or activities that take place during the process. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. IEEE defines reliability as "the ability of a system to perform its required function under stated conditions for a specified period of time." To most software project managers, reliability equates to correctness, that is, the number of bugs found and fixed. The purpose is to develop a simulator for estimating the reliability of a software project using the PERT approach, keeping in view the criticality index of each task.
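The two PERT ingredients the abstract names — the classical (O + 4M + P) / 6 expected duration and the criticality index of a task — can be sketched with a Monte Carlo pass over a toy two-path network. The tasks, durations and network shape below are invented for illustration.

```python
import random
random.seed(7)

# Each task: (optimistic, most likely, pessimistic) duration estimates.
tasks = {"A": (2, 4, 8), "B": (3, 5, 11), "C": (1, 2, 3)}

# Classical PERT mean duration per task: (O + 4M + P) / 6.
pert_mean = {t: (o + 4 * m + p) / 6 for t, (o, m, p) in tasks.items()}

# Toy network: path A -> C competes with path B for the critical path.
# Monte Carlo: how often is A-C the longer (critical) path?
trials, ac_wins = 10000, 0
for _ in range(trials):
    d = {t: random.triangular(o, p, m) for t, (o, m, p) in tasks.items()}
    if d["A"] + d["C"] > d["B"]:
        ac_wins += 1
criticality_AC = ac_wins / trials     # criticality index of path A-C
```

A reliability simulator of the kind the paper proposes would attach failure or rework probabilities to tasks and weight them by such criticality indices.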
Adaptive Flight Envelope Estimation and Protection Project
National Aeronautics and Space Administration — Impact Technologies, in collaboration with the Georgia Institute of Technology, proposes to develop and demonstrate an innovative flight envelope estimation and...
Structures of generalized 3-circular projections for symmetric norms
A B Abubaker; S Dutta
2016-05-01
Recently several authors investigated structures of generalized bi-circular projections in spaces where the descriptions of the group of surjective isometries are known. Following the same idea in this paper we give complete descriptions of generalized 3-circular projections for symmetric norms on ${\\mathbb C}^n$ and ${\\mathbb M}_{m \\times n}({\\mathbb C})$.
Generalized Reduced Order Model Generation Project
National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...
Adaptive quasi-likelihood estimate in generalized linear models
CHEN Xia; CHEN Xiru
2005-01-01
This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-Peter
2013-07-16
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
Bootstrap consistency for general semiparametric M-estimation
Cheng, Guang
2010-10-01
Consider M-estimation in a semiparametric model that is characterized by a Euclidean parameter of interest and an infinite-dimensional nuisance parameter. As a general purpose approach to statistical inferences, the bootstrap has found wide applications in semiparametric M-estimation and, because of its simplicity, provides an attractive alternative to the inference approach based on the asymptotic distribution theory. The purpose of this paper is to provide theoretical justifications for the use of bootstrap as a semiparametric inferential tool. We show that, under general conditions, the bootstrap is asymptotically consistent in estimating the distribution of the M-estimate of the Euclidean parameter; that is, the bootstrap distribution asymptotically imitates the distribution of the M-estimate. We also show that the bootstrap confidence set has the asymptotically correct coverage probability. These general conclusions hold, in particular, when the nuisance parameter is not estimable at root-n rate, and apply to a broad class of bootstrap methods with exchangeable bootstrap weights. This paper provides a first general theoretical study of the bootstrap in semiparametric models. © Institute of Mathematical Statistics, 2010.
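The inferential recipe whose consistency the paper establishes can be illustrated on the simplest M-estimator: the sample median (the minimizer of absolute-error loss), resampled to obtain a percentile confidence interval. The paper's setting is far more general (semiparametric, exchangeable weights); this is only the mechanical idea.

```python
import random, statistics
random.seed(5)

# Toy data and the M-estimate (sample median).
data = [random.gauss(10.0, 2.0) for _ in range(200)]
theta_hat = statistics.median(data)

# Nonparametric bootstrap: resample with replacement, re-estimate,
# and read off percentile confidence limits.
boot = []
for _ in range(2000):
    resample = random.choices(data, k=len(data))
    boot.append(statistics.median(resample))
boot.sort()
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
```

The bootstrap distribution of the resampled medians stands in for the unknown sampling distribution of the M-estimate, which is precisely the approximation the paper proves valid under general conditions.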
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
How Agile Methods Conquers General Project Management - The Project Half Double Initiative
Tordrup Heeager, Lise; Svejvig, Per; Schlichter, Bjarne Rerup
2016-01-01
Increased complexity in projects has forced new project management initiatives. In software development several agile methods have emerged, and methods such as Scrum are today widely implemented in practice. General project management practice has been inspired by agile software development. In order to fully understand, and to provide suggestions for future practice on, how agility can be incorporated in general project management, this paper addresses how agile methods have inspired general project management practices. To answer the research question, the paper provides an analysis which compares ten characteristics of agile software development (identified in theory) with the general project management method developed by the Danish Project Half Double (PHD) initiative. The method consists of 10 leading stars for rethinking project management and the impact, flow and leadership (ILF) method …
Estimating the capital cost of underground car parking projects
Bastos, Mónica; Ribeiro, F. Loforte; Teixeira, José M. Cardoso
2005-01-01
Underground parking projects are expensive. The capital cost of an underground parking project has been, and still is, one of the promoter's main economic concerns. Therefore, capital cost estimation is an essential task in the early stages of underground parking projects. In this context, promoters mainly use cost estimation models, most of them produced by methodologies that lack precision and perform poorly. Over the last years, Portugal has embarked on a large programme of...
Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study
Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo
2000-09-01
If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means for combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned based on a case study of Costa Rica's Protected Areas Project, which is a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely based on the imputed deforestation rate in the baseline scenario, i.e., the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the use of the 1979–1992 forest cover data originally used as the basis for estimating carbon savings should be reconsidered. When the newly available data are substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land use and land cover change scenarios underlying estimates of greenhouse gas benefits.
Estimation linear model using block generalized inverse of a matrix
Jasińska, Elżbieta; Preweda, Edward
2013-01-01
The work shows the principle of a generalized linear model with point estimation, which can be used as a basis for determining the status of movements and deformations of engineering objects. The structural model can be subjected to any boundary conditions, for example, to ensure the continuity of the deformations. Estimation by the method of least squares was carried out taking into account the Gauss-Markov conditions for quadratic forms, stored using a Lagrange function. The original sol...
Cost Estimation for Cross-organizational ERP Projects: Research Perspectives
Daneva, M.; Wieringa, R.J.; Bieman, J.
2008-01-01
There are many methods for estimating size, effort, schedule and other cost aspects of IS projects, but only one specifically developed for Enterprise Resource Planning (ERP) [67] and none for simultaneous, interdependent ERP projects in a cross-organizational context. The objective of this paper is
The cluster bootstrap consistency in generalized estimating equations
Cheng, Guang
2013-03-01
The cluster bootstrap resamples clusters or subjects instead of individual observations in order to preserve the dependence within each cluster or subject. In this paper, we provide a theoretical justification of using the cluster bootstrap for the inferences of the generalized estimating equations (GEE) for clustered/longitudinal data. Under the general exchangeable bootstrap weights, we show that the cluster bootstrap yields a consistent approximation of the distribution of the regression estimate, and a consistent approximation of the confidence sets. We also show that a computationally more efficient one-step version of the cluster bootstrap provides asymptotically equivalent inference. © 2012.
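The resampling scheme described above can be sketched in a few lines; the helper name and toy data below are illustrative, not taken from the paper, and the statistic here is a simple mean rather than a GEE regression estimate:

```python
import numpy as np

def cluster_bootstrap_se(clusters, estimator, n_boot=500, seed=0):
    """Bootstrap standard error that resamples whole clusters (with
    replacement), preserving within-cluster dependence."""
    rng = np.random.default_rng(seed)
    k = len(clusters)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, k, size=k)              # resample cluster indices
        sample = np.concatenate([clusters[i] for i in idx])
        stats.append(estimator(sample))
    return float(np.std(stats, ddof=1))

# Toy example: 20 clusters of 5 correlated observations each; estimate the
# standard error of the overall mean.
rng = np.random.default_rng(1)
clusters = [rng.normal(0, 1) + rng.normal(0, 0.2, size=5) for _ in range(20)]
se = cluster_bootstrap_se(clusters, np.mean)
```

Resampling clusters rather than observations keeps each cluster's internal correlation intact, which is the point of the paper's consistency results.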
Estimating phase integrals - A generalization of Russell's law
Verbiscer, Anne J.; Veverka, Joseph
1988-01-01
An attempt is made to demonstrate, by means of Hapke's (1981) photometric function, that a simple and reliable method exists for the estimation of phase integrals from limited higher phase angle measurements. This method is a generalization of the approximation first proposed by Russell (1916) and more recently treated by Veverka (1971). It is shown that this generalization of Russell's law can employ observations anywhere in the range of phase angles from 40 to 90 deg; optimum estimates of q will be obtained if the data near 70 deg are obtainable.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Optimizing Neural Network Architectures Using Generalization Error Estimators
Larsen, Jan
1994-01-01
This paper addresses the optimization of neural network architectures. It is suggested to optimize the architecture by selecting the model with minimal estimated averaged generalization error. We consider a least-squares (LS) criterion for estimating neural network models, i.e., the associated ... neural network applications, it is impossible to suggest a perfect model, and consequently the ability to handle incomplete models is urgent. A concise derivation of the GEN-estimator is provided, and its qualities are demonstrated by comparative numerical studies...
Parameter Estimation for Generalized Brownian Motion with Autoregressive Increments
Fendick, Kerry
2011-01-01
This paper develops methods for estimating parameters for a generalization of Brownian motion with autoregressive increments called a Brownian ray with drift. We show that a superposition of Brownian rays with drift depends on three types of parameters - a drift coefficient, autoregressive coefficients, and volatility matrix elements, and we introduce methods for estimating each of these types of parameters using multidimensional times series data. We also cover parameter estimation in the contexts of two applications of Brownian rays in the financial sphere: queuing analysis and option valuation. For queuing analysis, we show how samples of queue lengths can be used to estimate the conditional expectation functions for the length of the queue and for increments in its net input and lost potential output. For option valuation, we show how the Black-Scholes-Merton formula depends on the price of the security on which the option is written through estimates not only of its volatility, but also of a coefficient ...
Computational Aeroacoustics Using the Generalized Lattice Boltzmann Equation Project
National Aeronautics and Space Administration — The overall objective of the proposed project is to develop a generalized lattice Boltzmann (GLB) approach as a potential computational aeroacoustics (CAA) tool for...
Developing a tool for project contingency estimation in a large portfolio of construction projects
Van Niekerk, Mariette
2014-11-01
Full Text Available To enable the management of project-related risk on a portfolio level in an owner organisation, project contingency estimation should be performed consistently and objectively. This article discusses the development of a contingency estimation tool for a large portfolio that contains similar construction projects. The purpose of developing this tool is to decrease the influence of subjectivity on contingency estimation methods throughout the project life cycle, thereby enabling consistent reflection on project risk at the portfolio level. Our research contribution is the delivery of a hybrid tool that incorporates both neural network modelling of systemic risks and expected value analysis of project-specific risks. The neural network is trained using historical project data, supported by data obtained from interviews with project managers. Expected value analysis is achieved in a risk register format employing a binomial distribution to estimate the number of risks expected. By following this approach, the contingency estimation tool can be used without expert knowledge of project risk management. In addition, this approach can provide contingency cost and duration output on a project level, and it contains both systemic and project-specific risks in a single tool.
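The expected-value side of such a hybrid tool can be sketched as a probability-weighted risk register; the register entries and helper names below are illustrative assumptions, and the neural-network component for systemic risks is omitted:

```python
# Each register entry: (probability of occurrence, cost impact if it occurs).
# Values are made up for illustration.
risk_register = [
    (0.30, 120_000),   # ground conditions worse than assumed
    (0.10, 500_000),   # major equipment failure
    (0.25, 80_000),    # permit delays
]

def expected_value_contingency(register):
    """Expected-value contingency: sum of probability-weighted impacts."""
    return sum(p * cost for p, cost in register)

def expected_risk_count(register):
    """Expected number of risks that materialise; for independent
    Bernoulli events this is the sum of the probabilities (the mean of
    the corresponding binomial/Poisson-binomial count)."""
    return sum(p for p, _ in register)

contingency = expected_value_contingency(risk_register)   # 106,000 here
n_expected = expected_risk_count(risk_register)           # 0.65 here
```

Because both quantities are simple sums over the register, the tool can be rerun consistently at each project stage without expert judgement, which is the consistency property the abstract emphasises.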
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Estimating parameters for generalized mass action models with connectivity information
Voit Eberhard O
2009-05-01
Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out
A universal projective synchronization of general autonomous chaotic system
Fuzhong Nian; Zingyuan Wang; Ming Li; Ge Guo
2012-12-01
This paper investigates generalized projective synchronization in general autonomous chaotic systems. A universal controller is designed and its effectiveness is verified via theoretical analysis and numerical simulations. The controller design is irrelevant to the concrete system structure and initial values. It has strong robustness and broad application prospects.
Wegner Estimate for Sparse and other Generalized Alloy Type Potentials
Werner Kirsch; Ivan Veselić
2002-02-01
We prove a Wegner estimate for generalized alloy type models at negative energies (Theorems 8 and 13). The single site potential is assumed to be non-positive. The random potential does not need to be stationary with respect to translations from a lattice. Actually, the set of points to which the individual single site potentials are attached, needs only to satisfy a certain density condition. The distribution of the coupling constants is assumed to have a bounded density only in the energy region where we prove the Wegner estimate.
Unbiased pseudo-Cl power spectrum estimation with mode projection
Elsner, Franz; Peiris, Hiranya V
2016-01-01
With the steadily improving sensitivity afforded by current and future galaxy surveys, a robust extraction of two-point correlation function measurements may become increasingly hampered by the presence of astrophysical foregrounds or observational systematics. The concept of mode projection has been introduced as a means to remove contaminants for which it is possible to construct a spatial map reflecting the expected signal contribution. Owing to its computational efficiency compared to minimum-variance methods, the sub-optimal pseudo-Cl (PCL) power spectrum estimator is a popular tool for the analysis of high-resolution data sets. Here, we integrate mode projection into the framework of PCL power spectrum estimation. In contrast to results obtained with optimal estimators, we show that the uncorrected projection of template maps leads to biased power spectra. Based on analytical calculations, we find exact closed-form expressions for the expectation value of the bias and demonstrate that they can be recast...
An Optimized Analogy-Based Project Effort Estimation
Mohammad Azzeh
2014-05-01
Full Text Available Despite the predictive performance of Analogy-Based Estimation (ABE) in generating better effort estimates, there is no consensus on: (1) how to predetermine the appropriate number of analogies, (2) which adjustment technique produces better estimates. Yet, no prior work has attempted to optimize both the number of analogies and the feature distance weights for each test project. Perhaps rather than using a fixed number, it is better to optimize this value for each project individually and then adjust the retrieved analogies by optimizing and approximating complex relationships between features, reflecting that approximation in the final estimate. The Artificial Bees Algorithm is utilized to find, for each test project, the appropriate number of closest projects and the feature distance weights used to adjust those analogies' efforts. The proposed technique has been applied and validated on 8 publicly available datasets from the PROMISE repository. Results obtained show that: (1) the predictive performance of ABE has noticeably been improved, (2) the number of analogies was remarkably variable for each test project. While there are many techniques to adjust ABE, using an optimization algorithm provides two solutions in one technique and appears useful for datasets with complex structure.
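The baseline ABE procedure that the paper improves upon can be sketched as a nearest-neighbour lookup over historical projects; the toy history below is invented, and k is fixed here whereas the paper optimizes k and the feature weights per test project:

```python
import numpy as np

def abe_estimate(features, efforts, query, k=3):
    """Analogy-Based Estimation sketch: retrieve the k most similar
    historical projects (Euclidean distance on min-max normalised
    features) and average their recorded efforts."""
    X = np.asarray(features, dtype=float)
    q = np.asarray(query, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)    # avoid divide-by-zero
    Xn, qn = (X - lo) / span, (q - lo) / span
    d = np.linalg.norm(Xn - qn, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(efforts, dtype=float)[nearest]))

# Toy history: (size in KLOC, team size) -> effort in person-months.
hist_X = [[10, 3], [12, 4], [50, 10], [55, 12], [11, 3]]
hist_y = [30, 40, 200, 230, 33]
est = abe_estimate(hist_X, hist_y, query=[11, 4], k=3)
```

Replacing the fixed `k=3` and the implicit equal feature weights with values searched per query (e.g., by the Artificial Bees Algorithm) is exactly the refinement the abstract proposes.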
Unbiased pseudo-Cℓ power spectrum estimation with mode projection
Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.
2017-02-01
With the steadily improving sensitivity afforded by current and future galaxy surveys, a robust extraction of two-point correlation function measurements may become increasingly hampered by the presence of astrophysical foregrounds or observational systematics. The concept of mode projection has been introduced as a means to remove contaminants for which it is possible to construct a spatial map, reflecting the expected signal contribution. Owing to its computational efficiency compared to minimum-variance methods, the sub-optimal pseudo-Cℓ (PCL) power spectrum estimator is a popular tool for the analysis of high-resolution data sets. Here, we integrate mode projection into the framework of PCL power spectrum estimation. In contrast to results obtained with optimal estimators, we show that the uncorrected projection of template maps leads to biased power spectra. Based on analytical calculations, we find exact closed-form expressions for the expectation value of the bias and demonstrate that they can be recast in a form which allows a numerically efficient evaluation, preserving the favourable O( ℓ_{max} ^3 ) time complexity of PCL estimator algorithms. Using simulated data sets, we assess the scaling of the bias with various analysis parameters and demonstrate that it can be reliably removed. We conclude that in combination with mode projection, PCL estimators allow for a fast and robust computation of power spectra in the presence of systematic effects - properties in high demand for the analysis of ongoing and future large-scale structure surveys.
IMPROVING PROJECT SCHEDULE ESTIMATES USING HISTORICAL DATA AND SIMULATION
P.H. Meyer
2012-01-01
Full Text Available
ENGLISH ABSTRACT: Many projects are not completed on time or within the original budget. This is caused by uncertainty in project variables as well as the occurrence of risk events. A study was done to determine ways of measuring the risk in development projects executed by a mining company in South Africa. The main objective of the study was to determine whether historical project data would provide a more accurate means of estimating the total project duration. Original estimates and actual completion times for tasks of a number of projects were analysed and compared. The results of the study indicated that a more accurate total duration for a project could be obtained by making use of historical project data. The accuracy of estimates could be improved further by building a comprehensive project schedule database within a specific industry.
AFRIKAANSE OPSOMMING (translated): Several projects are not completed within the original schedule or budget. This is often caused by uncertainty about project variables and the occurrence of risks. A study was done to develop a method to measure risk for development projects of a mining company in South Africa. The main aim of the study was to determine whether historical project data could be used to estimate a more accurate duration for a project. The estimated durations of tasks for a number of projects were analysed and compared with the actual durations. The results of the study showed that a more accurate total duration for the project could be obtained by making use of historical project data. The accuracy can be improved further by developing and maintaining a database of project schedules for a particular industry.
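One simple way to combine historical data with simulation, as the study above suggests, is to resample historical actual-to-planned duration ratios in a Monte Carlo loop; the planned durations and ratios below are illustrative, not from the study:

```python
import numpy as np

def simulate_duration(planned, overrun_ratios, n_sims=10_000, seed=0):
    """Monte Carlo total-duration estimate: scale each task's planned
    duration by an overrun ratio drawn (with replacement) from historical
    actual/planned ratios for similar tasks. Returns (P50, P90)."""
    rng = np.random.default_rng(seed)
    planned = np.asarray(planned, dtype=float)
    ratios = np.asarray(overrun_ratios, dtype=float)
    draws = rng.choice(ratios, size=(n_sims, planned.size))
    totals = (draws * planned).sum(axis=1)
    return float(np.percentile(totals, 50)), float(np.percentile(totals, 90))

planned_days = [20, 35, 15, 30]                     # baseline task estimates
hist_ratios = [1.0, 1.1, 1.3, 0.9, 1.6, 1.2, 1.05]  # actual / planned
p50, p90 = simulate_duration(planned_days, hist_ratios)
```

Reporting a percentile (e.g., P90) rather than a single-point sum is what turns the historical data into a risk-aware schedule estimate.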
General model selection estimation of a periodic regression with a Gaussian noise
Konev, Victor; 10.1007/s10463-008-0193-1
2010-01-01
This paper considers the problem of estimating a periodic function in a continuous time regression model with an additive stationary Gaussian noise having an unknown correlation function. A general model selection procedure, based on arbitrary projective estimates, which does not require knowledge of the noise correlation function, is proposed. A non-asymptotic upper bound for the quadratic risk (oracle inequality) has been derived under mild conditions on the noise. For the Ornstein-Uhlenbeck noise the risk upper bound is shown to be uniform in the nuisance parameter. In the case of Gaussian white noise the constructed procedure has some advantages as compared with the procedure based on the least squares estimates (LSE). The asymptotic minimaxity of the estimates has been proved. The proposed model selection scheme is also extended to the estimation problem based on discrete data, applicable to the situation when high frequency sampling cannot be provided.
The Process to Estimate Economical Benefits of Six Sigma Projects
Jan Kosina
2013-07-01
Full Text Available This paper seeks to define the process for the continuous evaluation of the financial benefits during a Six Sigma project's lifetime. The financial criteria are critical success factors of a Six Sigma project. The process has been developed as part of Six Sigma project monitoring in order to estimate the proper allocation of resources, taking into account the expected project benefits as well as the evaluation of real achievements. The evaluation of the financial benefits based on quality costs is not sufficient in real life and has to be accompanied by key financial performance indicators of the business to visualize the results. The evaluation based on savings seems to be too difficult, especially for green belts. The early involvement of the finance department in the project definition, as well as ongoing evaluation, is key. The defined process has been applied to a real business environment.
Bayesian estimation of generalized exponential distribution under noninformative priors
Moala, Fernando Antonio; Achcar, Jorge Alberto; Tomazella, Vera Lúcia Damasceno
2012-10-01
The generalized exponential distribution, proposed by Gupta and Kundu (1999), is a good alternative to standard lifetime distributions such as the exponential, Weibull or gamma. Several authors have considered the problem of Bayesian estimation of the parameters of the generalized exponential distribution, assuming independent gamma priors and other informative priors. In this paper, we consider a Bayesian analysis of the generalized exponential distribution by assuming conventional noninformative prior distributions, such as the Jeffreys and reference priors, to estimate the parameters. These priors are compared with independent gamma priors for both parameters. The comparison is carried out by examining the frequentist coverage probabilities of the Bayesian credible intervals. We show that the maximal data information prior leads to an improper posterior distribution for the parameters of a generalized exponential distribution. It is also shown that the choice of a parameter of interest is very important for the reference prior. The different choices lead to different reference priors in this case. Numerical inference is illustrated for the parameters by considering data sets of different sizes and using MCMC (Markov chain Monte Carlo) methods.
A fast alternating projection method for complex frequency estimation
Andersson, Fredrik; Ivert, Per-Anders
2011-01-01
The problem of approximating a sampled function using sums of a fixed number of complex exponentials is considered. We use alternating projections between fixed rank matrices and Hankel matrices to obtain such an approximation. Convergence, convergence rates and error estimates for this technique are proven, and fast algorithms are developed. We compare the numerical results obtained with the MUSIC and ESPRIT methods.
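The alternating-projection idea can be sketched with a plain Cadzow-style iteration: alternately project a data matrix onto the set of rank-r matrices (truncated SVD) and back onto Hankel matrices (anti-diagonal averaging). This is a naive O(n^3)-per-step sketch, not the paper's fast algorithm, and the signal below is a made-up test case:

```python
import numpy as np

def cadzow(signal, rank, n_iter=50):
    """Alternating projections between rank-r matrices and Hankel
    matrices; a signal that is a sum of r complex exponentials has an
    exactly rank-r Hankel matrix, so this denoises towards that model."""
    n = len(signal)
    m = n // 2 + 1
    H = np.array([signal[i:i + n - m + 1] for i in range(m)], dtype=complex)
    for _ in range(n_iter):
        # Project onto rank-r matrices via truncated SVD.
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * s[:rank]) @ Vh[:rank]
        # Project back onto Hankel matrices by anti-diagonal averaging.
        out = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):
                out[i + j] += H[i, j]
                cnt[i + j] += 1
        out /= cnt
        H = np.array([out[i:i + n - m + 1] for i in range(m)], dtype=complex)
    return out

# One complex exponential plus noise; a rank-1 projection should recover it.
t = np.arange(64)
clean = np.exp(1j * 0.5 * t)
rng = np.random.default_rng(0)
noisy = clean + 0.1 * (rng.normal(size=64) + 1j * rng.normal(size=64))
den = cadzow(noisy, rank=1)
```

The paper's contribution lies in proving convergence of exactly this kind of scheme and in making each projection fast; the double loop here is only for clarity.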
Generalized linear model for estimation of missing daily rainfall data
Rahman, Nurul Aishah; Deni, Sayang Mohd; Ramli, Norazan Mohamed
2017-04-01
The analysis of rainfall data with no missingness is vital in various applications including climatological, hydrological and meteorological studies. The issue of missing data is a serious concern since it could introduce bias and lead to misleading conclusions. In this study, five imputation methods, including the simple arithmetic average, the normal ratio method, the inverse distance weighting method, the correlation coefficient weighting method and geographical coordinates, were used to estimate the missing data. However, these imputation methods ignore the seasonality in the rainfall dataset, which could give more reliable estimation. Thus this study aims to estimate the missing values in daily rainfall data by using a generalized linear model with gamma and Fourier series as the link function and smoothing technique, respectively. Forty years of daily rainfall data for the period from 1975 until 2014, consisting of seven stations in the Kelantan region, were selected for the analysis. The findings indicated that the imputation methods could provide more accurate estimates, based on the least mean absolute error, root mean squared error and coefficient of variation of the root mean squared error, when seasonality in the dataset is considered.
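Of the five baseline methods named above, inverse distance weighting is the easiest to sketch: a missing reading is estimated as a distance-weighted average of simultaneous readings at neighbouring stations. The coordinates and rainfall values below are invented for illustration:

```python
import math

def idw_impute(target_xy, neighbours, power=2.0):
    """Inverse distance weighting: estimate a missing daily value at the
    target station from simultaneous readings at nearby stations.
    `neighbours` is a list of ((x, y), value) pairs."""
    num = den = 0.0
    for (x, y), value in neighbours:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        w = 1.0 / d ** power        # closer stations get larger weight
        num += w * value
        den += w
    return num / den

# Missing reading at station (0, 0); three neighbours with rainfall in mm.
est = idw_impute((0.0, 0.0), [((1.0, 0.0), 10.0),
                              ((0.0, 2.0), 20.0),
                              ((2.0, 2.0), 16.0)])
```

Note that this purely spatial rule is blind to the seasonal cycle, which is precisely the gap the study's GLM-with-Fourier-series approach is meant to close.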
Maximum estimates for generalized Forchheimer flows in heterogeneous porous media
Celik, Emine; Hoang, Luan
2017-02-01
This article continues the study in [4] of generalized Forchheimer flows in heterogeneous porous media. Such flows are used to account for deviations from Darcy's law. In heterogeneous media, the derived nonlinear partial differential equation for the pressure can be singular and degenerate in the spatial variables, in addition to being degenerate for large pressure gradient. Here we obtain the estimates for the L∞-norms of the pressure and its time derivative in terms of the initial and the time-dependent boundary data. They are established by implementing De Giorgi-Moser's iteration in the context of weighted norms with the weights specifically defined by the Forchheimer equation's coefficient functions. With these weights, we prove suitable weighted parabolic Poincaré-Sobolev inequalities and use them to facilitate the iteration. Moreover, local in time L∞-bounds are combined with uniform Gronwall-type energy inequalities to obtain long-time L∞-estimates.
General Projective Synchronization and Fractional Order Chaotic Masking Scheme
Shi-Quan Shao
2008-01-01
In this paper, a fractional order chaotic masking scheme used for secure communication is introduced. Based on the general projective synchronization of two coupled fractional Chen systems, a popular masking scheme is designed. A numerical example is given to demonstrate the effectiveness of the proposed method.
W. J. Pienaar
2005-09-01
Full Text Available This article identifies the possible development benefits that can emanate from economically justified road construction projects. It shows how the once-off increase in regional income resulting from investment in road construction projects, and the recurring additional regional income resulting from the use of new or improved roads, can be estimated. The difference between a cost-benefit analysis (to determine how economically justified a project is) and a regional economic income analysis (to estimate the general economic benefits that will be developed by investment in and usage of a road) is shown. Procedures are proposed through which the once-off and recurring increases in regional income can be estimated by using multiplier and accelerator analyses respectively. Finally, guidelines are supplied on the appropriate usage of input variables in the calculation of the regional income multiplier.
Penalized maximum likelihood estimation for generalized linear point processes
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat.
Filling-Based Techniques Applied to Object Projection Feature Estimation
Quesada, Luis
2012-01-01
3D motion tracking is a critical task in many computer vision applications. Unsupervised markerless 3D motion tracking systems determine the most relevant object in the screen and then track it by continuously estimating its projection features (center and area) from the edge image and a point inside the relevant object projection (namely, inner point), until the tracking fails. Existing object projection feature estimation techniques are based on ray-casting from the inner point. These techniques present three main drawbacks: when the inner point is surrounded by edges, rays may not reach other relevant areas; as a consequence of that issue, the estimated features may greatly vary depending on the position of the inner point relative to the object projection; and finally, increasing the number of rays being casted and the ray-casting iterations (which would make the results more accurate and stable) increases the processing time to the point the tracking cannot be performed on the fly. In this paper, we anal...
Detection-Guided Fast Affine Projection Channel Estimator for Speech Applications
Yan Wu Jennifer
2007-04-01
Full Text Available In various adaptive estimation applications, such as acoustic echo cancellation within teleconferencing systems, the input signal is highly correlated speech. This, in general, leads to extremely slow convergence of the NLMS adaptive FIR estimator. As a result, for such applications, the affine projection algorithm (APA) or its low-complexity version, the fast affine projection (FAP) algorithm, is commonly employed instead of the NLMS algorithm. In such applications, the signal propagation channel may have a relatively low-dimensional impulse response structure, that is, the number m of active or significant taps within the (discrete-time modelled) channel impulse response is much less than the overall tap length n of the channel impulse response. For such cases, we investigate the inclusion of an active-parameter detection-guided concept within the fast affine projection FIR channel estimator. Simulation results indicate that the proposed detection-guided fast affine projection channel estimator has improved convergence speed and leads to better steady-state performance than the standard fast affine projection channel estimator, especially in the important case of highly correlated speech input signals.
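The NLMS baseline that APA/FAP improve upon is short enough to sketch; the channel, signal lengths and step size below are illustrative, and a white-noise input is used (for correlated speech, convergence would be much slower, which is the abstract's motivation):

```python
import numpy as np

def nlms(x, d, n_taps=8, mu=0.5, eps=1e-6):
    """Normalised LMS adaptive FIR channel estimator: update the tap
    weights from the a-priori error, normalised by input energy.
    Returns the final tap-weight estimate."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-7]]
        e = d[n] - w @ u                    # a-priori estimation error
        w += mu * e * u / (u @ u + eps)     # normalised gradient step
    return w

# Identify a known 8-tap channel from a white-noise input.
rng = np.random.default_rng(0)
h = rng.normal(size=8)                      # "true" channel (illustrative)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)]              # channel output observed as d[n]
w_hat = nlms(x, d)
```

The APA generalises the single-vector update above to a projection onto the last few input vectors at once, which is what restores convergence speed for correlated inputs.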
Projected metastable Markov processes and their estimation with observable operator models
Wu, Hao, E-mail: hao.wu@fu-berlin.de; Prinz, Jan-Hendrik, E-mail: jan-hendrik.prinz@fu-berlin.de; Noé, Frank, E-mail: frank.noe@fu-berlin.de [DFG Research Center Matheon, Free University Berlin, Arnimallee 6, 14195 Berlin (Germany)
2015-10-14
The determination of kinetics of high-dimensional dynamical systems, such as macromolecules, polymers, or spin systems, is a difficult and generally unsolved problem — both in simulation, where the optimal reaction coordinate(s) are generally unknown and are difficult to compute, and in experimental measurements, where only specific coordinates are observable. Markov models, or Markov state models, are widely used but suffer from the fact that the dynamics on a coarsely discretized state space are no longer Markovian, even if the dynamics in the full phase space are. The recently proposed projected Markov models (PMMs) are a formulation that provides a description of the kinetics on a low-dimensional projection without making the Markovianity assumption. However, as yet no general way of estimating PMMs from data has been available. Here, we show that the observed dynamics of a PMM can be exactly described by an observable operator model (OOM) and derive a PMM estimator based on the OOM learning.
A modified EM algorithm for estimation in generalized mixed models.
Steele, B M
1996-12-01
Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
Software project effort estimation foundations and best practice guidelines for success
Trendowicz, Adam
2014-01-01
Software effort estimation is one of the oldest and most important problems in software project management, and thus today there are a large number of models, each with its own unique strengths and weaknesses in general, and even more importantly, in relation to the environment and context in which it is to be applied. Trendowicz and Jeffery present a comprehensive look at the principles of software effort estimation and support software practitioners in systematically selecting and applying the most suitable effort estimation approach. Their book not only presents what approach to take and how
A General Model for Cost Estimation in an Exchange
Benzion Barlev
2014-03-01
Full Text Available Current Generally Accepted Accounting Principles (GAAP) state that the cost of an asset acquired for cash is the fair value (FV) of the amount surrendered, and that of an asset acquired in a non-monetary exchange is the FV of the asset surrendered or, if it is more “clearly evident,” the FV of the acquired asset. The measurement method prescribed for a non-monetary exchange ignores valuable information about the “less clearly evident” asset. Thus, we suggest that the FV in any exchange be measured by the weighted average of the exchanged assets’ FV estimations, where the weights are the inverse of the variances’ estimations. This alternative valuation process accounts for the uncertainty involved in estimating the FV of each of the assets in the exchange. The proposed method suits all types of exchanges: monetary and non-monetary. In a monetary transaction, the weighted average equals the cash paid because the variance of its FV is nil.
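The inverse-variance weighting proposed in the abstract above can be sketched as follows (a hypothetical helper; the function name and the zero-variance handling for the monetary leg are assumptions made for illustration):

```python
def exchange_fair_value(fv_estimates, variances):
    """Weighted average of the exchanged assets' fair-value estimates,
    with weights equal to the inverse of the variance estimates.
    A zero variance (the monetary leg) makes that FV exact, so it is
    returned directly, matching the cash-paid result in the abstract."""
    for fv, var in zip(fv_estimates, variances):
        if var == 0:
            return fv
    weights = [1.0 / var for var in variances]
    total = sum(weights)
    return sum(w * fv for w, fv in zip(weights, fv_estimates)) / total
```

For example, two FV estimates of 100 and 110 with variances 4 and 1 yield (0.25·100 + 1·110) / 1.25 = 108.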
Discriminating Projections for Estimating Face Age in Wild Images
Tokola, Ryan A [ORNL; Bolme, David S [ORNL; Ricanek, Karl [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL
2014-01-01
We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.
Nonconforming local projection stabilization for generalized Oseen equations
Yan-hong BAI; Min-fu FENG; Chuan-long WANG
2010-01-01
A new method of nonconforming local projection stabilization for the generalized Oseen equations is proposed by a nonconforming inf-sup stable element pair for approximating the velocity and the pressure. The method has several attractive features. It adds a local projection term only on the sub-scale (H ≥ h). The stabilized term is simple compared with the residual-free bubble element method. The method can handle the influence of strong convection. The numerical results agree with the theoretical expectations very well.
Methods for cost estimation in software project management
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which the processes used in the software development field have changed makes it very difficult to forecast the overall costs for a software project. Many researchers have considered this task unachievable, but there is a group of scientists for whom this task can be solved using already known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic proposal for a mathematical model of genetic programming is made, including a description of the chosen fitness function and chromosome representation. The perspective of the model described is linked with the current reality of software development, taking as its basis the software product life cycle and the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product lifecycle, it was concluded that estimation models should be adapted to new technologies and emerging systems and that they depend largely on the chosen software development method.
Generalized projection retrieval of dispersion scans for ultrashort pulse characterization
Miranda, Miguel; Guo, Chen; Harth, Anne; Louisy, Maite; Neoricic, Lana; L'Huillier, Anne; Arnold, Cord L
2016-01-01
We present a retrieval algorithm based on generalized projections for ultrashort pulse characterization using dispersion scan (d-scan). The new algorithm is tested on several simulated cases and in two different experimental cases in the few-cycle regime. The proposed algorithm is much faster, leading to a drastic reduction of retrieval times, but is less robust than the standard algorithm in the retrieval of noisy d-scan traces.
Doubly robust and multiple-imputation-based generalized estimating equations.
Birhanu, Teshome; Molenberghs, Geert; Sotto, Cristina; Kenward, Michael G
2011-03-01
Generalized estimating equations (GEE), proposed by Liang and Zeger (1986), provide a popular method to analyze correlated non-Gaussian data. When data are incomplete, the GEE method suffers from its frequentist nature and inferences under this method are valid only under the strong assumption that the missing data are missing completely at random. When response data are missing at random, two modifications of GEE can be considered, based on inverse-probability weighting or on multiple imputation. The weighted GEE (WGEE) method involves weighting observations by the inverse of their probability of being observed. Imputation methods involve filling in missing observations with values predicted by an assumed imputation model, multiple times. The so-called doubly robust (DR) methods involve both a model for the weights and a predictive model for the missing observations given the observed ones. To yield consistent estimates, WGEE needs correct specification of the dropout model while imputation-based methodology needs a correctly specified imputation model. DR methods need correct specification of either the weight or the predictive model, but not necessarily both. Focusing on incomplete binary repeated measures, we study the relative performance of the singly robust and doubly robust versions of GEE in a variety of correctly and incorrectly specified models using simulation studies. Data from a clinical trial in onychomycosis further illustrate the method.
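The inverse-probability weighting idea behind WGEE can be illustrated in its simplest form, estimating a mean from incompletely observed responses (a minimal sketch of the weighting principle only, not the paper's WGEE estimator; all names are illustrative):

```python
import numpy as np

def ipw_mean(y, observed, p_obs):
    """Inverse-probability-weighted (Hajek-style) mean of an incompletely
    observed outcome: each observed y is weighted by 1 / Pr(observed)."""
    w = observed / p_obs                     # weight 0 for missing entries
    return np.sum(w * np.where(observed, y, 0.0)) / np.sum(w)
```

Units with a low probability of being observed receive larger weights, so the observed cases stand in for similar cases that went missing; consistency requires the observation-probability model to be correctly specified, exactly as the abstract states for WGEE.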
Error-space estimate method for generalized synergic target tracking
Ming CEN; Chengyu FU; Ke CHEN; Xingfa LIU
2009-01-01
To improve the tracking accuracy and stability of an optic-electronic target tracking system, the concept of a generalized synergic target and an algorithm named the error-space estimate method are presented. In this algorithm, the motion of the target is described by guide data and guide errors, and the maneuver of the target is separated into guide data and guide errors to reduce the maneuver level. State estimation is then implemented in the target state-space and the error-space respectively, and the prediction data of the target position are acquired by synthesizing the filtering data from the target state-space according to the kinematic model and the prediction data from the error-space according to the guide error model. Differing from the typical multi-model method, the kinematic and guide error models work concurrently rather than switching between models. Experimental results show that the performance of the algorithm is better than the Kalman filter and the strong tracking filter at the same maneuver level.
Generalized estimators of avian abundance from count survey data
Royle, J. A.
2004-01-01
Full Text Available I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be considered, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
A projection-based approach to general-form Tikhonov regularization
Kilmer, Misha E.; Hansen, Per Christian; Espanol, Malena I.
2007-01-01
We present a projection-based iterative algorithm for computing general-form Tikhonov regularized solutions to the problem min_x ||Ax - b||_2^2 + lambda^2 ||Lx||_2^2, where the regularization matrix L is not the identity. Our algorithm is designed for the common case where lambda is not known a priori. It is based on a joint bidiagonalization algorithm and is appropriate for large-scale problems when it is computationally infeasible to transform the regularized problem to standard form. By considering the projected problem, we show how estimates of the corresponding optimal regularization parameter can...
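For small problems, the general-form Tikhonov solution min_x ||Ax - b||_2^2 + lambda^2 ||Lx||_2^2 can be computed directly by stacking the two terms into one least-squares problem (a dense reference sketch only; the paper's joint-bidiagonalization algorithm exists precisely because this direct approach is infeasible at large scale):

```python
import numpy as np

def tikhonov_general(A, b, L, lam):
    """Dense reference solution of the general-form Tikhonov problem
    min_x ||A x - b||_2^2 + lam^2 ||L x||_2^2, obtained by stacking
    A over lam*L and solving a single least-squares problem."""
    K = np.vstack([A, lam * L])                      # stacked operator
    rhs = np.concatenate([b, np.zeros(L.shape[0])])  # zero target for L x
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x
```

The solution satisfies the normal equations (AᵀA + lambda² LᵀL) x = Aᵀb, which is a convenient correctness check.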
Generalized and synthetic regression estimators for randomized branch sampling
David L. R. Affleck; Timothy G. Gregoire
2015-01-01
In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...
Project management under uncertainty beyond beta: The generalized bicubic distribution
José García Pérez
2016-01-01
Full Text Available The beta distribution has traditionally been employed in the PERT methodology and generally used for modeling bounded continuous random variables based on expert’s judgment. The impossibility of estimating four parameters from the three values provided by the expert when the beta distribution is assumed to be the underlying distribution has been widely debated. This paper presents the generalized bicubic distribution as a good alternative to the beta distribution since, when the variance depends on the mode, the generalized bicubic distribution approximates the kurtosis of the Gaussian distribution better than the beta distribution. In addition, this distribution presents good properties in the PERT methodology in relation to moderation and conservatism criteria. Two empirical applications are presented to demonstrate the adequateness of this new distribution.
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
Geeta Nagpal
2013-02-01
Full Text Available Due to the intangible nature of “software”, accurate and reliable software effort estimation is a challenge in the software industry. Very accurate estimates of software development effort cannot be expected because of the inherent uncertainty in software development projects and the complex and dynamic interaction of factors that impact software development. Heterogeneity exists in software engineering datasets because data are made available from diverse sources. It can be reduced by defining certain relationships between the data values by classifying them into different clusters. This study focuses on how the combination of clustering and regression techniques can reduce the potential problems in predictive efficiency caused by the heterogeneity of the data. A clustered approach creates subsets of data having a degree of homogeneity that enhances prediction accuracy. It was also observed in this study that ridge regression performs better than the other regression techniques used in the analysis.
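The cluster-then-regress idea can be sketched as a plain k-means step followed by a per-cluster ridge fit (a generic NumPy sketch of the approach, not the study's exact pipeline; all names and the simple k-means implementation are illustrative):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: returns labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if (lbl == j).any() else C[j]
                      for j in range(k)])
    return lbl, C

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression with an unpenalized intercept."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    I = np.eye(Xb.shape[1]); I[-1, -1] = 0
    return np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)

def clustered_ridge(X, y, k=2, alpha=1.0):
    """Fit one ridge model per cluster of the heterogeneous data."""
    lbl, C = kmeans(X, k)
    models = {j: ridge_fit(X[lbl == j], y[lbl == j], alpha) for j in range(k)}
    return models, C

def predict(models, C, X):
    """Route each point to its nearest centroid's ridge model."""
    lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.array([Xb[i] @ models[lbl[i]] for i in range(len(X))])
```

When the data really do come from distinct regimes, each cluster's model fits a homogeneous subset, which is the prediction-accuracy gain the study reports.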
Shin, Yoonseok
2015-01-01
Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.
Improved measurements of RNA structure conservation with generalized centroid estimators.
Okada, Yohei; Saito, Yutaka; Sato, Kengo; Sakakibara, Yasubumi
2011-01-01
Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases including the genome-wide search and incur high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate the robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high quality structural aligner, for which twofold expensive computational time is required on average. We conclude that our methods are more suitable for genome-wide alignments which are of low quality from the point of view on secondary structures than the original SCI and BPD.
Bregman divergence as general framework to estimate unnormalized statistical models
Gutmann, Michael
2012-01-01
We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
Fast Conceptual Cost Estimating of Aerospace Projects Using Historical Information
Butts, Glenn
2007-01-01
Accurate estimates can be created in less than a minute by applying powerful techniques and algorithms to create an Excel-based parametric cost model. In five easy steps you will learn how to normalize your company's historical cost data to the new project parameters. This paper provides a complete, easy-to-understand, step-by-step how-to guide. Such a guide does not seem to currently exist. Over 2,000 hours of research, data collection, and trial and error, and thousands of lines of Excel Visual Basic for Applications (VBA) code were invested in developing these methods. While VBA is not required to use this information, it increases the power and aesthetics of the model. Implementing all of the steps described, while not required, will increase the accuracy of the results.
Some asymptotic results on density estimators by wavelet projections
Varron, Davit
2012-01-01
Let $(X_i)_{i\\geq 1}$ be an i.i.d. sample on $\\RRR^d$ having density $f$. Given a real function $\\phi$ on $\\RRR^d$ with finite variation and given an integer valued sequence $(j_n)$, let $\\fn$ denote the estimator of $f$ by wavelet projection based on $\\phi$ and with multiresolution level equal to $j_n$. We provide exact rates of almost sure convergence to 0 of the quantity $\\sup_{x\\in H}\\mid \\fn(x)-\\EEE(\\fn)(x)\\mid$, when $n2^{-dj_n}/\\log n \\rar \\infty$ and $H$ is a given hypercube of $\\RRR^d$. We then show that, if $n2^{-dj_n}/\\log n \\rar c$ for a constant $c>0$, then the quantity $\\sup_{x\\in H}\\mid \\fn(x)-f\\mid$ almost surely fails to converge to 0.
Projected estimators for robust semi-supervised classification
Krijthe, Jesse H.; Loog, Marco
2017-01-01
For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the procedure proposed in this work does not rely on assumptions that are not intrinsic to the classifier at hand. Using a projection of the supervised estimate onto a set of constraints imposed by the unlabeled data, we find we can safely improve over the supervised solution in terms of this quadratic loss. More specifically, we prove that, measured on the labeled and unlabeled training data, this semi-supervised procedure never gives a lower quadratic loss than the supervised alternative. To our knowledge this is the first approach that offers such strong, albeit conservative, guarantees for improvement over...
AN ENHANCED MODEL TO ESTIMATE EFFORT, PERFORMANCE AND COST OF THE SOFTWARE PROJECTS
M. Pauline
2013-04-01
Full Text Available The Authors have proposed a model that first captures the fundamentals of software metrics in phase 1, consisting of three primitive primary software engineering metrics: person-months (PM), function points (FP), and lines of code (LOC). Phase 2 consists of the proposed function point, which is obtained by grouping the adjustment factors to simplify the process of adjustment and to ensure more consistency in the adjustments. In the proposed method, fuzzy logic is used for quantifying the quality of requirements and is added as one of the adjustment factors; thus a fuzzy-based approach for the Enhanced General System Characteristics to estimate effort of software projects using productivity has been obtained. Phase 3 takes the calculated function point from this work and gives it as input to the static single-variable model (i.e., to Intermediate COCOMO and COCOMO II) for cost estimation. The Authors have tailored the cost factors in Intermediate COCOMO, and both cost and scale factors are tailored in COCOMO II, to suit the individual development environment, which is very important for the accuracy of the cost estimates. The software performance indicators, namely project duration, schedule predictability, requirements completion ratio, and post-release defect density, are also measured for the software projects in this work. A comparative study for effort, performance measurement, and cost estimation of software projects is done between the existing model and the authors' proposed work. Thus this work analyzes the interactional process through which the estimation tasks were collectively accomplished.
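The static single-variable step in phase 3 follows the standard Intermediate COCOMO form, effort = a · KLOC^b · EAF. A minimal sketch using Boehm's published mode constants (the paper's tailored cost factors are not reproduced here; this is the generic model only):

```python
# Boehm's Intermediate COCOMO mode constants (a, b): effort = a * KLOC**b * EAF
COCOMO_INTERMEDIATE = {
    "organic": (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (2.8, 1.20),
}

def effort_person_months(kloc, mode="organic", eaf=1.0):
    """Nominal effort in person-months; EAF is the product of the
    cost-driver multipliers, which the paper tailors to its environment."""
    a, b = COCOMO_INTERMEDIATE[mode]
    return a * kloc ** b * eaf
```

For example, a 10 KLOC organic project with a neutral EAF of 1.0 gives roughly 3.2 · 10^1.05 ≈ 36 person-months.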
Breckinridge Project, initial effort. Report VIII. Capital cost estimate
None
1982-01-01
The major objective of the Initial Effort for the Breckinridge Project is to develop engineering to the point where realistic economics for the construction and operation of the plant can be made. The plant is designed to process 23,000 tons per day of run-of-mine coal to produce a nominal 50,000 barrels per day of liquid products using the H-COAL and standard industry technology. The plant will be located in Breckinridge County, Kentucky. Considerable preliminary engineering has been performed for this estimate. This work uses a single-point design based on the Process Demonstration Unit (PDU) data from run 5, period 29 of the pilot plant. The design basis is discussed in Volume II of this report. Many aspects of plant construction and cost have been considered that were not taken into account in the past studies. Ashland and Bechtel believe the accuracy of the capital estimate to be +19%, -17%. This accuracy is postulated on January 1981 dollars, the as-spent dollar amount naturally depending upon the inflation rate through the construction period. Considerable attention has been devoted to reliability of operation, and redundant equipment has been used where it was deemed necessary to assure reasonable onstream time. This equipment is included in the capital estimate. The capital is summarized by total plant cost on Table 1. The subtotal plant cost, excluding contingency, fee, and adjustment is $2,710,940,000. Adding the contingency, fee and adjustment, the total depreciable cost of the plant is $3,167,430,000. Adding the working capital to the total plant cost results in total capital requirements of $3,258,430,000 as shown on the individual plant cost summary Table 2.
Liu Gang
2009-01-01
Full Text Available By using the methods of linear algebra and matrix inequality theory, we obtain the characterization of admissible estimators in the general multivariate linear model with respect to an inequality-restricted parameter set. In the classes of homogeneous and general linear estimators, the necessary and sufficient conditions that the estimators of the regression coefficient function are admissible are established.
L^p estimates for the ∂̄-equation on generalized complex ellipsoids
王伟
2000-01-01
The estimate of a holomorphic supporting function for the generalized complex ellipsoid is given. This domain is not decoupled. By using this estimate, the best possible L^p estimates for the ∂̄-equation and some results of function theory on generalized complex ellipsoids are proved.
Tsai, Miao-Yu
2017-04-15
The concordance correlation coefficient (CCC) is a commonly accepted measure of agreement between two observers for continuous responses. This paper proposes a generalized estimating equations (GEE) approach allowing dependency between repeated measurements over time to assess intra-agreement for each observer and inter- and total agreement among multiple observers simultaneously. Furthermore, the indices of intra-, inter-, and total agreement through variance components (VC) from an extended three-way linear mixed model (LMM) are also developed with consideration of the correlation structure of longitudinal repeated measurements. Simulation studies are conducted to compare the performance of the GEE and VC approaches for repeated measurements from longitudinal data. An application of an optometric conformity study is used for illustration. In conclusion, the GEE approach, allowing flexibility in model assumptions and correlation structures of repeated measurements, gives satisfactory results with small mean square errors and nominal 95% coverage rates for large data sets, and when the assumption of the relationship between variances and covariances for the extended three-way LMM holds, the VC approach performs outstandingly well for all sample sizes. Copyright © 2017 John Wiley & Sons, Ltd.
Study on Top-Down Estimation Method of Software Project Planning
ZHANG Jun-guang; LÜ Ting-jie; ZHAO Yu-mei
2006-01-01
This paper studies a new software project planning method under some actual project data in order to make software project plans more effective. From the perspective of system theory, our new method regards a software project plan as an associative unit for study. During a top-down estimation of a software project, Program Evaluation and Review Technique (PERT) method and analogy method are combined to estimate its size, then effort estimation and specific schedules are obtained according to distributions of the phase effort. This allows a set of practical and feasible planning methods to be constructed. Actual data indicate that this set of methods can lead to effective software project planning.
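The PERT step in the top-down estimation above uses the classical three-point formula (O + 4M + P) / 6, which can be sketched directly (the generic formula only, not the paper's combined PERT-plus-analogy procedure):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classical PERT three-point estimate: mean (O + 4M + P) / 6 and
    the conventional standard deviation (P - O) / 6."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std = (pessimistic - optimistic) / 6
    return mean, std
```

For example, size estimates of 4, 6, and 14 KLOC give an expected size of 7 KLOC with a standard deviation of about 1.67.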
Parameter Estimation for a Computable General Equilibrium Model
Arndt, Channing; Robinson, Sherman; Tarp, Finn
. Second, it permits incorporation of prior information on parameter values. Third, it can be applied in the absence of copious data. Finally, it supplies measures of the capacity of the model to reproduce the historical record and the statistical significance of parameter estimates. The method is applied...
Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik
2008-01-01
estimate of the associated uncertainty. This uncertainty arises from incomplete process representation, uncertainty in initial conditions, input, output and parameter error. The generalized likelihood uncertainty estimation (GLUE) framework was one of the first attempts to represent prediction uncertainty...
Estimating the generalized concordance correlation coefficient through variance components.
Carrasco, Josep L; Jover, Lluís
2003-12-01
The intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC) are two of the most popular measures of agreement for variables measured on a continuous scale. Here, we demonstrate that ICC and CCC are the same measure of agreement estimated in two ways: by the variance components procedure and by the moment method. We propose estimating the CCC using variance components of a mixed effects model, instead of the common method of moments. With the variance components approach, the CCC can easily be extended to more than two observers, and adjusted using confounding covariates, by incorporating them in the mixed model. A simulation study is carried out to compare the variance components approach with the moment method. The importance of adjusting by confounding covariates is illustrated with a case example.
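The moment-method CCC that the paper compares against can be computed directly. The formula below is Lin's standard moment estimator for two observers; the observation values are invented for illustration, and the variance-components extension to multiple observers and covariates described in the abstract is not reproduced here.

```python
# Hedged sketch: the moment-method CCC for two observers (Lin's estimator),
# the quantity the paper re-derives through variance components of a mixed
# model. Data are made up for illustration.

def ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # agreement = precision (correlation) x accuracy (bias correction)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

obs1 = [10.1, 11.4, 9.8, 12.0, 10.7]
obs2 = [10.3, 11.0, 9.9, 12.3, 10.6]
print(round(ccc(obs1, obs2), 3))  # close to 1 when observers agree
```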
Generalized watermark attack based on watermark estimation and perceptual remodulation
Voloshynovskyy, Svyatoslav; Pereira, Shelby; Herrigel, Alexander; Baumgartner, Nazanin; Pun, Thierry
2000-01-01
Digital image watermarking has become a popular technique for authentication and copyright protection. For verifying the security and robustness of watermarking algorithms, specific attacks have to be applied to test the proposed algorithms. In contrast to the known Stirmark attack, which degrades the quality of the image while destroying the watermark, this paper presents a new approach which is based on the estimation of a watermark and the exploitation of the properties of Human Visual Sys...
Estimation of the reference values of coefficients of confidence in innovative project success
L.K. Hlinenko
2015-09-01
Full Text Available The aim of the article. The probability of making successful innovative decisions in the early stages of the new product development (NPD) process can be increased by abandoning further development of decisions that are clearly doomed to failure. Such decisions can be identified by certain values of critical characteristics of the innovative system. The ultimate impact of features of the innovative system, e.g. the state and characteristics of the new product, innovator, market, NPD process etc., is generally acknowledged, though no methodology is available for practically taking the values of these factors into account when estimating project feasibility and risks. To a great extent this can be explained by the variety of indicators by which the influence of these factors is quantitatively appraised. The purpose of this research was to substantiate the applicability of coefficients (equivalents) of project success confidence for feasibility estimation and selection of innovative projects in the early NPD stages, by comparing the values of these coefficients for the actual characteristics of the innovative system of a given project with a priori confidence coefficients for successful projects, and to develop a methodology for determining the a priori values of these confidence coefficients for different forms of empirical data on the impact of success-driving factors on project results. The results of the analysis. An advantage of the confidence-coefficient method for estimating the feasibility of, and selecting, innovative projects is that various quantitative indexes (probability of success, failure, risk etc.) and qualitative expert statements, as well as quantitative statistical data on the probable influence of the critical factors on project success, can all be recalculated into the value of the confidence coefficient. Extending the estimation of confidence coefficients to the revealed
Wu, Xiangjun; Lu, Hongtao
2010-08-01
In this Letter, generalized projective synchronization (GPS) between two different complex dynamical networks with delayed coupling is investigated. Two complex networks are distinct if they have different node dynamics, different numbers of nodes, or different topological structures. Using an adaptive control scheme, a sufficient synchronization criterion for this GPS is derived based on the LaSalle invariance principle. Three corollaries are also obtained. It is noticed that the synchronization speed depends sensitively on the adjustable positive constants μ. Furthermore, the coupling configuration matrix need not be symmetric or irreducible, and the inner coupling matrix need not be symmetric. In addition, the node dynamics need not satisfy the very strong and conservative uniform Lipschitz condition. Numerical simulations further demonstrate the feasibility and effectiveness of the theoretical results.
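The core idea of generalized projective synchronization, driving the response state toward a scaled copy of the drive state, can be shown with a toy simulation. This sketch uses two single scalar nodes with simple linear dynamics and a fixed feedback gain; the paper's adaptive gains, delayed coupling, and network topology are not reproduced, and all constants are invented.

```python
# Hedged toy sketch of GPS: response state y is steered toward alpha * x.
# Linear node dynamics and a fixed gain stand in for the paper's adaptive
# scheme; explicit Euler integration.

alpha, gain, dt = 2.0, 5.0, 0.001
x, y = 1.0, -0.5
for _ in range(10_000):              # integrate 10 time units
    e = y - alpha * x                # projective synchronization error
    x += dt * (-x)                   # drive node dynamics
    y += dt * (-y - gain * e)        # response node with feedback control
print(abs(y - alpha * x))            # error decays toward zero
```

Here the error obeys e' = -(1 + gain) e, so it contracts exponentially; the paper's contribution is establishing such contraction for whole networks under much weaker assumptions.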
MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL
Vinod Kumar
2010-01-01
Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
Generalization of Supervised Learning for Binary Mask Estimation
May, Tobias; Gerkmann, Timo
2014-01-01
This paper addresses the problem of speech segregation by estimating the ideal binary mask (IBM) from noisy speech. Two methods are compared: one is a supervised learning approach that incorporates a priori knowledge about the feature distribution observed during training; the second relies solely on frame-based speech presence probability (SPP) estimation and therefore does not depend on the acoustic conditions seen during training. We investigate the influence of mismatches between the acoustic conditions used for training and testing on the IBM estimation performance...
Penalized maximum likelihood estimation for generalized linear point processes
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we...
Breckinridge Project, initial effort. Report IX. Operating cost estimate
None
1982-01-01
Operating costs are normally broken into three major categories: variable costs including raw materials, annual catalyst and chemicals, and utilities; semi-variable costs including labor and labor related cost; and fixed or capital related charges. The raw materials and utilities costs are proportional to production; however, a small component of utilities cost is independent of production. The catalyst and chemicals costs are also normally proportional to production. Semi-variable costs include direct labor, maintenance labor, labor supervision, contract maintenance, maintenance materials, payroll overheads, operation supplies, and general overhead and administration. Fixed costs include local taxes, insurance and the time value of the capital investment. The latter charge often includes the investor's anticipated return on investment. In determining operating costs for financial analysis, return on investment (ROI) and depreciation are not treated as cash operating costs. These costs are developed in the financial analysis; the annual operating cost determined here omits ROI and depreciation. Project Annual Operating Costs are summarized in Table 1. Detailed supporting information for the cost elements listed below is included in the following sections: Electrical, catalyst and chemicals, and salaries and wages.
Abran, Alain
2015-01-01
Software projects are often late and over-budget, and this leads to major problems for software customers. Clearly, there is a serious issue in estimating a realistic software project budget. Furthermore, generic estimation models cannot be trusted to provide credible estimates for projects as complex as software projects. This book presents a number of examples using data collected over the years from various organizations building software. It also presents an overview of the not-for-profit organization which collects data on software projects, the International Software Benchmarking Stan
Estimation of Phylogeny Using a General Markov Model
John Robinson
2005-01-01
Full Text Available The non-homogeneous model of nucleotide substitution proposed by Barry and Hartigan (Stat Sci, 2: 191-210) is the most general model of DNA evolution assuming an independent and identical process at each site. We present a computational solution for this model, and use it to analyse two data sets, each violating one or more of the assumptions of stationarity, homogeneity, and reversibility. The log likelihood values returned by programs based on the F84 model (J Mol Evol, 29: 170-179), the general time reversible model (J Mol Evol, 20: 86-93), and Barry and Hartigan's model are compared to determine the validity of the assumptions made by the first two models. In addition, we present a method for assessing whether sequences have evolved under reversible conditions and discover that this is not so for the two data sets. Finally, we determine the most likely tree under the three models of DNA evolution and compare these with the one favoured by the tests for symmetry.
Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.
Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis
2015-04-01
Governments and donors are investing considerable resources on HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales-up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues preventing causal inference and thus making the estimation of economies of scale complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling-up HIV prevention in low and middle income countries.
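The quantity being estimated, the cost elasticity with respect to scale, can be illustrated with the simplest possible diagnostic. The sketch below is a naive OLS fit of log cost on log scale; an elasticity below 1 means average cost falls as provision scales up. The paper's whole point is that scale is endogenous, so it uses system GMM and fixed-effects IV instead of this naive fit; the data here are invented.

```python
import math

# Hedged sketch: log-log OLS estimate of the cost elasticity of scale.
# This ignores the endogeneity the paper corrects for; it only shows the
# quantity (elasticity < 1 => economies of scale). Figures are invented.

reach = [500, 1000, 2000, 4000, 8000]        # people reached per NGO
cost = [6000, 10000, 17000, 29000, 50000]    # total programme cost

lx = [math.log(q) for q in reach]
ly = [math.log(c) for c in cost]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((a - mx) * (c - my) for a, c in zip(lx, ly))
     / sum((a - mx) ** 2 for a in lx))
print(round(b, 3))  # elasticity below 1 -> economies of scale
```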
Estimation of delays in generalized asynchronous Boolean networks.
Das, Haimabati; Layek, Ritwik Kumar
2016-10-20
A new generalized asynchronous Boolean network (GABN) model has been proposed in this paper. This continuous-time discrete-state model captures the biological reality of cellular dynamics without compromising the computational efficiency of the Boolean framework. The GABN synthesis procedure is based on the prior knowledge of the logical structure of the regulatory network, and the experimental transcriptional parameters. The novelty of the proposed methodology lies in considering different delays associated with the activation and deactivation of a particular protein (especially the transcription factors). A few illustrative examples of some well-studied network motifs have been provided to explore the scope of using the GABN model for larger networks. The GABN model of the p53-signaling pathway in response to γ-irradiation has also been simulated in the current paper to provide an indirect validation of the proposed schema.
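The novelty the abstract highlights, distinct delays for activation versus deactivation, can be shown on a one-edge toy network. In the sketch below, B's Boolean update rule simply copies regulator A, but switching on takes 3 time units and switching off takes 1; the delays, the trajectory of A, and the single-edge motif are all invented, and the sketch does not cancel a pending transition if the input reverses, unlike a full GABN.

```python
# Hedged toy of the GABN idea: asymmetric activation/deactivation delays
# on a single regulatory edge A -> B. All parameters are invented.

T_ON, T_OFF = 3, 1             # activation / deactivation delays
A = [1] * 5 + [0] * 5          # input trajectory of regulator A
B, pending = [0], None         # B's state; pending = (fire_time, new_state)

for t in range(1, len(A)):
    target = A[t - 1]          # B's Boolean update rule: copy A
    if target != B[-1] and pending is None:
        pending = (t - 1 + (T_ON if target else T_OFF), target)
    if pending is not None and t >= pending[0]:
        B.append(pending[1])
        pending = None
    else:
        B.append(B[-1])

print(B)  # [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]: slow rise, fast fall
```

A turns on at t=0 but B only rises at t=3; A turns off at t=5 and B falls one step later, which is exactly the asymmetry a synchronous Boolean network cannot express.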
dglars: An R Package to Estimate Sparse Generalized Linear Models
Luigi Augugliaro
2014-09-01
Full Text Available dglars is a publicly available R package that implements the method proposed in Augugliaro, Mineo, and Wit (2013), developed to study the sparse structure of a generalized linear model. This method, called dgLARS, is based on a differential geometrical extension of the least angle regression method proposed in Efron, Hastie, Johnstone, and Tibshirani (2004). The core of the dglars package consists of two algorithms implemented in Fortran 90 to efficiently compute the solution curve: a predictor-corrector algorithm, proposed in Augugliaro et al. (2013), and a cyclic coordinate descent algorithm, proposed in Augugliaro, Mineo, and Wit (2012). The latter algorithm, as shown here, is significantly faster than the predictor-corrector algorithm. For comparison purposes, we have implemented both algorithms.
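The generic cyclic coordinate descent idea named in the abstract can be sketched independently of dgLARS's differential-geometric machinery (which is not reproduced here). The sketch below applies plain cyclic coordinate descent to an l1-penalized least-squares problem, the Gaussian special case, on a tiny orthogonal design; data and penalty are invented.

```python
# Hedged sketch: cyclic coordinate descent for l1-penalized least squares
# (Gaussian case only). This is NOT the dgLARS geometry, just the generic
# algorithm the abstract names. Toy data, orthogonal columns.

def soft_threshold(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, sweeps=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta

X = [[1, 0], [0, 1], [-1, 0], [0, -1]]
y = [2.0, 0.1, -2.0, -0.1]
print(lasso_cd(X, y, lam=0.2))  # large coefficient kept, small one set to 0
```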
Generalized equations for estimating DXA percent fat of diverse young women and men: The Tiger Study
Popular generalized equations for estimating percent body fat (BF%) developed with cross-sectional data are biased when applied to racially/ethnically diverse populations. We developed accurate anthropometric models to estimate dual-energy x-ray absorptiometry BF% (DXA-BF%) that can be generalized t...
An Estimator of Heavy Tail Index through the Generalized Jackknife Methodology
Weiqi Liu
2014-01-01
Full Text Available In practice, sometimes the data can be divided into several blocks but only a few of the largest observations within each block are available to estimate the heavy tail index. To address this problem, we propose a new class of estimators through the Generalized Jackknife methodology based on Qi's estimator (2010). These estimators are proved to be asymptotically normal under suitable conditions. Compared to Hill's estimator and Qi's estimator, the new estimator has better asymptotic efficiency in terms of the minimum mean squared error for a wide range of the second-order shape parameters. For finite samples, the new estimator still compares favorably to Hill's estimator and Qi's estimator, providing stable sample paths as a function of the number of blocks, smaller estimation bias, and smaller MSE.
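The baseline the paper benchmarks against, Hill's estimator, is easy to state: average the log-exceedances of the top k order statistics over the (k+1)-th largest value. The sketch below applies it to simulated Pareto data with a known tail index; Qi's block-based estimator and the generalized-jackknife combination are not reproduced, and the sample size and k are arbitrary choices.

```python
import random, math

# Hedged sketch: the classical Hill estimator of the tail index.
# Pareto(alpha=2) samples, so the true tail index is 1/alpha = 0.5.

random.seed(7)
alpha = 2.0
data = sorted(random.paretovariate(alpha) for _ in range(5000))

k = 200                        # number of upper order statistics used
top = data[-k:]
threshold = data[-k - 1]       # (k+1)-th largest observation
hill = sum(math.log(x) for x in top) / k - math.log(threshold)
print(round(hill, 3))          # should be near 0.5
```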
A generalized model for estimating the energy density of invertebrates
James, Daniel A.; Csargo, Isak J.; Von Eschen, Aaron; Thul, Megan D.; Baker, James M.; Hayer, Cari-Ann; Howell, Jessica; Krause, Jacob; Letvin, Alex; Chipps, Steven R.
2012-01-01
Invertebrate energy density (ED) values are traditionally measured using bomb calorimetry. However, many researchers rely on a few published literature sources to obtain ED values because of time and sampling constraints on measuring ED with bomb calorimetry. Literature values often do not account for spatial or temporal variability associated with invertebrate ED. Thus, these values can be unreliable for use in models and other ecological applications. We evaluated the generality of the relationship between invertebrate ED and proportion of dry-to-wet mass (pDM). We then developed and tested a regression model to predict ED from pDM based on a taxonomically, spatially, and temporally diverse sample of invertebrates representing 28 orders in aquatic (freshwater, estuarine, and marine) and terrestrial (temperate and arid) habitats from 4 continents and 2 oceans. Samples included invertebrates collected in all seasons over the last 19 y. Evaluation of these data revealed a significant relationship between ED and pDM (r2 = 0.96), providing an alternative to bomb calorimetry approaches. This model should prove useful for a wide range of ecological studies because it is unaffected by taxonomic, seasonal, or spatial variability.
General Trimmed Estimation : Robust Approach to Nonlinear and Limited Dependent Variable Models
Cizek, P.
2004-01-01
High breakdown-point regression estimators protect against large errors and data contamination. Motivated by two of them (the least trimmed squares and maximum trimmed likelihood estimators), we propose a general trimmed estimator, which unifies and extends many existing robust procedures. We derive
Smagin, V. V.
1997-04-01
We consider a weakly solvable parabolic problem in a separable Hilbert space. We seek approximations to the exact solution by projective and projective-difference methods. In this connection the discretization of the problem with respect to the spatial variables is carried out by the semidiscrete method of Galerkin, and with respect to time by the implicit method of Euler. In this paper we establish a coercive mean-square error estimate for the approximate solutions. We illustrate the effectiveness of these estimates with parabolic equations of second order with Dirichlet or Neumann boundary conditions in projective subspaces of finite element type.
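The time discretization analyzed above, implicit Euler, can be illustrated on the 1D heat equation. The sketch below uses finite differences in place of the paper's Galerkin/finite-element spatial discretization, Dirichlet boundary conditions, and the exact decaying-sine solution as a check; grid sizes are arbitrary choices.

```python
import math

# Hedged sketch: implicit Euler in time for u_t = u_xx on (0,1), u=0 at
# the boundary, second differences in space (not the paper's Galerkin
# subspaces). Each step solves a tridiagonal system (Thomas algorithm).

n = 50                       # interior spatial points
h = 1.0 / (n + 1)
dt, steps = 0.001, 100       # integrate to t = 0.1

# initial condition u(x,0) = sin(pi x); exact solution decays as exp(-pi^2 t)
u = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
r = dt / h**2

for _ in range(steps):
    a = [-r] * n             # sub-diagonal of (I - dt*A_h)
    b = [1 + 2 * r] * n      # diagonal
    c = [-r] * n             # super-diagonal
    d = u[:]
    for i in range(1, n):    # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):   # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]

exact = [math.exp(-math.pi**2 * 0.1) * math.sin(math.pi * (i + 1) * h)
         for i in range(n)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
print(err)  # small: first order in dt, second order in h
```

The mean-square error estimates of the paper bound exactly this kind of discrepancy, uniformly over the time steps.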
How Do Biases in General Circulation Models Affect Projections of Aridity and Drought?
Ficklin, D. L.; Abatzoglou, J. T.; Robeson, S. M.; Dufficy, A. L.
2015-12-01
Unless corrected, biases in General Circulation Models (GCMs) can affect hydroclimatological applications and projections. Compared to a raw GCM ensemble (direct GCM output), bias-corrected GCM inputs correct for systematic errors and can produce high-resolution projections that are useful for impact analyses. By examining the difference between raw and bias-corrected GCMs for the continental United States, this work highlights how GCM biases can affect projections of aridity (defined as precipitation (P)/potential evapotranspiration (PET)) and drought (using the Palmer Drought Severity Index (PDSI)). At the annual time scale for spatial averages over the continental United States, the raw GCM ensemble median has a historical positive precipitation bias (+24%) and negative PET bias (-7%) compared to the bias-corrected output. While both GCM ensembles (raw and bias-corrected) result in drier conditions in the future, the bias-corrected GCMs produce enhanced aridity (number of months with PET>P) in the late 21st century (2070-2099) compared to the historical climate (1950-1979). For the western United States, the bias-corrected GCM ensemble estimates much less humid and sub-humid conditions (based on P/PET categorical values) than the raw GCM ensemble. However, using June, July, and August PDSI, the bias-corrected GCM ensemble projects less acute decreases for the southwest United States compared to the raw GCM ensemble (1 to 2 PDSI units higher) as a result of larger decreases in projected precipitation in the raw GCM ensemble. A number of examples and ecological implications of this work for the western United States will be presented.
Judd, Linda J.; Asquith, William H.; Slade, Raymond M.
1996-01-01
This report presents two techniques to estimate generalized skew coefficients used for log-Pearson Type III peak-streamflow frequency analysis of natural basins in Texas. A natural basin has less than 10 percent impervious cover, and less than 10 percent of its drainage area is controlled by reservoirs. The estimation of generalized skew coefficients is based on annual peak and historical peak streamflow for all U.S. Geological Survey streamflow-gaging stations having at least 20 years of annual peak-streamflow record from natural basins in Texas. Station skew coefficients calculated for each of 255 Texas stations were used to estimate generalized skew coefficients for Texas.
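The building block of such generalized-skew maps is the station skew coefficient computed from base-10 logarithms of annual peaks, as in log-Pearson Type III analysis. The sketch below uses the standard bias-corrected sample skew formula (the Bulletin 17B form); the peak-flow record is invented.

```python
import math

# Hedged sketch: station skew coefficient of log10 annual peak flows,
# the per-station quantity behind generalized skew estimation.
# Peak discharges (e.g. in cfs) are invented.

peaks = [1200, 3400, 890, 5600, 2100, 760, 4300, 1500, 980, 2600,
         3100, 1800, 720, 6400, 2900, 1300, 2200, 4100, 950, 1700]

x = [math.log10(q) for q in peaks]
n = len(x)
mean = sum(x) / n
s = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
# bias-corrected sample skew: n * sum((x-mean)^3) / ((n-1)(n-2) s^3)
skew = n * sum((v - mean) ** 3 for v in x) / ((n - 1) * (n - 2) * s ** 3)
print(round(skew, 3))
```

Generalized skew is then obtained by smoothing or averaging such station values regionally, which is what the report's two techniques do for Texas.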
Truin Gert-Jan
2011-10-01
Full Text Available Abstract Background Considering the changes in dental healthcare, such as the increasing assertiveness of patients, the introduction of new dental professionals, and regulated competition, it becomes more important that general dental practitioners (GDPs) take patients' views into account. The aim of the study was to compare patients' views on organizational aspects of general dental practices with those of GDPs and with GDPs' estimation of patients' views. Methods In a survey study, patients and GDPs provided their views on organizational aspects of a general dental practice. In a second, separate survey, GDPs were invited to estimate patients' views on 22 organizational aspects of a general dental practice. Results For 4 of the 22 aspects, patients and GDPs had the same views, and GDPs estimated patients' views reasonably well: 'Dutch-speaking GDP', 'guarantee on treatment', 'treatment by the same GDP', and 'reminder of routine oral examination'. For 2 aspects ('quality assessment' and 'accessibility for disabled patients'), patients and GDPs had the same standards, although the GDPs underestimated the patients' standards. Patients had higher standards than GDPs for 7 aspects and lower standards than GDPs for 8 aspects. Conclusion On most aspects GDPs and patients have different views, except for socially desirable aspects. Given the increasing assertiveness of patients, it is startling that GDPs estimated only half of the patients' views correctly. The findings of the study can assist GDPs in adapting their organizational services to better meet the preferences of their patients and to improve communication towards patients.
2012-08-09
... Conformity Analysis In accordance with the National Environmental Policy Act of 1969, the Clean Air Act and... prepared this draft General Conformity Determination (GCD) for the Appalachian Gateway Project (Project) to... the Project will achieve conformity in Pennsylvania with the use of Pennsylvania Department...
Extremum of geometric functionals involving general $L_{p}$-projection bodies
Weidong Wang
2016-05-01
Full Text Available Abstract Following the discovery of general $L_{p}$-projection bodies by Ludwig, Haberl and Schuster determined the extremum of the volume of the polars of this family of $L_{p}$-projection bodies. In this paper, the result of Haberl and Schuster is extended to all dual quermassintegrals, and a dual counterpart for the quermassintegrals of general $L_{p}$-projection bodies is also obtained. Moreover, the extrema of the $L_{q}$-dual affine surface areas of polars of general $L_{p}$-projection bodies are determined.
Actionable Data Projects: Social Science and Service-Learning in General Education Courses
Maloyed, Christie L.
2016-01-01
The use of service-learning pedagogies in general education courses is often limited to increasing volunteerism or civic literacy with problem-based or research-based projects reserved for upper level courses. This article examines the implementation of an "actionable data" service-learning project in an introductory, general studies…
Parallel Enhancements of the General Mission Analysis Tool Project
National Aeronautics and Space Administration — The General Mission Analysis Tool (GMAT) is a state of the art spacecraft mission design tool under active development at NASA's Goddard Space Flight Center (GSFC)....
Generalized Geophysical Retrieval and Analysis Tool for Planetary Atmospheres Project
National Aeronautics and Space Administration — CPI proposes to develop an innovative, generalized retrieval algorithm and analysis tool (GRANT) that will facilitate analysis of remote sensing data from both...
Zengmei, L.; Guanghua, Q.; Zishen, C.
2015-05-01
The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies are also changed to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared to conditions without the project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a copula function, the joint distribution of rainstorms and water levels is established, so as to obtain their joint probability density function. Secondly, according to this two-dimensional joint probability density, the domain of integration is determined and divided into small domains, so as to calculate, for each small domain, its probability and the difference between the average waterlogging loss with and without the project (called the regional benefit of the waterlogging control project) under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the benefits over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation.
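The expectation being computed, the loss difference with and without the project, averaged over the joint distribution of rainstorms and receiving water levels, can be sketched by Monte Carlo. The sketch below uses a Gaussian copula with uniform marginals and an invented piecewise-linear loss function; the paper's fitted copula, real marginals, and domain-partition integration are not reproduced.

```python
import random, math

# Hedged Monte Carlo sketch: expected benefit = E[loss without project]
# - E[loss with project], with rain/level dependence from a Gaussian
# copula. Correlation, marginals, and the loss function are invented.

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def loss(rain, level, protected):
    base = max(0.0, rain + 0.5 * level - 0.6)   # toy damage driver
    return base * (0.3 if protected else 1.0)   # project cuts losses 70%

random.seed(1)
rho, n = 0.6, 50_000
benefit = 0.0
for _ in range(n):
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho**2) * random.gauss(0, 1)
    rain, level = norm_cdf(z1), norm_cdf(z2)    # uniform marginals on [0,1]
    benefit += loss(rain, level, False) - loss(rain, level, True)

avg = benefit / n
print(round(avg, 3))  # expected benefit per event, arbitrary units
```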
A case-based reasoning approach for estimating the costs of pump station projects
Mohamed M. Marzouk
2011-10-01
Full Text Available The effective estimation of costs is crucial to the success of construction projects. Cost estimates are used to evaluate, approve and/or fund projects. Organizations use some form of classification system to identify the various types of estimates that may be prepared during the lifecycle of a project. This research presents a parametric cost model for pump station projects. Fourteen factors have been identified as important influences on the cost of pump station projects. A data set consisting of forty-four pump station projects (fifteen water and twenty-nine wastewater) was collected to build a Case-Based Reasoning (CBR) library and to test its performance. The results obtained from the CBR tool are processed and adopted to improve the accuracy of the results. A numerical example is presented to demonstrate the effectiveness of the tool.
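The retrieval step at the heart of CBR cost estimation can be sketched in a few lines: score past cases by weighted similarity over normalized attributes and average the costs of the nearest ones. Everything below is invented for illustration, three attributes instead of the paper's fourteen factors, a five-case library instead of its forty-four projects, and arbitrary weights.

```python
# Hedged sketch of CBR retrieval for pump station cost estimation.
# Attributes, weights, and the case library are all invented.

library = [  # (capacity m3/s, head m, is_wastewater, cost in $m)
    (2.0, 15, 0, 3.1), (5.0, 20, 1, 7.8), (3.5, 10, 0, 4.0),
    (6.0, 25, 1, 9.5), (1.0, 12, 0, 2.2),
]
weights = (0.5, 0.3, 0.2)
ranges = (5.0, 15.0, 1.0)        # attribute ranges used for normalization

def similarity(a, b):
    return 1 - sum(w * abs(x - y) / r
                   for w, r, x, y in zip(weights, ranges, a, b))

def estimate(query, k=2):
    ranked = sorted(library, key=lambda c: similarity(query, c[:3]),
                    reverse=True)
    return sum(c[3] for c in ranked[:k]) / k    # mean cost of k nearest cases

print(estimate((4.0, 18, 1)))   # estimate from the two most similar cases
```

A real system would then adapt the retrieved costs (e.g. for scale and location) rather than average them directly, which is the post-processing step the abstract alludes to.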
赵旭; 薛留根; 李婧兰; 程维虎
2012-01-01
The generalized Pareto distribution (GPD) is one of the most important distributions in statistical analysis. This paper is based on sample quantiles of the GPD. First, a shape parameter estimator with high estimation precision is derived; then, approximate generalized least squares expressions for the location and scale parameters of the GPD are obtained. The proposed method is simple, imposes no restriction on the shape parameter, and shows high estimation accuracy in Monte Carlo simulation tests.
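The second step lends itself to a short sketch: once the shape parameter is fixed, the GPD quantile function Q(p) = mu + sigma * ((1-p)^(-xi) - 1) / xi is linear in the location and scale, so both follow from ordinary least squares on sample quantiles. The paper's shape estimator and its generalized (rather than ordinary) least squares weighting are not reproduced; here the true shape is simply plugged in and all constants are invented.

```python
import random

# Hedged sketch: location/scale of a GPD by least squares on sample
# quantiles, with the shape parameter xi treated as known.

random.seed(3)
xi, mu, sigma = 0.3, 10.0, 2.0
# GPD sampling by inverse transform: Q(U) = mu + sigma*((1-U)^-xi - 1)/xi
data = sorted(mu + sigma * ((1 - random.random()) ** -xi - 1) / xi
              for _ in range(5000))

ps = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
q = [data[int(p * len(data))] for p in ps]       # sample quantiles
g = [((1 - p) ** -xi - 1) / xi for p in ps]      # known-shape regressor

n = len(ps)
mg, mq = sum(g) / n, sum(q) / n
sigma_hat = (sum((a - mg) * (b - mq) for a, b in zip(g, q))
             / sum((a - mg) ** 2 for a in g))
mu_hat = mq - sigma_hat * mg
print(round(mu_hat, 2), round(sigma_hat, 2))     # near 10 and 2
```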
J. A. Reshi
2014-12-01
Full Text Available In this paper, a new class of Size-biased Generalized Gamma (SBGG) distribution is defined. The SBGG distribution, a particular case of the weighted Generalized Gamma distribution with the variate values taken as weights, has been defined. Important statistical properties including the hazard function, reverse hazard function, mode, moment generating function, characteristic function, Shannon's entropy, generalized entropy and Fisher's information matrix of the new model have been derived and studied. We also study SBGG entropy estimation and the Akaike and Bayesian information criteria. A likelihood ratio test for size-biasedness is conducted. The estimation of parameters is obtained by employing classical methods of estimation, namely the method of moments and maximum likelihood estimation.
Software Cost Estimation Review
Ongere, Alphonce
2013-01-01
Software cost estimation is the process of predicting the effort, the time and the cost required to complete a software project successfully. It involves size measurement of the software project to be produced, estimating and allocating the effort, drawing the project schedules, and finally, estimating overall cost of the project. Accurate estimation of software project cost is an important factor for business and the welfare of software organization in general. If cost and effort estimat...
Simultaneous Optimality of LSE and ANOVA Estimate in General Mixed Models
Mi Xia WU; Song Gui WANG; Kai Fun YU
2008-01-01
Problems of simultaneous optimal estimation and optimal tests in general mixed models are considered. A necessary and sufficient condition is presented for the least squares estimate of the fixed effects and the analysis of variance (Henderson III's) estimate of the variance components to be uniformly minimum variance unbiased estimates simultaneously. This result can be applied to the problems of finding uniformly optimal unbiased tests and uniformly most accurate unbiased confidence intervals on parameters of interest, and of establishing equivalences among several common estimates of variance components.
Outliers,inliers and the generalized least trimmed squares estimator in system identification
Erwei BAI
2003-01-01
The least trimmed squares (LTS) estimator is a well known robust estimator in terms of protecting the estimate from outliers. Its high computational complexity is, however, a problem in practice. We show that the LTS estimate can be obtained by a simple algorithm with complexity O(N ln N) for large N, where N is the number of measurements. We also show that though the LTS is robust with respect to outliers, it is sensitive to inliers. The concept of inliers is introduced. Moreover, the generalized least trimmed squares (GLTS) estimator, together with its solution, is presented, which reduces the effect of both outliers and inliers.
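The trimming idea behind LTS can be sketched numerically. The following Python toy (an illustrative concentration-step scheme with random starts, not the fast O(N ln N) algorithm of the paper; all data and tolerances are invented) fits a line while discarding the largest residuals:

```python
import numpy as np

def lts_fit(x, y, h=None, n_starts=50, n_csteps=20, seed=0):
    """Least trimmed squares fit of y ~ a + b*x via random elemental
    starts plus concentration (C-) steps; illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = len(x)
    h = h or (n // 2 + 1)                 # size of the trimmed subset
    X = np.column_stack([np.ones(n), x])
    best_coef, best_cost = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=2, replace=False)           # elemental start
        coef = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(n_csteps):                            # concentration steps
            r2 = (y - X @ coef) ** 2
            keep = np.argsort(r2)[:h]                        # h smallest residuals
            coef = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        cost = np.sort((y - X @ coef) ** 2)[:h].sum()
        if cost < best_cost:
            best_coef, best_cost = coef, cost
    return best_coef

# toy data: true line y = 1 + 2x, with 20% gross outliers
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 1 + 2 * x + rng.normal(0, 0.3, 50)
y[:10] += 40                                                 # outliers
a, b = lts_fit(x, y)
```

Because the cost only counts the h smallest squared residuals, the 10 shifted points cannot drag the fit the way they would in ordinary least squares.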
Estimating Performance Time for Air Force Military Construction Projects
2005-03-01
[Figure and bibliography residue from the original report: a plot of project cost versus duration (days) showing a regression line, data points, and upper/lower quartiles; cited works include Arditi et al. (1985), NEDO (1988), Mansfield et al. (1994), Naoum (1991), Assaf et al. (1995), Chan and Kumaraswamy (1997), and Kaming et al. (1997).]
HFCs contribution to the greenhouse effect. Present and projected estimations
Libre, J.M.; Elf-Atochem, S.A. [Central Research & Development, Paris (France)
1997-12-31
This paper reviews data that can be used to calculate the hydrofluorocarbon (HFC) contribution to the greenhouse effect and compare it to other trace gas contributions. Projections are made for 2010 and 2100 on the basis of available emission scenarios. Industrial judgement on the likelihood of those scenarios is also developed. Calculations can be made in two different ways: from Global Warming Potential weighted emissions of species, or by direct calculation of radiative forcing based on measured and projected atmospheric concentrations of compounds. Results show that HFCs corresponding to commercial uses have a negligible contribution to the greenhouse effect in comparison with other trace gases. The projected contributions are also very small even if very high emission scenarios are maintained for decades. In 2010 this contribution remains below 1%. Longer term emission projections are difficult. However, based on the IPCC scenario IS92a, in spite of the huge emissions projected for the year 2100, the HFC contribution remains below 3%. In fact, many factors indicate that the real HFC contribution to the greenhouse effect will be even smaller than presented here. Low-emission systems and small charges have improved drastically in the recent past and will likely improve sharply in the future. HFC technology implementation is likely to grow in the future and reach a maximum before the middle of the next century; the market will stabilise, driven by recycling, the closing of systems and competitive technologies. This hypothesis is supported by previous analysis of the demand for HFC-type applications, which can be represented by "S"-type curves, and by recent analysis indicating that the level of substitution of old products by HFCs is growing slowly. On the basis of those data and best industrial judgement, the contribution of HFCs to the greenhouse effect is highly likely to remain below 1% during the next century. 11 refs., 2 figs., 5 tabs.
Feraru Galina Sergeevna
2014-11-01
Full Text Available The article addresses the features of project management that give projects a competitive advantage; it identifies the factors and criteria of project success and the main reasons for project failure.
Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model
Roberts, James S.; Thompson, Vanessa M.
2011-01-01
A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…
Zhang Ruo-Xun; Yang Shi-Ping
2008-01-01
This paper presents a general method for generalized projective synchronization and parameter identification between two different chaotic systems with unknown parameters. The approach is based on Lyapunov stability theory, and employs a combination of feedback control and adaptive control. With this method one can achieve generalized projective synchronization and realize parameter identification between almost all chaotic (hyperchaotic) systems with unknown parameters. Numerical simulation results are presented to demonstrate the effectiveness of the method.
Higher Order Mean Squared Error of Generalized Method of Moments Estimators for Nonlinear Models
Yi Hu
2014-01-01
Full Text Available The generalized method of moments (GMM) has been widely applied for the estimation of nonlinear models in economics and finance. Although GMM has good asymptotic properties under fairly moderate regularity conditions, its finite sample performance is not very good. In order to improve the finite sample performance of GMM estimators, this paper studies the higher-order mean squared error of two-step efficient GMM estimators for nonlinear models. Specifically, we consider a general nonlinear regression model with endogeneity and derive the higher-order asymptotic mean squared error of the two-step efficient GMM estimator for this model using iterative techniques and higher-order asymptotic theory. Our theoretical results allow the number of moments to grow with sample size, and are suitable for general moment restriction models, which contain conditional moment restriction models as special cases. The higher-order mean squared error can be used to compare different estimators and to construct selection criteria for improving an estimator's finite sample performance.
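The two-step efficient GMM procedure the abstract builds on can be illustrated with a minimal sketch (a simulated linear model with endogeneity and two instruments; this does not reproduce the paper's higher-order MSE analysis, and all numerical values are invented):

```python
import numpy as np

# Simulated linear model with endogeneity: y = x*beta + u,
# x correlated with u, two valid instruments z.
rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))                  # instruments
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)             # u correlated with v -> endogeneity
x = z @ np.array([1.0, 1.0]) + v
beta_true = 1.5
y = beta_true * x + u

X, Z = x[:, None], z

def gmm(W):
    # beta minimizing gbar' W gbar, where gbar = Z'(y - X b)/n
    A = X.T @ Z @ W @ Z.T @ X
    b = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, b)

beta1 = gmm(np.eye(2))                       # step 1: identity weight matrix
g = Z * (y - X @ beta1)[:, None]             # moment contributions at beta1
W2 = np.linalg.inv(g.T @ g / n)              # estimated efficient weight
beta2 = gmm(W2)                              # step 2: efficient GMM estimate
```

The second step reweights the (over-identified) moment conditions by the inverse of their estimated covariance, which is what makes the two-step estimator asymptotically efficient.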
Estimation of quantum states by weak and projective measurements
Das, Debmalya; Arvind
2014-06-01
We explore the possibility of using "weak" measurements to carry out quantum state tomography via numerical simulations. Given a fixed number of copies of identically prepared states of a qubit, we perform state tomography using weak as well as projective measurements. Due to the collapse of the state after measurement, we cannot reuse the state after a projective measurement. If the coupling strength between the quantum system and the measurement device is made weaker, the disturbance caused to the state can be lowered. This then allows us to reuse the same member of the ensemble for further measurements and thus extract more information from the system. However, this happens at the cost of getting imprecise information from the first measurement. We implement this scheme for a single qubit and show that under certain circumstances, it can outperform the projective measurement-based tomography scheme. This opens up the possibility of new ways of extracting information from quantum ensembles. We study the efficacy of this scheme for different coupling strengths and different ensemble sizes.
A General, Mass-Preserving Navier-Stokes Projection Method
Salac, David
2015-01-01
The conservation of mass is a common issue with multiphase fluid simulations. In this work a novel projection method is presented which conserves mass both locally and globally. The fluid pressure is augmented with a time-varying component which accounts for any global mass change. The resulting system of equations is solved using an efficient Schur-complement method. Using the proposed method, four numerical examples are performed: the evolution of a static bubble, the rise of a bubble, the breakup of a thin fluid thread, and the extension of a droplet in shear flow. The method is capable of conserving the mass even in situations with morphological changes such as droplet breakup.
A generalization of Marstrand's theorem for projections of cartesian products
López, Jorge Erick
2011-01-01
We prove the following variant of Marstrand's theorem about projections of cartesian products of sets: Let $K_1,...,K_n$ be Borel subsets of $\\mathbb R^{m_1},... ,\\mathbb R^{m_n}$ respectively, and $\\pi:\\mathbb R^{m_1}\\times...\\times\\mathbb R^{m_n}\\to\\mathbb R^k$ be a surjective linear map. We set $$\\mathfrak{m}:=\\min\\{\\sum_{i\\in I}\\dim_H(K_i) + \\dim\\pi(\\bigoplus_{i\\in I^c}\\mathbb R^{m_i}), I\\subset\\{1,...,n\\}, I\
General Projective Synchronization and Fractional Order Chaotic Masking Scheme
Shi-Quan Shao
2008-01-01
In this paper, a fractional order chaotic masking scheme used for secure communication is introduced. Based on the general projective synchronization of two coupled fractional Chen systems, a popular masking scheme is designed. A numerical example is given to demonstrate the effectiveness of the proposed method.
Ortner, Tuulia M.; Vormittag, Isabella
2011-01-01
Effects of the test administrator's gender on test takers' self-estimated verbal general knowledge and de facto verbal general knowledge were investigated. Based on three theories previously applied in research dealing with the effects of test administrator's ethnicity, male and female test takers were expected to show higher scores under female…
Exponential estimation of generalized state-space time-delay systems
Lien, C-H; Yu, K-W [Department of Marine Engineering, National Kaohsiung Marine University, Taiwan 811 (China); Lin, J-S; Hung, M-L [Department of Electrical Engineering, Far East University, Tainan, Taiwan 744 (China)], E-mail: chlien@mail.nkmu.edu.tw
2008-02-15
In this paper, global exponential stability for a class of generalized state-space time-delay systems is considered. Delay-dependent criteria are proposed to guarantee the exponential stability and estimate the convergence rate for the generalized state-space systems with two cases of uncertainties. Finally, some numerical examples are illustrated to show the usefulness of the theory.
Yeong-Jeu Sun
2014-01-01
Full Text Available The generalized Liu system is firstly introduced and the state observation problem of such a system is explored. A simple state estimator for the generalized Liu system is developed to guarantee the global exponential stability of the resulting error system. Applications of proposed state estimator strategy to chaotic secure communication, circuit implementation, and numerical simulations are provided to show the effectiveness and feasibility of the obtained results. Besides, the guaranteed exponential convergence rate of the proposed state estimator and that of the proposed chaotic secure communication can be precisely calculated.
Zuliang Lu
2014-01-01
Full Text Available The aim of this work is to investigate the discretization of general linear hyperbolic convex optimal control problems using mixed finite element methods. The state and costate are approximated by order-k (k≥0) Raviart-Thomas mixed finite elements and the control is approximated by piecewise polynomials of order k. By applying the elliptic projection operators and Gronwall's lemma, we derive a priori error estimates of optimal order for both the coupled state and the control approximation.
ESTIMATING RISK TO CALIFORNIA ENERGY INFRASTRUCTURE FROM PROJECTED CLIMATE CHANGE
Sathaye, Jayant; Dale, Larry; Larsen, Peter; Fitts, Gary; Koy, Kevin; Lewis, Sarah; Lucena, Andre
2011-06-22
This report outlines the results of a study of the impact of climate change on the energy infrastructure of California and the San Francisco Bay region, including impacts on power plant generation; transmission line and substation capacity during heat spells; wildfires near transmission lines; sea level encroachment upon power plants, substations, and natural gas facilities; and peak electrical demand. Some end-of-century impacts were projected: Expected warming will decrease gas-fired generator efficiency. The maximum statewide coincident loss is projected at 10.3 gigawatts (with current power plant infrastructure and population), an increase of 6.2 percent over current temperature-induced losses. By the end of the century, electricity demand for almost all summer days is expected to exceed the current ninetieth percentile per-capita peak load. As much as 21 percent growth is expected in ninetieth percentile peak demand (per capita, exclusive of population growth). When generator losses are included in the demand, the ninetieth percentile peaks may increase up to 25 percent. As the climate warms, California's peak supply capacity will need to grow faster than the population. Substation capacity is projected to decrease by an average of 2.7 percent. A 5 C (9 F) air temperature increase (the average increase predicted for hot days in August) will diminish the capacity of a fully loaded transmission line by an average of 7.5 percent. The potential exposure of transmission lines to wildfire is expected to increase with time. We have identified some lines whose probability of exposure to fire is expected to increase by as much as 40 percent. Up to 25 coastal power plants and 86 substations are at risk of flooding (or partial flooding) due to sea level rise.
A projection and density estimation method for knowledge discovery.
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated with two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.
Estimating the ultimate bound and positively invariant set for a generalized Lorenz system
SHU Yong-lu; ZHANG Yong-hao
2008-01-01
A generalized Lyapunov function was employed to investigate the ultimate bound and positively invariant set of a generalized Lorenz system. We derived an ellipsoidal estimate of the ultimate bound and positively invariant set for the generalized Lorenz system, for all positive values of the system parameters a, b, and c. Our results extend the related result of Li et al. [Li DM, Lu JA, Wu XQ, et al., Estimating the ultimate bound and positively invariant set for the Lorenz system and a unified chaotic system, Journal of Mathematical Analysis and Applications, 2006, 323(2): 844-853].
YIN; Changming; ZHAO; Lincheng; WEI; Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of Σ_{i=1}^{n} Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates of the regression parameter vector are asymptotically normal and strongly consistent.
A Neural Network Model for Construction Projects Site Overhead Cost Estimating in Egypt
ElSawy, Ismaail; Razek, Mohammed Abdel
2011-01-01
Estimating the overhead costs of building construction projects is an important task in the management of these projects. The quality of construction management depends heavily on accurate cost estimation. Construction cost prediction is a very difficult and sophisticated task, especially when using manual calculation methods. This paper uses an Artificial Neural Network (ANN) approach to develop a parametric cost-estimating model for site overhead cost in Egypt. Fifty-two actual real-life cases of building projects constructed in Egypt during the seven-year period 2002-2009 were used as training material. The neural network architecture is presented for the estimation of site overhead costs as a percentage of the total project price.
张素英; 邓子辰
2004-01-01
For the constrained generalized Hamiltonian system with dissipation, a Lie group integration method that preserves the inherent structure of the dynamic system and the constraint invariant was presented by introducing a Lagrange multiplier and using a projection technique. Firstly, the constrained generalized Hamiltonian system with dissipation was converted to an unconstrained generalized Hamiltonian system; then the Lie group integration algorithm for the unconstrained system was discussed; finally the projection method for the generalized Hamiltonian system with constraints was given. It is found that the constraint invariant is ensured by the projection technique, and that after introducing the Lagrange multiplier the Lie group character of the dynamic system is not destroyed while projecting onto the constraint manifold. The discussion is restricted to the case of holonomic constraints. A numerical example shows the effectiveness of the method.
Casey, Daniel
1984-10-01
This assessment addresses the impacts on wildlife populations and wildlife habitats due to the Hungry Horse Dam project on the South Fork of the Flathead River, and previous mitigation of these losses. In order to develop and focus mitigation efforts, it was first necessary to estimate the wildlife and wildlife habitat losses attributable to the construction and operation of the project. The purpose of this report was to document the best available information concerning the degree of impacts on target wildlife species. Indirect benefits to wildlife species not listed will be identified during the development of alternative mitigation measures. Wildlife species incurring positive impacts attributable to the project were also identified.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Integrating Portfolio Management and Simulation Concepts in the ERP Project Estimation Practice
Daneva, M.; Paech, B.; Rolland, C.
2008-01-01
This paper presents a two-site case study on requirements-based effort estimation practices in enterprise resource planning projects. Specifically, the case study investigated the question of how to handle qualitative data and highly volatile values of project context characteristics. We counterpart
Managing Uncertainty in ERP Project Estimation Practice: An Industrial Case Study
Daneva, Maia; Jedlitschka, A.; Salo, O.
2008-01-01
Uncertainty is a crucial element in managing projects. This paper’s aim is to shed some light into the issue of uncertain context factors when estimating the effort needed for implementing enterprise resource planning (ERP) projects. We outline a solution approach to this issue. It complementarily
General Economic and Demographic Background and Projections for Indiana Library Services.
Foust, James D.; Tower, Carl B.
Before future library needs can be estimated, economic and demographic variables that influence the demand for library services must be projected and estimating equations relating library needs to economic and demographic parameters developed. This study considers the size, location and age-sex characteristics of Indiana's current population and…
NONE
1995-09-01
The Solid Waste Retrieval Facility--Phase 1 (Project W113) will provide the infrastructure and the facility required to retrieve from Trench 04, Burial ground 4C, contact handled (CH) drums and boxes at a rate that supports all retrieved TRU waste batching, treatment, storage, and disposal plans. This includes (1) operations related equipment and facilities, viz., a weather enclosure for the trench, retrieval equipment, weighing, venting, obtaining gas samples, overpacking, NDE, NDA, shipment of waste and (2) operations support related facilities, viz., a general office building, a retrieval staff change facility, and infrastructure upgrades such as supply and routing of water, sewer, electrical power, fire protection, roads, and telecommunication. Title I design for the operations related equipment and facilities was performed by Raytheon/BNFL, and that for the operations support related facilities including infrastructure upgrade was performed by KEH. These two scopes were combined into an integrated W113 Title II scope that was performed by Raytheon/BNFL. This volume represents the total estimated costs for the W113 facility. Operating Contractor Management costs have been incorporated as received from WHC. The W113 Facility TEC is $19.7 million. This includes an overall project contingency of 14.4% and escalation of 17.4%. A January 2001 construction contract procurement start date is assumed.
Cheng, Guang
2014-02-01
We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.
Wilkinson, D; Symon, B
2000-02-01
From Census data, to document the distribution of general practitioners in Australia and to estimate the number of general practitioners needed to achieve an equitable distribution accounting for community health need. Data on location of general practitioners, population size and crude mortality by statistical division (SD) were obtained from the Australian Bureau of Statistics. The number of patients per general practitioner by SD was calculated and plotted. Using crude mortality to estimate community health need, a ratio of the number of general practitioners per person: mortality was calculated for all Australia and for each SD (the Robin Hood Index). From this, the number of general practitioners needed to achieve equity was calculated. In all, 26,290 general practitioners were identified in 57 SDs. The mean number of people per general practitioner is 707, ranging from 551 to 1887. Capital city SDs have most favourable ratios. The Robin Hood Index for Australia is 1, and ranges from 0.32 (relatively under-served) to 2.46 (relatively over-served). Twelve SDs (21%) including all capital cities and 65% of all Australians, have a Robin Hood Index > 1. To achieve equity per capita 2489 more general practitioners (10% of the current workforce) are needed. To achieve equity by the Robin Hood Index 3351 (13% of the current workforce) are needed. The distribution of general practitioners in Australia is skewed. Nonmetropolitan areas are relatively underserved. Census data and the Robin Hood Index could provide a simple means of identifying areas of need in Australia.
GENERAL ALGORITHMIC SCHEMA OF THE PROCESS OF THE CHILL AUXILIARIES PROJECTION
A. N. Chichko
2006-01-01
Full Text Available A general algorithmic diagram systematizing the existing approaches to the design process is offered, and the foundation of a computer system for the design of chill mold tooling is laid.
Izergin-Korepin Analysis on the Projected Wavefunctions of the Generalized Free-Fermion Model
Kohei Motegi
2017-01-01
Full Text Available We apply the Izergin-Korepin analysis to the study of the projected wavefunctions of the generalized free-fermion model. We introduce a generalization of the L-operator of the six-vertex model by Bump-Brubaker-Friedberg and Bump-McNamara-Nakasuji. We carry out the Izergin-Korepin analysis to characterize the projected wavefunctions and show that they can be expressed as a product of factors and certain symmetric functions which generalize the factorial Schur functions. This result can be seen as a generalization of the Tokuyama formula for the factorial Schur functions.
Generalization of Cramer's rule and its application to the projection of Hartree-Fock wave function
Hage-Hassan, Mehdi
2009-01-01
We generalize Cramer's rule of linear algebra. We apply it to calculate the spectra of nuclei by applying the Hill-Wheeler projection operator to the Hartree-Fock wave function, and to derive the Löwdin formula and the Thouless theorem. We derive by an elementary method the infinitesimal (Löwdin) projection operators and their integral representation, which are useful for the projection of Slater determinants.
A Conjugate-Cyclic-Autocorrelation Projection-Based Algorithm for Signal Parameter Estimation
2006-01-01
Full Text Available A new algorithm to estimate the amplitude, delay, phase, and frequency offset of a received signal is presented. The frequency-offset estimation is performed by maximizing, with respect to the conjugate cycle frequency, the projection of the measured conjugate-cyclic-autocorrelation function of the received signal onto the true conjugate second-order cyclic autocorrelation. It is shown that this estimator is mean-square consistent and, for moderate values of the data-record length, outperforms a previously proposed frequency-offset estimator; it also leads to mean-square consistent estimators of the remaining parameters.
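The core of the frequency-offset idea can be sketched numerically: for a complex exponential, the conjugate second-order term y² concentrates its energy at the conjugate cycle frequency α = 2f₀, so a peak search over α recovers the offset. This toy uses an FFT grid search rather than the paper's projection-based estimator; all parameter values are invented:

```python
import numpy as np

# noisy complex exponential with unknown frequency offset f0
rng = np.random.default_rng(0)
N, f0, phi = 1024, 0.11, 0.7
n = np.arange(N)
noise = 0.2 * (rng.normal(size=N) + 1j * rng.normal(size=N))
y = np.exp(1j * (2 * np.pi * f0 * n + phi)) + noise

# conjugate second-order term: y^2 has a spectral line at alpha = 2*f0
z = y * y
S = np.abs(np.fft.fft(z, 8192))       # zero-padded search over alpha in [0, 1)
alpha_hat = np.argmax(S) / 8192       # estimated conjugate cycle frequency
f_hat = alpha_hat / 2                 # estimated frequency offset
```

The resolution of this grid search is set by the zero-padding length; the paper's estimator instead maximizes a projection onto the true conjugate cyclic autocorrelation.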
Messaoud Bounkhel
2015-01-01
Full Text Available The present paper is devoted to the study of the generalized projection πK:X∗→K, where X is a uniformly convex and uniformly smooth Banach space and K is a nonempty closed (not necessarily convex) set in X. Our main result is the density of the points x∗∈X∗ having a unique generalized projection onto nonempty closed sets in X. Some minimisation principles are also established. An application to variational problems with nonconvex sets is presented.
Estimation of a general time-dependent Hamiltonian for a single qubit.
de Clercq, L E; Oswald, R; Flühmann, C; Keitch, B; Kienzler, D; Lo, H-Y; Marinelli, M; Nadlinger, D; Negnevitsky, V; Home, J P
2016-04-14
The Hamiltonian of a closed quantum system governs its complete time evolution. While Hamiltonians with time-variation in a single basis can be recovered using a variety of methods, for more general Hamiltonians the presence of non-commuting terms complicates the reconstruction. Here using a single trapped ion, we propose and experimentally demonstrate a method for estimating a time-dependent Hamiltonian of a single qubit. We measure the time evolution of the qubit in a fixed basis as a function of a time-independent offset term added to the Hamiltonian. The initially unknown Hamiltonian arises from transporting an ion through a static laser beam. Hamiltonian estimation allows us to estimate the spatial beam intensity profile and the ion velocity as a function of time. The estimation technique is general enough that it can be applied to other quantum systems, aiding the pursuit of high-operational fidelities in quantum control.
A generalization of Schur-Weyl duality with applications in quantum estimation
Marvian, Iman
2011-01-01
Schur-Weyl duality is a powerful tool in representation theory with many applications to quantum information theory. We provide a generalization of this duality and demonstrate some of its applications. In particular, we use it to develop a general framework for the study of a family of quantum estimation problems wherein one is given n copies of an unknown quantum state according to some prior and the goal is to estimate certain parameters of the given state. Specifically, we are interested in whether collective measurements are useful and, if so, in finding an upper bound on the amount of entanglement required to achieve the optimal estimation. In the case of pure states, we show that commutativity of the set of observables that define the estimation problem implies the sufficiency of unentangled measurements.
Koch, Stefan; Mitlöhner, Johann
2010-08-01
ERP implementation projects have received enormous attention in recent years, due to their importance for organisations as well as the costs and risks involved. The estimation of the effort and costs associated with new projects is therefore an important topic. Unfortunately, there is still a lack of models that can cope with the special characteristics of these projects. As the main focus lies in adapting and customising a complex system, and even changing the organisation, traditional models like COCOMO cannot easily be applied. In this article, we apply effort estimation based on social choice in this context. Social choice deals with aggregating the preferences of a number of voters into a collective preference; we apply this idea by substituting project attributes for the voters. Instead of supplying numeric values for various project attributes, a new project only needs to be placed into a ranking per attribute, requiring only ordinal values, and the resulting aggregate ranking can be used to derive an estimate. We describe the estimation process using a data set of 39 projects and compare the results to other approaches proposed in the literature.
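The voting analogy can be made concrete with a Borda-style aggregation: each attribute "votes" by ranking the projects, positions are summed into an aggregate ranking, and the new project's effort is read off from its neighbours in that ranking. A minimal sketch with hypothetical projects and effort values (the article's actual procedure and data set differ):

```python
def borda_aggregate(rankings):
    """Sum each project's position across all attribute rankings (Borda count)
    and return the projects ordered by aggregate score, lowest effort first."""
    scores = {}
    for ranking in rankings:
        for pos, pid in enumerate(ranking):
            scores[pid] = scores.get(pid, 0) + pos
    return sorted(scores, key=lambda pid: scores[pid])

# Each list ranks the projects on one ordinal attribute, smallest first.
rankings = [
    ["A", "B", "new", "C"],   # e.g. adaptation size
    ["A", "new", "B", "C"],   # e.g. customization depth
    ["B", "A", "new", "C"],   # e.g. organizational change
]
order = borda_aggregate(rankings)           # -> ['A', 'B', 'new', 'C']
effort = {"A": 100, "B": 180, "C": 420}     # known effort of past projects
i = order.index("new")
estimate = (effort[order[i - 1]] + effort[order[i + 1]]) / 2  # neighbour average
```

Interpolating between the aggregate-ranking neighbours gives 300 here; any monotone read-off from the aggregate ranking would fit the same scheme.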
Are the Experts Really Experts? A Cognitive Ergonomics Investigation for Project Estimations
Budi Hartono
2012-01-01
Full Text Available Uniqueness is a major characteristic of any project system. Hence it is virtually infeasible for project analysts to utilize data from past projects as references for subsequent project planning and scheduling. Most project analysts thus depend on intuition, gut feeling, and experience to develop quantitative models for project scheduling and analysis, which, according to past studies, is prone to systematic errors. This study investigates the performance of both 'experts' and 'non-experts' when utilizing their cognitive capability to estimate project durations in group/non-group settings. A cognitive ergonomics perspective, which views the human capability to make judgments as rationally bounded, is utilized in this investigation. An empirical approach is used to acquire data from 'projects' in which 'experts' and 'non-experts' are required to provide prior estimates of project durations. The estimates are then gauged against the actual durations. Results show that some systematic cognitive judgmental errors (biases) are observable for both experts and non-experts. The identified biases include an anchoring bias as well as an accuracy bias.
Boland, J. S., III
1975-01-01
A general simulation program (GSP) is presented involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.
Non-parametric estimation of the availability in a general repairable system
Gamiz, M.L. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)], E-mail: mgamiz@ugr.es; Roman, Y. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)
2008-08-15
This work deals with repairable systems with unknown failure and repair time distributions. We focus on the estimation of the instantaneous availability, that is, the probability that the system is functioning at a given time, which we consider the most significant measure for evaluating the effectiveness of a repairable system. The estimation of the availability function is not, in general, an easy task, i.e., analytical techniques are difficult to apply. We propose a smooth estimate of the availability based on kernel estimators of the cumulative distribution functions (CDFs) of the failure and repair times, for which the bandwidth parameters are obtained by bootstrap procedures. The consistency properties of the availability estimator are established using techniques based on the Laplace transform.
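The building block of this estimator, a Gaussian-kernel estimate of a CDF, is easy to sketch. The full method combines the failure- and repair-time CDF estimates through renewal-type equations and chooses the bandwidth by bootstrap; here only the kernel CDF step is shown, with an arbitrary bandwidth and made-up failure times:

```python
import math

def kernel_cdf(sample, h):
    """Smooth CDF estimate: an average of Gaussian kernels centered at the
    observations, F(t) = (1/n) * sum_i Phi((t - x_i) / h)."""
    def F(t):
        return sum(0.5 * (1.0 + math.erf((t - x) / (h * math.sqrt(2.0))))
                   for x in sample) / len(sample)
    return F

failure_times = [1.2, 0.7, 2.5, 1.9, 3.1]   # hypothetical observed failure times
F = kernel_cdf(failure_times, h=0.4)
```

Unlike the empirical CDF, F is smooth and has a density, which is what makes the downstream Laplace-transform arguments tractable; it is monotone with F(t) → 0 and F(t) → 1 in the tails.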
Remarks on low weight codewords of generalized affine and projective Reed-Muller codes
Ballet, Stéphane
2012-01-01
We make a brief survey on low weight codewords of generalized Reed-Muller codes and projective generalized Reed-Muller codes. In the affine case we give some information about the words that reach the second distance in cases where these words are not all characterized. Moreover we give the second weight of the projective Reed-Muller codes which was unknown until now. We relate the words of the projective Reed-Muller code reaching the second distance to the words of the affine Reed-Muller code reaching the second distance.
Jia Zhen; Lu Jun-An; Deng Guang-Ming; Zhang Qun-Jiao
2007-01-01
In this paper the generalized projective synchronization of a class of chaotic (or hyperchaotic) systems, in which certain parameters can be separated from uncertain parameters, is investigated. Based on the adaptive technique, globally generalized projective synchronization of two identical chaotic (hyperchaotic) systems is achieved by designing a novel nonlinear controller. Furthermore, parameter identification is realized simultaneously. A sufficient condition for globally projective synchronization is obtained. Finally, taking the hyperchaotic Lü system as an example, some numerical simulations are provided to demonstrate the effectiveness and feasibility of the proposed technique.
Bayesian STSA estimation using masking properties and generalized Gamma prior for speech enhancement
Parchami, Mahdi; Zhu, Wei-Ping; Champagne, Benoit; Plourde, Eric
2015-12-01
We consider the estimation of the speech short-time spectral amplitude (STSA) using a parametric Bayesian cost function and speech prior distribution. First, new schemes are proposed for the estimation of the cost function parameters, using an initial estimate of the speech STSA along with the noise masking feature of the human auditory system. This information is further employed to derive a new technique for the gain flooring of the STSA estimator. Next, to achieve better compliance with the noisy speech in the estimator's gain function, we take advantage of the generalized Gamma distribution in order to model the STSA prior and propose an SNR-based scheme for the estimation of its corresponding parameters. It is shown that in Bayesian STSA estimators, the exploitation of a rough STSA estimate in the parameter selection for the cost function and the speech prior leads to more efficient control on the gain function values. Performance evaluation in different noisy scenarios demonstrates the superiority of the proposed methods over the existing parametric STSA estimators in terms of the achieved noise reduction and introduced speech distortion.
ARTIFICIAL INTELLIGENCE TECHNIQUES FOR ESTIMATING THE EFFORT IN SOFTWARE DEVELOPMENT PROJECTS
Ferreira, G.; Gálvez, D.
2015-06-01
Full Text Available Among the most popular algorithmic cost and effort estimation models are COCOMO, SLIM, and Function Points. However, since the 90s, models based on Artificial Intelligence, mainly Machine Learning techniques, have been used to improve the accuracy of the estimates. These models are based on two fundamental aspects: the use of data collected in previous projects where estimates were performed, and the application of various knowledge extraction techniques, with the idea of making estimates more efficiently, effectively and, if possible, with greater precision. The aim of this paper is to present an analysis of some of these techniques and how they are being applied in estimating the effort of software projects.
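The first of the two aspects, reusing data from past projects, is what the simplest Machine Learning estimators do directly. A toy analogy-based (k-nearest-neighbour) sketch with hypothetical feature vectors and effort values, standing in for the more elaborate techniques the paper surveys:

```python
def knn_effort(history, new_features, k=2):
    """Estimate effort as the mean effort of the k past projects whose
    feature vectors are closest (squared Euclidean distance)."""
    ranked = sorted(history, key=lambda p: sum((a - b) ** 2
                    for a, b in zip(p["features"], new_features)))
    return sum(p["effort"] for p in ranked[:k]) / k

history = [  # (size in KLOC, team experience) -> effort in person-months; made up
    {"features": (10, 3), "effort": 120},
    {"features": (12, 4), "effort": 150},
    {"features": (40, 9), "effort": 700},
]
estimate = knn_effort(history, (11, 3), k=2)   # averages the two closest projects
```

Here the two nearest past projects have efforts 120 and 150, so the estimate is 135; real estimators differ mainly in the distance measure, feature weighting, and how neighbours are combined.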
E. M. E. Zayed
2014-01-01
Full Text Available We apply the generalized projective Riccati equations method to find the exact traveling wave solutions of some nonlinear evolution equations with any-order nonlinear terms, namely, the nonlinear Pochhammer-Chree equation, the nonlinear Burgers equation and the generalized, nonlinear Zakharov-Kuznetsov equation. This method presents wider applicability for handling many other nonlinear evolution equations in mathematical physics.
A Computer-Based Laboratory Project for the Study of Stimulus Generalization and Peak Shift
Derenne, Adam; Loshek, Eevett
2009-01-01
This paper describes materials designed for classroom projects on stimulus generalization and peak shift. A computer program (originally written in QuickBASIC) is used for data collection and a Microsoft Excel file with macros organizes the raw data on a spreadsheet and creates generalization gradients. The program is designed for use with human…
van den Dungen C
2011-11-01
Full Text Available Background: General practice based registration networks (GPRNs) provide information on morbidity rates in the population. Morbidity rate estimates from different GPRNs, however, reveal considerable, unexplained differences. We studied the range and variation in morbidity estimates, as well as the extent to which the differences in morbidity rates between general practices and networks change if socio-demographic characteristics of the listed patient populations are taken into account. Methods: The variation in incidence and prevalence rates of thirteen diseases among six Dutch GPRNs and the influence of age, gender, socio-economic status (SES), urbanization level, and ethnicity are analyzed using multilevel logistic regression analysis. Results are expressed as median odds ratios (MOR). Results: We observed large differences in morbidity rate estimates both at the level of general practices and at the level of networks. The differences in SES, urbanization level, and ethnicity distribution among the networks' practice populations are substantial. The variation in morbidity rate estimates among networks did not decrease after adjusting for these socio-demographic characteristics. Conclusion: Socio-demographic characteristics of populations do not explain the differences in morbidity estimates among GPRNs.
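The median odds ratio used to express the results has a closed form under the multilevel logistic model: MOR = exp(sqrt(2·σ²)·Φ⁻¹(0.75)), where σ² is the between-cluster (practice or network) variance on the log-odds scale. A short sketch; the variance value below is illustrative, not from the study:

```python
from math import exp, sqrt
from statistics import NormalDist

def median_odds_ratio(var_between):
    """MOR = exp(sqrt(2 * var) * Phi^-1(0.75)): the median odds ratio between
    two randomly drawn clusters, with the ratio oriented to be >= 1."""
    return exp(sqrt(2.0 * var_between) * NormalDist().inv_cdf(0.75))

mor = median_odds_ratio(0.5)   # hypothetical between-practice variance of 0.5
```

A variance of 0 gives MOR = 1 (no between-cluster heterogeneity); larger variances push the MOR up, so an MOR well above 1 after adjustment is exactly the unexplained variation the study reports.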
Jie Li DING; Xi Ru CHEN
2006-01-01
For generalized linear models (GLMs), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) β̂_n of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and the asymptotic normality of β̂_n.
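The canonical GLM whose MLE these results cover is logistic regression, where the likelihood equations are solved by Newton-Raphson (iteratively reweighted least squares). A minimal sketch on simulated data with a stochastic regressor; the data-generating values are arbitrary:

```python
import numpy as np

def logistic_mle(X, y, iters=30):
    """Newton-Raphson for the logistic-regression MLE: repeatedly solve
    H(beta) @ step = score(beta), with H = X' W X and W = diag(p(1-p))."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])      # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))  # score step
    return beta

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)                         # stochastic regressor
X = np.column_stack([np.ones(n), x])
p_true = 1.0 / (1.0 + np.exp(-(0.5 - 1.0 * x)))    # true beta = (0.5, -1.0)
y = (rng.random(n) < p_true).astype(float)
beta_hat = logistic_mle(X, y)
```

Consistency shows up empirically: β̂_n wanders toward (0.5, −1.0) as n grows, and across replications its spread shrinks at the √n rate that asymptotic normality predicts.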
MA Qinghua; YANG Enhao
2000-01-01
An estimation method for solutions to the general linear system of Volterra-type integral inequalities containing several iterated integral functionals is obtained. This method is based on a result proved by the second author in J. Math. Anal. Appl. (1984). A certain two-dimensional system of nonlinear ordinary differential equations is also discussed to demonstrate the usefulness of our method.
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
A multivariate family-based association test using generalized estimating equations : FBAT-GEE
Lange, C; Silverman, SK; Xu; Weiss, ST; Laird, NM
2003-01-01
In this paper we propose a multivariate extension of family-based association tests based on generalized estimating equations. The test can be applied to multiple phenotypes and to phenotypic data obtained in longitudinal studies without making any distributional assumptions for the phenotypic observations.
Optimal error estimates for Fourier spectral approximation of the generalized KdV equation
Zhen-guo DENG; He-ping MA
2009-01-01
A Fourier spectral method for the generalized Korteweg-de Vries equation with periodic boundary conditions is analyzed, and a corresponding optimal error estimate in the L2-norm is obtained. It improves the result presented by Maday and Quarteroni. A modified Fourier pseudospectral method is also presented, with the same convergence properties as the Fourier spectral method.
Kovalchik, Stephanie A; Varadhan, Ravi; Fetterman, Barbara; Poitras, Nancy E; Wacholder, Sholom; Katki, Hormuzd A
2013-02-28
Estimates of absolute risks and risk differences are necessary for evaluating the clinical and population impact of biomedical research findings. We have developed a linear-expit regression model (LEXPIT) to incorporate linear and nonlinear risk effects to estimate absolute risk from studies of a binary outcome. The LEXPIT is a generalization of both the binomial linear and logistic regression models. The coefficients of the LEXPIT linear terms estimate adjusted risk differences, whereas the exponentiated nonlinear terms estimate residual odds ratios. The LEXPIT could be particularly useful for epidemiological studies of risk association, where adjustment for multiple confounding variables is common. We present a constrained maximum likelihood estimation algorithm that ensures the feasibility of risk estimates of the LEXPIT model and describe procedures for defining the feasible region of the parameter space, judging convergence, and evaluating boundary cases. Simulations demonstrate that the methodology is computationally robust and yields feasible, consistent estimators. We applied the LEXPIT model to estimate the absolute 5-year risk of cervical precancer or cancer associated with different Pap and human papillomavirus test results in 167,171 women undergoing screening at Kaiser Permanente Northern California. The LEXPIT model found an increased risk due to an abnormal Pap test in human papillomavirus-negative women that was not detected with logistic regression. Our R package blm provides free and easy-to-use software for fitting the LEXPIT model.
Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada
We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in terms of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air-conditioner noise, and open windows.
Yun-zhi Zou
2012-01-01
Full Text Available A new class of generalized dynamical systems involving generalized f-projection operators is introduced and studied in Banach spaces. By using the fixed-point theorem due to Nadler, the equilibrium point set of this class of generalized global dynamical systems is proved to be nonempty and closed under some suitable conditions. Moreover, the solution set of the systems with set-valued perturbation is shown to be continuous with respect to the initial value.
Projection-Based linear constrained estimation and fusion over long-haul links
Rao, Nageswara S [ORNL
2016-01-01
We study estimation and fusion with linear dynamics in long-haul sensor networks, wherein a number of sensors are remotely deployed over a large geographical area to perform tasks such as target tracking, and a remote fusion center combines the information provided by these sensors in order to improve the overall tracking accuracy. In reality, the motion of a dynamic target may be subject to certain constraints, for instance those defined by a road network. We explore how the accuracy of projection-based constrained estimation and fusion methods is affected by information loss over the long-haul links. We use an example to compare the tracking errors under various implementations of centralized and distributed projection-based estimation and fusion methods and demonstrate the effectiveness of projection-based methods in these settings.
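The projection step such methods share has a standard closed form: given an unconstrained estimate x̂ with covariance P and a linear constraint Ax = b (e.g. a locally linearized road segment), the minimum-variance projected estimate is x̄ = x̂ − PAᵀ(APAᵀ)⁻¹(Ax̂ − b). A small sketch with an illustrative constraint; the paper's tracking and fusion machinery around this step is omitted:

```python
import numpy as np

def project_estimate(x_hat, P, A, b):
    """Project estimate x_hat (covariance P) onto the constraint A x = b."""
    K = P @ A.T @ np.linalg.inv(A @ P @ A.T)
    return x_hat - K @ (A @ x_hat - b)

# Target known to travel on the line x = y (road constraint [1, -1] . x = 0).
x_hat = np.array([2.0, 1.0])     # raw fused estimate, slightly off the road
P = np.eye(2)                    # its covariance
A = np.array([[1.0, -1.0]])
b = np.array([0.0])
x_bar = project_estimate(x_hat, P, A, b)   # -> [1.5, 1.5], back on the road
```

The projected estimate satisfies the constraint exactly, and with P = I it is simply the orthogonal projection of the raw estimate onto the road.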
Parameter estimation in the presence of the most general Gaussian dissipative reservoir
Jarzyna, Marcin; Zwierz, Marcin
2017-01-01
We analyze the performance of quantum parameter estimation in the presence of the most general Gaussian dissipative reservoir. We derive lower bounds on the precision of phase estimation and a closely related problem of frequency estimation. For both problems we show that it is impossible to achieve the Heisenberg limit asymptotically in the presence of such a reservoir. However, we also find that for any fixed number of probes used in the setup there exists a Gaussian dissipative reservoir, which, in principle, allows for the Heisenberg-limited performance for that number of probes. We discuss a realistic implementation of a frequency estimation scheme in the presence of a Gaussian dissipative reservoir in a cavity system.
The NASA Environmentally Responsible Aviation Project/General Electric Open Rotor Test Campaign
Van Zante, Dale
2013-01-01
The Open Rotor is a modern version of the UnDucted Fan (UDF) that was flight tested in the late 1980's through a partnership between NASA and General Electric (GE). Tests were conducted in the 9'x15' Low Speed Wind Tunnel and the 8'x6' Supersonic Wind Tunnel starting in late 2009 and completed in early 2012. Aerodynamic and acoustic data were obtained for takeoff, approach and cruise simulations. GE was the primary partner, but other organizations were involved, such as Boeing and Airbus, who provided additional hardware for fuselage simulations. This test campaign provided the acoustic and performance characteristics for modern open rotor blade designs. NASA and GE conducted joint systems analysis to evaluate how well new blade designs would perform on a B737 class aircraft, and compared the results to an advanced higher bypass ratio turbofan. Acoustic shielding experiments were performed at NASA GRC and Boeing LSAF facilities to provide data for noise estimates of unconventional aircraft configurations with Open Rotor propulsion systems. The work was sponsored by NASA's aeronautics programs, including the Subsonic Fixed Wing (SFW) and the Environmentally Responsible Aviation (ERA) projects.
Early cost estimating for road construction projects using multiple regression techniques
Ibrahim Mahamid
2011-12-01
Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As cost estimates are required at the early stages of a project, consideration was given to the fact that the input data for the required regression models should be easily extractable from sketches or the scope definition of the project. Eleven regression models are developed to estimate the total cost of road construction projects in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r2 for the developed models ranges from 0.92 to 0.98, which indicates that the predicted values from the forecast models fit well with the real-life data. The values of the mean absolute percentage error (MAPE) of the developed regression models range from 13% to 31%; the results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
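The form of the second group of models, total cost regressed on road length and width, together with the MAPE used to judge them, can be sketched in a few lines. The data below are made up for illustration; the study fits eleven such models to 131 real projects:

```python
import numpy as np

def fit_cost_model(length, width, cost):
    """Least-squares fit of cost = b0 + b1*length + b2*width, plus the
    mean absolute percentage error (MAPE) of the fitted model."""
    A = np.column_stack([np.ones(len(cost)), length, width])
    coef, *_ = np.linalg.lstsq(A, cost, rcond=None)
    pred = A @ coef
    mape = float(np.mean(np.abs((cost - pred) / cost)) * 100.0)
    return coef, mape

length = np.array([1.0, 2.0, 3.0, 4.0])   # km, hypothetical
width = np.array([6.0, 6.0, 8.0, 8.0])    # m, hypothetical
cost = 50 + 100 * length + 10 * width     # exactly linear toy costs
coef, mape = fit_cost_model(length, width, cost)
```

On these noiseless toy data the fit is exact and the MAPE is zero; on real data the residual MAPE (13-31% in the study) is the headline accuracy figure.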
Cost Estimation of Web Projects in Context with the Agile Paradigm: Improvements and Validation
2013-01-01
Agile practitioners have expressed concern over their inability to correctly estimate costs associated with Agile web software development. This concern has become even more critical as costs associated with development continue to increase. As a result, significant research attention is now directed toward gaining a better understanding of web based projects in the context of the Agile software-development process, as well as constructing and evaluating calibrated software cost estimating tools. T...
Song, Kai-Sheng
2008-08-01
Many applications in real-time signal, image, and video processing require automatic algorithms for rapid characterization of signals and images through fast estimation of their underlying statistical distributions. We present fast and globally convergent algorithms for estimating the three-parameter generalized gamma distribution (GΓD). The proposed method is based on novel scale-independent shape estimation (SISE) equations. We show that the SISE equations have a unique global root in their semi-infinite domains and that the probability that the sample SISE equations have a unique global root tends to one. The consistency of the global root and of its scale and index shape estimators is obtained. Furthermore, we establish that, with probability tending to one, Newton-Raphson (NR) algorithms for solving the sample SISE equations converge globally to the unique root from any initial value in its given domain. In contrast to existing methods, another remarkable novelty is that the sample SISE equations are completely independent of gamma and polygamma functions and involve only elementary mathematical operations, making the algorithms well suited for real-time hardware and software implementations. The SISE estimators also allow the maximum likelihood (ML) ratio procedure to be carried out for testing the generalized Gaussian distribution (GGD) versus the GΓD. Finally, the fast global convergence and accuracy of our algorithms for finite samples are demonstrated by both simulation studies and real image analysis.
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the relationship between reported cases of influenza in Catalonia (Spain) and several covariates: population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates using a descriptive analysis. Generalized Linear Models, Generalized Additive Models, and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can capture data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100,000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
Risk Consideration and Cost Estimation in Construction Projects Using Monte Carlo Simulation
Claudius A. Peleskei
2015-06-01
Full Text Available Construction projects usually involve high investment. They are therefore a risky adventure for companies, as actual costs of construction projects nearly always exceed the planned scenario. This is due to the various risks and the large uncertainty existing within this industry. Determination and quantification of risks and their impact on project costs is described as one of the most difficult areas within the construction industry. This paper analyses how the cost of construction projects can be estimated using Monte Carlo simulation. It investigates whether the different cost elements in a construction project follow a specific probability distribution. The research examines the effect of correlation between different project costs on the result of the Monte Carlo simulation. The paper finds that Monte Carlo simulation can be a helpful tool for risk managers and can be used for cost estimation of construction projects. The research has shown that cost distributions are positively skewed and cost elements seem to have some interdependent relationships.
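The set-up the paper describes, skewed cost-element distributions with correlation between elements, can be sketched as follows. The element distributions, parameters, and correlation below are illustrative assumptions; the paper fits distributions to real project data:

```python
import numpy as np

def simulate_total_cost(n_sims=100_000, rho=0.6, seed=0):
    """Monte Carlo total cost for two correlated, right-skewed cost elements:
    draw correlated standard normals, then map each through a lognormal
    transform (a common simplified way to get correlated skewed costs)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_sims)
    structure = 100.0 * np.exp(0.25 * z[:, 0])   # cost element 1 (e.g. structure)
    finishing = 60.0 * np.exp(0.40 * z[:, 1])    # cost element 2 (e.g. finishing)
    return structure + finishing

total = simulate_total_cost()
p80 = np.percentile(total, 80)   # a budget with ~80% chance of sufficiency
```

The positive skew the paper reports shows up directly: the simulated mean exceeds the median, so budgeting at a point estimate under-covers the right tail, and positive correlation between elements widens the distribution relative to treating them as independent.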
Varneskov, Rasmus T.
2014-01-01
This paper analyzes a generalized class of flat-top realized kernels for estimation of the quadratic variation spectrum, i.e. the decomposition of quadratic variation into integrated variance and jump variation, when the underlying, efficient price process is contaminated by additive noise. The flat-top estimators are shown to be consistent, asymptotically unbiased, and mixed Gaussian at the optimal rate of convergence, n^(1/4). Exact bounds on lower order terms are obtained using maximal inequalities, and these are used to derive a conservative, MSE-optimal flat-top shrinkage. Additionally, bounds...
The Connection Between the Metric and Generalized Projection Operators in Banach Spaces
Yakov ALBER; Jin Lu LI
2007-01-01
In this paper we study the connection between the metric projection operator PK:B→K, where B is a reflexive Banach space with dual space B* and K is a non-empty closed convex subset of B, and the generalized projection operators ΠK:B→K and πK:B*→K. We also present some results in non-reflexive Banach spaces.
Junwei Sun
2014-01-01
Full Text Available Some important dynamical properties of the memristor chaotic oscillator system are studied in this paper. A novel hybrid dislocated control method and a general hybrid projective dislocated synchronization scheme are realized for the memristor chaotic oscillator system. The paper first presents the hybrid dislocated control method for stabilizing chaos to the unstable equilibrium point. Based on the Lyapunov stability theorem, general hybrid projective dislocated synchronization is studied for the drive memristor chaotic oscillator system and an identical response memristor chaotic oscillator system. General hybrid projective dislocated synchronization is also realized between the memristor chaotic oscillator system and another chaotic system of different dimension. Numerical simulations are given to show the effectiveness of these methods.
Bouleux, Guillaume
2013-12-01
Diagnosing defects in rotating machines can be approached from several angles. When dealing with an asynchronous motor drive, physical elements rotate, so a natural angle for assessing the health of the motor is the use of spectral analysis tools. It is now established that electrical or mechanical defects, which likewise appear periodically, can be retrieved by analyzing the amplitude of particular frequencies in an estimated power spectrum. When dealing with broken rotor bar detection, it is essential to accurately localize the frequencies related to the slip inside the power spectrum. The diagnosis is thereafter made using indicators given with respect to their power. For actual low-load operations, the supply frequency generally masks the frequencies which could be exploited as indicators. Therefore, we propose to cancel, as well as possible, the contribution of this supply frequency in order to reveal the useful, closely spaced components. The resolution should be very fine for the components to be estimated. Consequently, we use a prior-knowledge subspace-based frequency estimator, already developed in the literature, which we complete with an oblique projection coupled with a total least squares solution for estimating the power of the resulting estimated frequencies. Finally, we show by means of a real application how it contributes to improve the power spectrum estimation when compared to FFT or periodogram-based analysis and how the aforementioned power spectrum makes the diagnosis indicator for rotor bars efficient.
Han, Fang; Liu, Han
2016-01-01
Correlation matrices play a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state of the art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tailed distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating the high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that, after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we present for the first time a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
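The transformed estimator studied here is simple to state: compute the pairwise Kendall's tau matrix and apply the sine transform R̂_jk = sin(π·τ̂_jk/2), which recovers the latent Pearson correlation under the transelliptical family. A small numeric sketch with simulated data (an O(n²) tau for clarity; the exp() applied to one margin illustrates invariance to monotone transformations):

```python
import numpy as np
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a from concordant/discordant pairs (O(n^2); demo only)."""
    n = len(x)
    s = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            for i, j in combinations(range(n), 2))
    return 2.0 * s / (n * (n - 1))

def transformed_corr(X):
    """Rank-based latent correlation estimate: R_jk = sin(pi * tau_jk / 2)."""
    p = X.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            R[j, k] = R[k, j] = np.sin(np.pi * kendall_tau(X[:, j], X[:, k]) / 2.0)
    return R

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=500)
X = np.column_stack([z[:, 0], np.exp(z[:, 1])])   # monotone-transformed margin
R = transformed_corr(X)                           # R[0, 1] estimates the latent 0.5
```

Because tau depends only on ranks, the exp() transform of the second margin leaves the estimate unchanged, which is precisely the robustness that motivates the rank-based approach.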
A Boosting Algorithm for Estimating Generalized Propensity Scores with Continuous Treatments.
Zhu, Yeying; Coffman, Donna L; Ghosh, Debashis
2015-03-01
In this article, we study the causal inference problem with a continuous treatment variable using propensity score-based methods. For a continuous treatment, the generalized propensity score is defined as the conditional density of the treatment-level given covariates (confounders). The dose-response function is then estimated by inverse probability weighting, where the weights are calculated from the estimated propensity scores. When the dimension of the covariates is large, the traditional nonparametric density estimation suffers from the curse of dimensionality. Some researchers have suggested a two-step estimation procedure by first modeling the mean function. In this study, we suggest a boosting algorithm to estimate the mean function of the treatment given covariates. In boosting, an important tuning parameter is the number of trees to be generated, which essentially determines the trade-off between bias and variance of the causal estimator. We propose a criterion called average absolute correlation coefficient (AACC) to determine the optimal number of trees. Simulation results show that the proposed approach performs better than a simple linear approximation or L2 boosting. The proposed methodology is also illustrated through the Early Dieting in Girls study, which examines the influence of mothers' overall weight concern on daughters' dieting behavior.
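The two-step procedure described above can be sketched as follows. This is a simplified stand-in, not the paper's implementation: it assumes a normal residual model for the treatment, uses scikit-learn's gradient boosting with an arbitrary number of trees (the paper selects this via its AACC criterion, which is not implemented here), and runs on synthetic data:

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic covariates X and a continuous treatment T depending on X.
n = 2000
X = rng.normal(size=(n, 3))
T = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

# Step 1: boosted estimate of the conditional mean E[T | X]. n_estimators
# is the bias/variance tuning parameter discussed in the abstract; 100 is
# an arbitrary placeholder, not an AACC-selected value.
gbm = GradientBoostingRegressor(n_estimators=100, max_depth=2, random_state=0)
gbm.fit(X, T)
resid = T - gbm.predict(X)

# Step 2: generalized propensity score under a normal residual model, and
# stabilized inverse-probability weights for the dose-response estimation.
sigma = resid.std()
gps = norm.pdf(T, loc=gbm.predict(X), scale=sigma)
weights = norm.pdf(T, loc=T.mean(), scale=T.std()) / gps
```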
General framework for estimating the ultimate precision limit in noisy quantum-enhanced metrology
Escher, B M; Davidovich, L; 10.1038/nphys1958
2012-01-01
The estimation of parameters characterizing dynamical processes is central to science and technology. The estimation error changes with the number N of resources employed in the experiment (which could quantify, for instance, the number of probes or the probing energy). Typically, it scales as 1/N^(1/2). Quantum strategies may improve the precision, for noiseless processes, by an extra factor 1/N^(1/2). For noisy processes, it is not known in general if and when this improvement can be achieved. Here we propose a general framework for obtaining attainable and useful lower bounds for the ultimate limit of precision in noisy systems. We apply this bound to lossy optical interferometry and atomic spectroscopy in the presence of dephasing, showing that it captures the main features of the transition from the 1/N to the 1/N^(1/2) behaviour as N increases, independently of the initial state of the probes, and even with use of adaptive feedback.
Balancing uncertainty of context in ERP project estimation: an approach and a case study
Daneva, Maia
2010-01-01
The increasing demand for Enterprise Resource Planning (ERP) solutions as well as the high rates of troubled ERP implementations and outright cancellations calls for developing effort estimation practices to systematically deal with uncertainties in ERP projects. This paper describes an approach -
The Solution Structure and Error Estimation for The Generalized Linear Complementarity Problem
Tingfa Yan
2014-07-01
Full Text Available In this paper, we consider the generalized linear complementarity problem (GLCP). First, we develop some equivalent reformulations of the problem under milder conditions and then characterize the solution of the GLCP. Second, we establish a global error estimation for the GLCP under weakened assumptions. The results obtained in this paper can be viewed as an extension of those for classical linear complementarity problems.
ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS
YUE LI; CHEN XIRU
2005-01-01
For the Generalized Linear Model (GLM), under some conditions including correct specification of the expectation, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) when the specification of the covariance matrix is correct.
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-09
The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in those clinics that have only single-detector systems and thus are not able to perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, this can result in additional image artifacts. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of
P. Ribereau
2008-12-01
Full Text Available Since the pioneering work of Landwehr et al. (1979), Hosking et al. (1985) and their collaborators, the Probability Weighted Moments (PWM) method has been a popular, simple and efficient way to estimate the parameters of the Generalized Extreme Value (GEV) distribution when modeling the distribution of maxima (e.g., annual maxima of precipitation) in the Identically and Independently Distributed (IID) context. When the IID assumption is not satisfied, a flexible alternative, the Maximum Likelihood Estimation (MLE) approach, offers an elegant way to handle non-stationarities by letting the GEV parameters be time dependent. Despite its qualities, the MLE applied to the GEV distribution does not always provide accurate return level estimates, especially for small sample sizes or heavy tails. These drawbacks are particularly acute in some non-stationary situations. To reduce these negative effects, we propose to extend the PWM method to a more general framework that enables us to model temporal covariates and provide accurate GEV-based return levels. Theoretical properties of our estimators are discussed. Small and moderate sample size simulations in a non-stationary context are analyzed, and two brief applications to annual maxima of CO_{2} and seasonal maxima of cumulated daily precipitation are presented.
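In the stationary IID case the PWM-GEV fit is available in closed form via the Hosking et al. (1985) approximation. A minimal sketch on simulated data (Hosking's sign convention, shape k > 0 for a bounded upper tail; synthetic sample, not the CO2 or precipitation data):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# IID GEV sample via inverse transform: x(F) = xi + alpha*(1-(-ln F)^k)/k.
xi, alpha, k = 0.0, 1.0, 0.2          # location, scale, shape
u = rng.uniform(size=20000)
sample = xi + alpha * (1.0 - (-np.log(u)) ** k) / k

# Sample probability weighted moments b0, b1, b2 (Landwehr et al., 1979).
x = np.sort(sample)
n = len(x)
j = np.arange(1, n + 1)
b0 = x.mean()
b1 = np.sum((j - 1) / (n - 1) * x) / n
b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n

# Hosking et al. (1985) closed-form GEV estimators from the PWMs.
c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
k_hat = 7.8590 * c + 2.9554 * c ** 2
g = math.gamma(1 + k_hat)
alpha_hat = (2 * b1 - b0) * k_hat / (g * (1 - 2 ** (-k_hat)))
xi_hat = b0 + alpha_hat * (g - 1) / k_hat
```

The paper's extension replaces the constant parameters above with regressions on temporal covariates; the closed-form step then no longer applies directly.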
Global prevalence of diabetes: estimates for 2000 and projections for 2030
Wild, Sarah; Roglic, Gojka; Green, Anders
2004-01-01
OBJECTIVE — The goal of this study was to estimate the prevalence of diabetes and the number of people of all ages with diabetes for years 2000 and 2030. RESEARCH DESIGN AND METHODS — Data on diabetes prevalence by age and sex from a limited number of countries were extrapolated to all 191 World Health Organization member states and applied to United Nations’ population estimates for 2000 and 2030. Urban and rural populations were considered separately for developing countries. RESULTS — The prevalence of diabetes for all age-groups worldwide was estimated to be 2.8% in 2000 and 4.4% in 2030. The total number of people with diabetes is projected to rise from 171 million in 2000 to 366 million in 2030. The prevalence of diabetes is higher in men than women, but there are more women with diabetes than men. The urban population in developing countries is projected to double between 2000 and 2030...
Projection-Based Linear Constrained Estimation and Fusion over Long-Haul Links
Rao, Nageswara S [ORNL
2016-01-01
In this work, we study estimation and fusion with linear dynamics in long-haul sensor networks, wherein a number of sensors are remotely deployed over a large geographical area to perform tasks such as target tracking, and a remote fusion center combines the information provided by these sensors in order to improve the overall tracking accuracy. In reality, the motion of a dynamic target may be subject to certain constraints, for instance those defined by a road network. We explore the accuracy of projection-based constrained estimation and fusion methods as affected by information loss over the long-haul links. We use a tracking example to compare the tracking errors under various implementations of centralized and distributed projection-based estimation and fusion methods.
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
无
2004-01-01
[1] McCullagh, P., Nelder, J. A., Generalized Linear Models, New York: Chapman and Hall, 1989. [2] Wedderburn, R. W. M., Quasi-likelihood functions, generalized linear models and the Gauss-Newton method, Biometrika, 1974, 61: 439-447. [3] Fahrmeir, L., Maximum likelihood estimation in misspecified generalized linear models, Statistics, 1990, 21: 487-502. [4] Fahrmeir, L., Kaufmann, H., Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models, Ann. Statist., 1985, 13: 342-368. [5] Nelder, J. A., Pregibon, D., An extended quasi-likelihood function, Biometrika, 1987, 74: 221-232. [6] Bennett, G., Probability inequalities for the sum of independent random variables, JASA, 1962, 57: 33-45. [7] Stout, W. F., Almost Sure Convergence, New York: Academic Press, 1974. [8] Petrov, V. V., Sums of Independent Random Variables, Berlin, New York: Springer-Verlag, 1975.
National data analysis of general radiography projection method in medical imaging
Kim, Jung Su; Seo, Deok Nam; Choi, In Seok [Dept. of Bio-Convergence Engineering, Korea University Graduate School, Seoul (Korea, Republic of); and others
2014-09-15
According to the database of medical institutions of the Health Insurance Review and Assessment Service, in 2013 there were 1,118 hospitals and clinics with a department of radiology in Korea, equipped with CT, fluoroscopic and general radiographic systems. Among these, general radiographic equipment is the most commonly used in the radiology department, and most of it is now changing from film-screen radiography systems to digital radiography systems. However, many radiography departments still rely on projection techniques established for film-screen systems. Therefore, in this study, we surveyed the current status of technical parameters for general radiography used in hospitals and investigated general radiographic techniques in domestic medical institutions. We analyzed 26 radiographic projection methods, including chest, skull, spine and pelvis, which are commonly used in the radiography department.
Estimation of land surface evaporation using a generalized nonlinear complementary relationship
Zhang, Lu; Cheng, Lei; Brutsaert, Wilfried
2017-02-01
Evaporation is a key component of the hydrological cycle and affects regional water resources. Although the physics of evaporation is well understood, its estimation in practice remains a challenge. Among available methods for estimating it, the complementary principle of Bouchet has the potential to provide a practical tool for regional water resources assessment. In this study, the generalized nonlinear formulation of this principle by Brutsaert (2015) was tested against evaporation measurements from four flux stations in Australia under different climatic and vegetation conditions. The method was implemented using meteorological data and Class A pan evaporation measurements. After calibration the estimated daily evaporation values were in good agreement with flux station measurements with a mean correlation coefficient of 0.83 and a bias of 4% on average. More accurate estimates of daily evaporation were obtained when the evaporative demand or apparent potential evaporation was determined from the Penman equation instead of from pan evaporation. The obtained parameter values were found to lie well within the ranges of reported values in the literature. Advantages of the method are that only routine meteorological data are required and that it can be used to estimate long-term evaporation trends.
Tribe, Lorena; Cooper, Evan L.
2008-01-01
A well-structured independent literature research project with a poster session was used to introduce students to peer-reviewed literature in a general chemistry course. Overall, students reported an enhanced appreciation of the course due to performing research at some level, using peer-reviewed literature, and presenting their results in a…
Why Don't They Just Give Us Money? Project Cost Estimating and Cost Reporting
Comstock, Douglas A.; Van Wychen, Kristin; Zimmerman, Mary Beth
2015-01-01
Successful projects require an integrated approach to managing cost, schedule, and risk. This is especially true for complex, multi-year projects involving multiple organizations. To explore solutions and leverage valuable lessons learned, NASA's Virtual Project Management Challenge will kick off a three-part series examining some of the challenges faced by project and program managers when it comes to managing these important elements. In this first session of the series, we will look at cost management, with an emphasis on the critical roles of cost estimating and cost reporting. By taking a proactive approach to both of these activities, project managers can better control life cycle costs, maintain stakeholder confidence, and protect other current and future projects in the organization's portfolio. Speakers will be Doug Comstock, Director of NASA's Cost Analysis Division, Kristin Van Wychen, Senior Analyst in the GAO Acquisition and Sourcing Management Team, and Mary Beth Zimmerman, Branch Chief for NASA's Portfolio Analysis Branch, Strategic Investments Division. Moderator Ramien Pierre is from NASA's Academy for Program/Project and Engineering Leadership (APPEL).
Generalized projective synchronization in time-delayed systems: nonlinear observer approach.
Ghosh, Dibakar
2009-03-01
In this paper, we consider the projective-anticipating, projective, and projective-lag synchronization in a unified coupled time-delay system via nonlinear observer design. A new sufficient condition for generalized projective synchronization is derived analytically with the help of Krasovskii-Lyapunov theory for constant and variable time-delay systems. The analytical treatment can give stable synchronization (anticipatory and lag) for a large class of time-delayed systems in which the response system's trajectory is forced to have an amplitude proportional to the drive system. The constant of proportionality is determined by the control law, not by the initial conditions. The proposed technique has been applied to synchronize Ikeda and prototype models by numerical simulation.
Estimate for the decay rate of the general term of the laplace series for the geopotential
Kholshevnikov, K. V.; Shaidulin, V. Sh.
2011-02-01
The exact estimates, with respect to the uniform (Chebyshev) norm, of the general term V_n of the Laplace series in spherical harmonics for the gravitational potential of a planet are known to be functions of the differential properties of the mass distribution. However, they are difficult to use in practice because the norm is difficult to calculate for large n. The mean square (Euclidean) norm, on the other hand, is used almost exclusively in applied research, so the translation of the estimates from one norm to the other can be effected only in particular cases. In the present paper, a power-law estimate of the general term of the Laplace series with respect to the mean square norm is obtained, and its parameters are evaluated numerically. For the EGM2008 geopotential model (Pavlis et al., 2008), the exponent ranges between 2.7 and 3.4. Preliminary calculations are made for a model body for which the harmonic factors are known exactly. A cautious conclusion is drawn: the geopotential model is capable of adequately describing spherical harmonics for n ≤ 1000; for 1000 ≤ n ≤ 2000 the model gives a correct description, at least qualitatively; for larger n the spherical harmonics of the model do not correspond to reality.
PMP Estimations at Sparsely Controlled Andinian Basins and Climate Change Projections
Lagos Zúñiga, M. A.; Vargas, X.
2012-12-01
Probable Maximum Precipitation (PMP) estimation implies an extensive review of hydrometeorological data and an understanding of precipitation formation processes. Different methodologies exist for its estimation, and all of them require good spatial and temporal representation of storms. Estimating hydrometeorological PMP on sparsely controlled basins is a difficult task, especially if the studied area has an important orographic effect and mixed precipitation occurs during the most severe storms. The main task of this study is to propose and estimate PMP in a sparsely controlled basin with abrupt topography and mixed hydrology, also analyzing statistical uncertainties in the estimations and possible climate change effects on them. In this study, PMP estimation under statistical and hydrometeorological approaches (watershed-based and traditional depth-area-duration analysis) was carried out in a semi-arid zone at Puclaro dam in northern Chile. Due to the lack of good spatial meteorological representation in the study zone, we propose a methodology that accounts for the orographic effects of the Andes based on orographic effect patterns from the RCM PRECIS-DGF and annual isohyetal maps. Estimations were validated against precipitation patterns for given winters, considering the snow route and rain gauges along the prevailing wind direction, with good results. The estimations are also compared with the largest areal storms in the USA, Australia, India and China and with frequency analyses at local rain gauge stations in order to decide on the most adequate approach for the study zone. Climate change projections were evaluated with the ECHAM5 GCM, due to its good representation of the seasonality and magnitude of meteorological variables. Temperature projections for the 2040-2065 period show that there would be a rise in the catchment contributing area that would lead to an increase of the
Yoonseok Shin
2015-01-01
Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong bounds on its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimation, although it has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimation at the early stag...
A Refined Method for Estimating the Annual Extreme Wave Heights at A Project Site
徐德伦; 范海梅; 张军
2003-01-01
This paper presents a refined method for estimating the annual extreme wave heights at a coastal or offshore project site on the basis of data acquired at nearby routine hydrographic stations. The method is based on the orthogonality principle in linear mean square estimation of stochastic processes. The error of the method is analyzed and compared with that of the conventional method. It is found that the method can effectively reduce the error as long as some feasible measures are adopted. A simulated test of the method has been conducted in a large-scale wind-wave flume, and the test results are in good agreement with those given by the theoretical error analysis. A scheme to implement the method is proposed on the basis of the error analysis; the scheme is designed to reduce the estimation error as far as possible. The method is also suitable for utilizing satellite wave data in the estimation.
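The orthogonality principle invoked above says that the optimal linear mean-square estimate leaves an error uncorrelated with the data. A minimal sketch, estimating site wave heights from a nearby station (the station/site relationship below is synthetic, not the paper's data or its refinement):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic wave heights: x at a routine hydrographic station, y at the
# project site, linearly related with noise.
x = rng.gamma(shape=4.0, scale=0.5, size=1000)
y = 1.3 * x + rng.normal(scale=0.2, size=1000)

# LMMSE coefficients from second-order statistics (bias=True so the same
# normalization is used for covariance and variance).
a = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b = y.mean() - a * x.mean()
y_hat = a * x + b

# Orthogonality principle: the estimation error is uncorrelated with x.
err = y - y_hat
```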
The use of generalized estimating equations in the analysis of motor vehicle crash data.
Hutchings, Caroline B; Knight, Stacey; Reading, James C
2003-01-01
The purpose of this study was to determine if it is necessary to use generalized estimating equations (GEEs) in the analysis of seat belt effectiveness in preventing injuries in motor vehicle crashes. The 1992 Utah crash dataset was used, excluding crash participants where seat belt use was not appropriate (n=93,633). The model used in the 1996 Report to Congress [Report to congress on benefits of safety belts and motorcycle helmets, based on data from the Crash Outcome Data Evaluation System (CODES). National Center for Statistics and Analysis, NHTSA, Washington, DC, February 1996] was analyzed for all occupants with logistic regression, one level of nesting (occupants within crashes), and two levels of nesting (occupants within vehicles within crashes) to compare the use of GEEs with logistic regression. When using one level of nesting compared to logistic regression, 13 of 16 variance estimates changed more than 10%, and eight of 16 parameter estimates changed more than 10%. In addition, three of the independent variables changed from significant to insignificant (alpha=0.05). With the use of two levels of nesting, two of 16 variance estimates and three of 16 parameter estimates changed more than 10% from the variance and parameter estimates in one level of nesting. One of the independent variables changed from insignificant to significant (alpha=0.05) in the two levels of nesting model; therefore, only two of the independent variables changed from significant to insignificant when the logistic regression model was compared to the two levels of nesting model. The odds ratio of seat belt effectiveness in preventing injuries was 12% lower when a one-level nested model was used. Based on these results, we stress the need to use a nested model and GEEs when analyzing motor vehicle crash data.
Estimating view parameters from random projections for Tomography using spherical MDS
Murugappan Sundar
2010-06-01
Full Text Available Abstract Background During the past decade, computed tomography has been successfully applied to various fields, especially medicine. The estimation of view angles for projections is necessary in some special applications of tomography, for example, determining the structure of viruses using electron microscopy and compensating for the patient's motion over a long scanning period. Methods This work introduces a novel approach, based on spherical multidimensional scaling (sMDS), which transforms the angle estimation problem into a sphere-constrained embedding problem. The proposed approach views each projection as a high dimensional vector with dimensionality equal to the number of sampling points on the projection. Using sMDS, each projection vector is embedded onto a 1D sphere, which parameterizes the projection with respect to view angles in a globally consistent manner. The parameterized projections are used for the final reconstruction of the image through the inverse Radon transform. The entire reconstruction process is non-iterative and computationally efficient. Results The effectiveness of sMDS is verified with various experiments, including the evaluation of the reconstruction quality from different numbers of projections and resistance to different noise levels. The experimental results demonstrate the efficiency of the proposed method. Conclusion Our study provides an effective technique for the solution of 2D tomography with unknown acquisition view angles. The proposed method will be extended to three dimensional reconstructions in our future work. All materials, including source code and demos, are available at https://engineering.purdue.edu/PRECISE/SMDS.
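The embedding idea can be illustrated with classical (unconstrained) MDS as a toy stand-in for sMDS: vectors that depend smoothly on an unknown view angle lie on a circle in feature space, and MDS on their pairwise distances recovers that circle, hence the angles up to a global rotation and reflection. The data model below is synthetic, not the paper's projection data:

```python
import numpy as np

# One feature vector per "view"; uniform sampling keeps the toy example exact.
n, d = 60, 40
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)   # "unknown" view angles
phases = np.linspace(0, 2 * np.pi, d, endpoint=False)
X = np.cos(theta[:, None] - phases[None, :])

# Classical MDS: double-center the squared-distance matrix, embed in 2D.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
emb = vecs[:, -2:] * np.sqrt(vals[-2:])                # top-2 coordinates

# The embedded points lie on a circle; their polar angles recover the view
# angles up to rotation/reflection, which is the sense in which the sphere-
# constrained embedding parameterizes projections consistently.
radii = np.linalg.norm(emb, axis=1)
angles = np.arctan2(emb[:, 1], emb[:, 0])
```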
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean-Philippe; Grimmond, Sue; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2017-04-01
The H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of the Copernicus Sentinels to retrieve the anthropogenic heat flux, a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts of UEB fluxes on the urban heat island and consequently on energy consumption in cities. In URBANFLUXES, the anthropogenic heat flux is estimated as a residual of the UEB. The remaining UEB components, namely the net all-wave radiation, the net change in heat storage, and the turbulent sensible and latent heat fluxes, are therefore independently estimated from Earth Observation (EO), whereas the advection term is included in the error of the anthropogenic heat flux estimation from the UEB closure. The Discrete Anisotropic Radiative Transfer (DART) model is employed to improve the estimation of the net all-wave radiation balance, whereas the Element Surface Temperature Method (ESTM), adjusted to satellite observations, is used to improve the estimation of the net change in heat storage. Furthermore, the estimation of the turbulent sensible and latent heat fluxes is based on the Aerodynamic Resistance Method (ARM). Based on these outcomes, the anthropogenic heat flux QF is estimated by regressing the sum of the turbulent heat fluxes against the available energy. In-situ flux measurements are used to evaluate URBANFLUXES outcomes, and uncertainties are specified and analyzed. URBANFLUXES is expected to prepare the ground for further innovative exploitation of EO in scientific activities (climate variability studies at local and regional scales) and in future and emerging applications (sustainable urban planning, mitigation technologies) to benefit climate change mitigation/adaptation. This study presents the results of the second phase of the project; detailed information on URBANFLUXES is available at: http://urbanfluxes.eu
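The residual estimation rests on the standard urban energy balance Q* + QF = QH + QE + dQS (with advection folded into the error term, as stated above). A minimal sketch of the closure, with made-up flux values for illustration only:

```python
def anthropogenic_heat_flux(q_star, q_h, q_e, dq_s):
    """Residual of the urban energy budget Q* + QF = QH + QE + dQS;
    the neglected advection term ends up in the estimation error."""
    return q_h + q_e + dq_s - q_star

# Illustrative midday fluxes in W/m^2 (not URBANFLUXES results).
qf = anthropogenic_heat_flux(q_star=450.0, q_h=220.0, q_e=120.0, dq_s=140.0)
```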
Cizek, P.
2007-01-01
High breakdown-point regression estimators protect against large errors and data contamination. We generalize the concept of trimming used by many of these robust estimators, such as the least trimmed squares and maximum trimmed likelihood, and propose a general trimmed estimator, which renders
Bias and robustness of uncertainty components estimates in transient climate projections
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty, and internal variability. Quantifying the different uncertainty sources actually raises several problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members classically available for most modeling chains. In this case, the climate response of a given chain and the effect of its internal variability may be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias
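The bias mentioned above is the usual one-way ANOVA inflation: the empirical variance of the chain means carries a residual internal-variability term of order sigma_int^2/m for m members per chain, and the unbiased estimator subtracts it. A minimal sketch on a synthetic ensemble (the two-source configuration of the abstract, with our own made-up variances):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ensemble: n_chains modeling chains, m members each; a member is
# the chain's climate response plus internal-variability noise.
n_chains, m = 20, 5
true_model_var, true_internal_var = 4.0, 9.0
responses = rng.normal(scale=np.sqrt(true_model_var), size=n_chains)
members = responses[:, None] + rng.normal(scale=np.sqrt(true_internal_var),
                                          size=(n_chains, m))

# Internal variability: mean within-chain variance.
internal_var = members.var(axis=1, ddof=1).mean()

# Naive model uncertainty: empirical variance of the chain means. Biased
# upward by internal_var / m; subtracting that term gives the unbiased
# (STANOVA-style) estimate.
naive_model_var = members.mean(axis=1).var(ddof=1)
model_var = naive_model_var - internal_var / m
```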
Estimation of wind velocity over a complex terrain using the Generalized Mapping Regressor
Beccali, M.; Marvuglia, A. [Dipartimento di Ricerche Energetiche ed Ambientali (DREAM), Universita degli Studi di Palermo, Viale delle Scienze - edificio 9, 90128 Palermo (Italy); Cirrincione, G. [Department de Genie Electrique, Universite de Picardie Jules Verne, 33, Rue Saint Leu, 80039 Amiens (France); Serporta, C. [ISSIA-CNR (Institute on Intelligent Systems for the Automation), Section of Palermo, Via Dante12, Palermo (Italy)
2010-03-15
Wind energy evaluation is an important goal in the conversion of energy systems to more environmentally friendly solutions. In this paper, we present a novel approach to wind speed spatial estimation on the isle of Sicily (Italy): an incremental self-organizing neural network (Generalized Mapping Regressor - GMR) is coupled with exploratory data analysis techniques in order to obtain a map of the spatial distribution of the average wind speed over the entire region. First, the topographic surface of the island was modelled using two different neural techniques and by exploiting the information extracted from a digital elevation model of the region. Then, GMR was used for automatic modelling of the terrain roughness. Afterwards, a statistical analysis of the wind data allowed for the estimation of the parameters of the Weibull wind probability distribution function. In the last sections of the paper, the expected values of the Weibull distributions were regionalized using the GMR neural network. (author)
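The Weibull fitting step described above can be sketched with the standard maximum-likelihood fixed-point iteration for the shape parameter; this is a generic estimator on synthetic wind speeds, not the paper's exact statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic wind-speed sample drawn from a Weibull(shape k=2.0, scale lam=7.0) law.
k_true, lam_true = 2.0, 7.0
v = lam_true * rng.weibull(k_true, size=20000)

# Maximum-likelihood estimate of the shape k via fixed-point iteration on
#   1/k = sum(v^k ln v) / sum(v^k) - mean(ln v)
log_v = np.log(v)
k = 1.0
for _ in range(200):
    vk = v ** k
    k_new = 1.0 / (np.dot(vk, log_v) / vk.sum() - log_v.mean())
    if abs(k_new - k) < 1e-10:
        k = k_new
        break
    k = k_new

# The scale follows in closed form once k is known.
lam = np.mean(v ** k) ** (1.0 / k)
```

The fitted (k, lam) pair is what a regionalization step, such as the GMR network in the paper, would then interpolate over the terrain.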
Varneskov, Rasmus T.
2014-01-01
This paper analyzes a generalized class of flat-top realized kernels for estimation of the quadratic variation spectrum, i.e. the decomposition of quadratic variation into integrated variance and jump variation, when the underlying, efficient price process is contaminated by additive noise. The additive noise consists of two orthogonal components, which allows for α-mixing dependent exogenous noise and an asymptotically non-degenerate endogenous correlation structure, respectively. Both components may exhibit polynomially decaying autocovariances. In the absence of jumps, the class of flat-top estimators is shown to be consistent, asymptotically unbiased, and mixed Gaussian at the optimal rate of convergence, n^(1/4). Exact bounds on lower order terms are obtained using maximal inequalities and these are used to derive a conservative, MSE-optimal flat-top shrinkage. Additionally, bounds…
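A minimal sketch of a flat-top realized kernel of this kind is given below, under simplifying assumptions: i.i.d. Gaussian noise, a Parzen taper outside the flat region, no jumps. The bandwidth choice and constants are illustrative, not the paper's MSE-optimal tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Efficient log-price: Brownian motion with known integrated variance.
n = 20000
x = np.cumsum(rng.normal(0.0, np.sqrt(0.04 / n), n))
iv_true = 0.04

# Observed price contaminated by additive i.i.d. microstructure noise.
omega = 0.005
p = x + rng.normal(0.0, omega, n)
r = np.diff(p)

def parzen(t):
    t = abs(t)
    if t <= 0.5:
        return 1 - 6 * t**2 + 6 * t**3
    if t <= 1.0:
        return 2 * (1 - t)**3
    return 0.0

def flat_top_realized_kernel(r, H):
    # gamma_0 plus weighted realized autocovariances: weight exactly 1 at
    # the first lag (the "flat top"), then a smooth Parzen taper k((h-1)/H).
    rk = np.dot(r, r)
    for h in range(1, H + 1):
        gamma = np.dot(r[h:], r[:-h])
        rk += 2.0 * parzen((h - 1) / H) * gamma
    return rk

rv = np.dot(r, r)                           # naive realized variance, badly biased
rk = flat_top_realized_kernel(r, H=int(n ** 0.5))
```

The naive realized variance is inflated by roughly 2nω², while the unit weight at the first lag cancels the noise bias, which is the point of the flat-top design.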
An efficient anti-occlusion depth estimation using generalized EPI representation in light field
Zhu, Hao; Wang, Qing
2016-10-01
Light field cameras have been rapidly developed and are likely to appear in mobile devices in the near future. It is essential to develop efficient and robust depth estimation algorithms for mobile applications. However, existing methods are either slow or lack adaptability to occlusion, such that they are not suitable for mobile computing platforms. In this paper, we present the generalized EPI representation in light field and formulate it using two linear functions. By combining it with light field occlusion theory, a highly efficient and anti-occlusion depth estimation algorithm is proposed. Our algorithm outperforms previous local methods, especially in occlusion areas. Experimental results on public light field datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
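The core geometric idea, that a Lambertian scene point traces a line in the epipolar-plane image (EPI) whose slope encodes disparity, can be illustrated with a toy line fit. This is the classical linear EPI model, not the paper's generalized two-function formulation.

```python
import numpy as np

# In a horizontal EPI, a scene point seen from camera position s appears at
# pixel u = u0 + d * s, where d is the disparity (proportional to inverse
# depth). Fitting that line across views recovers d.
s = np.arange(9, dtype=float)            # 9 views along the camera axis
d_true, u0_true = 1.75, 12.0
u = u0_true + d_true * s + np.random.default_rng(3).normal(0, 0.05, s.size)

# Least-squares line fit u = u0 + d*s; the slope is the disparity estimate.
A = np.column_stack([np.ones_like(s), s])
(u0_hat, d_hat), *_ = np.linalg.lstsq(A, u, rcond=None)
```

Occlusion breaks this single-line assumption, which is why the paper augments the representation before fitting.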
Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes
Hong Chang HU; Lei SONG
2014-01-01
The paper studies a generalized linear model (GLM) y_t = h(x_t'β) + ε_t, t = 1, 2, . . . , n, where ε_1 = η_1, ε_t = ρε_{t-1} + η_t, t = 2, 3, . . . , n, h is a continuously differentiable function, and the η_t are independent and identically distributed random errors with zero mean and finite variance σ². Firstly, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.
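For the special case where h is the identity, estimating (β, ρ, σ²) reduces to the classical Cochrane-Orcutt iteration on quasi-differenced data, sketched below on simulated data. This is a related standard procedure, not the paper's general QML derivation for nonlinear h.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate y_t = x_t' beta + eps_t with AR(1) errors eps_t = rho*eps_{t-1} + eta_t.
n, beta_true, rho_true, sigma_true = 5000, np.array([1.5, -0.7]), 0.6, 1.0
X = rng.normal(size=(n, 2))
eta = rng.normal(0.0, sigma_true, n)
eps = np.empty(n)
eps[0] = eta[0]
for t in range(1, n):
    eps[t] = rho_true * eps[t - 1] + eta[t]
y = X @ beta_true + eps

# Cochrane-Orcutt: alternate OLS for beta on quasi-differenced data and an
# AR(1) fit to the residuals for rho, until rho stabilizes.
rho = 0.0
for _ in range(50):
    Xs, ys = X[1:] - rho * X[:-1], y[1:] - rho * y[:-1]
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    e = y - X @ beta
    rho_new = np.dot(e[1:], e[:-1]) / np.dot(e[:-1], e[:-1])
    if abs(rho_new - rho) < 1e-10:
        rho = rho_new
        break
    rho = rho_new

sigma2 = np.mean((e[1:] - rho * e[:-1]) ** 2)   # innovation variance estimate
```

For nonlinear h one would replace the OLS step with a Gauss-Newton step on the quasi-differenced residuals.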
A general model for estimating actual evaporation from non-saturated surfaces
无
2002-01-01
Based on the energy balance equation and the mass transfer equation, a general model to estimate actual evaporation from non-saturated surfaces was derived. Making use of two concepts, "relative evaporation" and "relative drying power", a relationship was established to account for the departure from saturated conditions. Using this model, the actual evaporation (evapotranspiration) can be calculated without the need for a potential evaporation estimate. Furthermore, the model requires only a few meteorological parameters that are readily and routinely obtainable at standard weather stations. Based on nearly 30 years of data from 432 meteorological stations and 512 hydrological stations in China, combined with GIS, nine typical river basins were selected. Using the data of the selected river basins, the model was tested. The results show that the actual evaporation rate can be estimated with an error of less than 10% in most areas of China, except for a few years in the Yellow River Basin.
Generalized estimation of the ventilatory distribution from the multiple-breath nitrogen washout.
Motta-Ribeiro, Gabriel Casulari; Jandre, Frederico Caetano; Wrigge, Hermann; Giannella-Neto, Antonio
2016-08-02
This work presents a generalized technique to estimate pulmonary ventilation-to-volume (v/V) distributions using the multiple-breath nitrogen washout, in which both tidal volume (VT) and the end-expiratory lung volume (EELV) are allowed to vary during the maneuver. In addition, the volume of the series dead space (vd), unlike the classical model, is considered a common series unit connected to a set of parallel alveolar units. The numerical solution for simulated data, either error-free or with the N2 measurement contaminated with additive Gaussian random noise of 3 or 5% standard deviation, was tested under several conditions in a computational model constituted by 50 alveolar units with unimodal and bimodal distributions of v/V. Non-negative least squares regression with Tikhonov regularization was employed for parameter retrieval. The solution was obtained under either unconstrained or constrained (VT, EELV and vd) conditions. The Tikhonov gain was fixed or estimated and a weighting matrix (WM) was considered. The quality of estimation was evaluated by the sum of squared errors (SSE) between reference and recovered distributions and by the deviations of the first three moments calculated for both distributions. Additionally, a shape classification method was tested to identify the solution as unimodal or bimodal, by counting the number of shape agreements after 1000 repetitions. The accuracy of the results showed a high dependence on the noise amplitude. The best algorithm for SSE and moments included the constrained and the WM solvers, whereas shape agreement improved without WM, resulting in 97.2% for unimodal and 90.0% for bimodal distributions in the highest noise condition. In conclusion, this generalized method was able to identify v/V distributions from a lung model with a common series dead space even with variable VT. Although limitations remain in the presence of experimental noise, an appropriate combination of processing steps was
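The Tikhonov-regularized non-negative least squares step can be sketched on a toy washout-style inverse problem. A simple projected-gradient solver stands in for the paper's solver here, and the kernel, sizes and regularization gain are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inverse problem: b = A x_true + noise, with x_true a nonnegative
# unimodal distribution over compartments and A an ill-conditioned
# washout-style decay kernel.
m, n = 40, 50
grid = np.linspace(0, 1, n)
x_true = np.exp(-((grid - 0.4) / 0.1) ** 2)
A = np.exp(-np.outer(np.arange(1, m + 1), grid))
b = A @ x_true + rng.normal(0, 1e-3, m)

# Tikhonov-regularized non-negative least squares:
#   min_{x >= 0} ||A x - b||^2 + lam^2 ||x||^2
# solved by projected gradient descent (clip to the nonnegative orthant).
lam = 1e-2
AtA = A.T @ A + lam**2 * np.eye(n)
Atb = A.T @ b
step = 1.0 / np.linalg.eigvalsh(AtA).max()
x = np.zeros(n)
for _ in range(20000):
    x = np.clip(x - step * (AtA @ x - Atb), 0.0, None)
```

The nonnegativity constraint is what makes the recovered v/V distribution physically interpretable; the Tikhonov term controls the smearing caused by the ill-conditioned kernel.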
2014-01-01
We project the path of the public debt and primary surpluses for a number of countries in the euro area under a fiscal rule based on a set of estimated fiscal policy reaction functions. Our fiscal rule represents a fiscal analogue to a well-known monetary policy rule, and it is calibrated using country-specific as well as euro area-wide parameter estimates. We then forecast the dynamics of the fiscal aggregates under different convergence, growth, and interest rate scenarios and investigate t...
Cizek, P.
2007-01-01
High breakdown-point regression estimators protect against large errors and data contamination. Motivated by some of them (the least trimmed squares and maximum trimmed likelihood estimators), we propose a general trimmed estimator, which unifies and extends many existing robust procedures. We derive
Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG
2008-01-01
In generalized linear models with fixed design, under the assumption λ_n → ∞ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator β̂_n, which is the root of the quasi-likelihood equation with natural link function Σ_{i=1}^n X_i(y_i − μ(X_i'β)) = 0, is obtained, where λ_n denotes the minimum eigenvalue of Σ_{i=1}^n X_i X_i', the X_i are bounded p × q regressors, and the y_i are q × 1 responses.
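Solving the quasi-likelihood equation above by Newton's method (Fisher scoring) can be sketched for the univariate Poisson case with the natural log link μ(t) = exp(t), a standard special case, on simulated data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Poisson responses with natural (log) link: mu(t) = exp(t), q = 1.
n, beta_true = 4000, np.array([0.5, -0.25])
X = rng.uniform(-1, 1, size=(n, 2))
y = rng.poisson(np.exp(X @ beta_true))

# Newton iteration on the score equation  sum_i X_i (y_i - mu(X_i' beta)) = 0.
beta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)
    info = X.T @ (mu[:, None] * X)       # negative Jacobian of the score
    delta = np.linalg.solve(info, score)
    beta += delta
    if np.linalg.norm(delta) < 1e-10:
        break
```

The root β̂_n found this way is the maximum quasi-likelihood estimator whose asymptotic normality the paper establishes.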
Xu Yuhua [College of Information Science and Technology, Donghua University, Shanghai 201620 (China) and Department of Maths, Yunyang Teacher' s College, Hubei 442000 (China)], E-mail: yuhuaxu2004@163.com; Zhou Wuneng [College of Information Science and Technology, Donghua University, Shanghai 201620 (China)], E-mail: wnzhou@163.com; Fang Jianan [College of Information Science and Technology, Donghua University, Shanghai 201620 (China)
2009-11-15
This paper introduces a modified Lü chaotic system, and some basic dynamical properties are studied. Based on these properties, we present a hybrid dislocated control method for stabilizing chaos to an unstable equilibrium and a limit cycle. In addition, based on the Lyapunov stability theorem, general hybrid projective dislocated synchronization (GHPDS) is proposed, which includes complete dislocated synchronization, dislocated anti-synchronization and projective dislocated synchronization as special cases. The drive and response systems discussed in this paper can be strictly different dynamical systems (including systems of different dimensions). As examples, the modified Lü chaotic system, the Chen chaotic system and the hyperchaotic Chen system are discussed. Numerical simulations are given to show the effectiveness of these methods.
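Projective synchronization by full active control can be illustrated on two Lorenz systems, a textbook example rather than the paper's modified Lü system: the controller cancels the response dynamics and imposes linear decay of the error e = y − αx.

```python
import numpy as np

# Lorenz vector field.
def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Projective synchronization: drive x(t), response y(t), target y -> alpha*x.
# Active controller u = alpha*f(x) - f(y) - K*(y - alpha*x) makes the error
# obey e' = -K e, so e decays exponentially regardless of the chaos.
alpha, K, dt = 2.0, 5.0, 1e-3
x = np.array([1.0, 1.0, 1.0])
y = np.array([-3.0, 4.0, 7.0])
for _ in range(20000):                   # 20 s of Euler integration
    fx, fy = lorenz(x), lorenz(y)
    u = alpha * fx - fy - K * (y - alpha * x)
    x = x + dt * fx
    y = y + dt * (fy + u)

err = np.linalg.norm(y - alpha * x)
```

With the scaling factor α = 1 this reduces to complete synchronization and with α = −1 to anti-synchronization, mirroring the special cases listed in the abstract.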
Juan J. Cuadrado Gallego; Daniel Rodríguez; Miguel (A)ngel Sicilia; Miguel Garre Rubio; Angel García Crespo
2007-01-01
Parametric software effort estimation models usually consist of only a single mathematical relationship. With the advent of software repositories containing data from heterogeneous projects, these types of models suffer from poor adjustment and predictive accuracy. One possible way to alleviate this problem is the use of a set of mathematical equations obtained by dividing the historical project datasets according to different parameters into subdatasets called partitions. In turn, partitions are divided into clusters that serve as the basis for more accurate models. In this paper, we describe the process, tool and results of such an approach through a case study using a publicly available repository, ISBSG. Results suggest the adequacy of the technique as an extension of existing single-expression models, without making the estimation process much more complex than one using a single estimation model. A tool to support the process is also presented.
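The partition-then-cluster idea can be sketched with a tiny 2-means partition of a synthetic repository and one log-log effort equation (effort = a · size^b) per cluster. The data, cluster count, and model form are illustrative, not ISBSG.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "repository": two project populations with different
# productivity regimes (effort = a * size^b with log-normal noise).
size1 = rng.uniform(50, 200, 100)
size2 = rng.uniform(500, 2000, 100)
eff1 = 2.0 * size1 ** 0.9 * np.exp(rng.normal(0, 0.1, 100))
eff2 = 0.5 * size2 ** 1.2 * np.exp(rng.normal(0, 0.1, 100))
size = np.concatenate([size1, size2])
effort = np.concatenate([eff1, eff2])

# Partition the log-size axis with 2-means (Lloyd's algorithm)...
ls = np.log(size)
centers = np.array([ls.min(), ls.max()])
for _ in range(100):
    labels = np.argmin(np.abs(ls[:, None] - centers[None, :]), axis=1)
    centers = np.array([ls[labels == k].mean() for k in (0, 1)])

# ...then fit one log-log model per cluster instead of a single equation.
models = []
for k in (0, 1):
    A = np.column_stack([np.ones(int(np.sum(labels == k))), ls[labels == k]])
    coef = np.linalg.lstsq(A, np.log(effort[labels == k]), rcond=None)[0]
    models.append(coef)   # [log a, b] for each cluster
```

A single pooled regression would average the two exponents and misestimate both regimes, which is the "poor adjustment" the abstract refers to.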
A fuzzy neural network to estimate at completion costs of construction projects
Morteza Bagherpour
2012-04-01
Full Text Available In construction cost management systems, earned value management (EVM) is normally applied as an efficient control approach for both status detection and estimate at completion (EAC) cost forecasting. Traditional approaches to EAC prediction normally extend the current situation of a project into the future by employing a previous performance factor. The proposed approach of this paper considers both qualitative and quantitative factors affecting the EAC prediction. It not only estimates the cost at completion of the project, but can also generate accurate forecasts for entire future periods using a fuzzy neural network model. The model is implemented for a real-world case study and yields encouraging preliminary results.
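The classical EVM quantities that such EAC forecasts extend are worth recalling. These are the standard textbook formulas, not the paper's fuzzy neural model; the monetary figures are arbitrary.

```python
# Classical earned value management (EVM) forecasting, the baseline that the
# paper's fuzzy neural model extends. All monetary units are arbitrary.
BAC = 1000.0   # budget at completion
PV = 400.0     # planned value to date
EV = 350.0     # earned value to date
AC = 420.0     # actual cost to date

CPI = EV / AC                 # cost performance index (here ~0.833)
SPI = EV / PV                 # schedule performance index (here 0.875)
ETC = (BAC - EV) / CPI        # estimate to complete, assuming past
                              # cost performance persists
EAC = AC + ETC                # estimate at completion (here 1200.0)
```

The "previous performance factor" in the abstract is exactly the CPI (or a CPI·SPI blend) used to scale the remaining work; the paper replaces this fixed factor with a learned, fuzzy one.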
Using Intelligent Techniques in Construction Project Cost Estimation: 10-Year Survey
Abdelrahman Osman Elfaki
2014-01-01
Full Text Available Cost estimation is the most important preliminary process in any construction project. Therefore, construction cost estimation has the lion’s share of the research effort in construction management. In this paper, we have analysed and studied proposals for construction cost estimation for the last 10 years. To implement this survey, we have proposed and applied a methodology that consists of two parts. The first part concerns data collection, for which we have chosen special journals as sources for the surveyed proposals. The second part concerns the analysis of the proposals. To analyse each proposal, the following four questions have been set. Which intelligent technique is used? How have data been collected? How are the results validated? And which construction cost estimation factors have been used? From the results of this survey, two main contributions have been produced. The first contribution is the defining of the research gap in this area, which has not been fully covered by previous proposals of construction cost estimation. The second contribution of this survey is the proposal and highlighting of future directions for forthcoming proposals, aimed ultimately at finding the optimal construction cost estimation. Moreover, we consider the second part of our methodology as one of our contributions in this paper. This methodology has been proposed as a standard benchmark for construction cost estimation proposals.
Estimating customer electricity savings from projects installed by the U.S. ESCO industry
Carvallo, Juan Pablo [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Larsen, Peter H. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Goldman, Charles A. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)
2014-11-25
The U.S. energy service company (ESCO) industry has a well-established track record of delivering substantial energy and dollar savings in the public and institutional facilities sector, typically through the use of energy savings performance contracts (ESPC) (Larsen et al. 2012; Goldman et al. 2005; Hopper et al. 2005, Stuart et al. 2013). This ~$6.4 billion industry, which is expected to grow significantly over the next five years, may play an important role in achieving demand-side energy efficiency under local/state/federal environmental policy goals. To date, there has been little or no research in the public domain to estimate electricity savings for the entire U.S. ESCO industry. Estimating these savings levels is a foundational step in order to determine total avoided greenhouse gas (GHG) emissions from demand-side energy efficiency measures installed by U.S. ESCOs. We introduce a method to estimate the total amount of electricity saved by projects implemented by the U.S. ESCO industry using the Lawrence Berkeley National Laboratory (LBNL) /National Association of Energy Service Companies (NAESCO) database of projects and LBNL’s biennial industry survey. We report two metrics: incremental electricity savings and savings from ESCO projects that are active in a given year (e.g., 2012). Overall, we estimate that in 2012 active U.S. ESCO industry projects generated about 34 TWh of electricity savings—15 TWh of these electricity savings were for MUSH market customers who did not rely on utility customer-funded energy efficiency programs (see Figure 1). This analysis shows that almost two-thirds of 2012 electricity savings in municipal, local and state government facilities, universities/colleges, K-12 schools, and healthcare facilities (i.e., the so-called “MUSH” market) were not supported by a utility customer-funded energy efficiency program.
Special Issue On Estimation Of Baselines And Leakage In CarbonMitigation Forestry Projects
Sathaye, Jayant A.; Andrasko, Kenneth
2006-06-01
There is a growing acceptance that the environmental benefits of forests extend beyond traditional ecological benefits and include the mitigation of climate change. Interest in forestry mitigation activities has led to the inclusion of forestry practices at the project level in international agreements. Climate change activities place new demands on participating institutions to set baselines, establish additionality, determine leakage, ensure permanence, and monitor and verify a project's greenhouse gas benefits. These issues are common to both forestry and other types of mitigation projects. They demand empirical evidence to establish conditions under which such projects can provide sustained long term global benefits. This Special Issue reports on papers that experiment with a range of approaches based on empirical evidence for the setting of baselines and estimation of leakage in projects in developing Asia and Latin America.
2007-01-01
Full Text Available Climate data for studies within the SWURVE (Sustainable Water: Uncertainty, Risk and Vulnerability in Europe) project, assessing the risk posed by future climatic change to various hydrological and hydraulic systems, were obtained from the regional climate model HadRM3H, developed at the Hadley Centre of the UK Met Office. This paper gives some background to HadRM3H; it also presents anomaly maps of the projected future changes in European temperature, rainfall and potential evapotranspiration (PET), estimated using a variant of the Penman formula. The future simulations of temperature and rainfall, following the SRES A2 emissions scenario, suggest that most of Europe will experience warming in all seasons, with heavier precipitation in winter in much of western Europe (except for central and northern parts of the Scandinavian mountains) and drier summers in most parts of western and central Europe (except for the north-west and the eastern part of the Baltic Sea). Particularly large temperature anomalies (>6°C) are projected for north-east Europe in winter and for southern Europe, Asia Minor and parts of Russia in summer. The projected PET displayed very large increases in summer for a region extending from southern France to Russia. The unrealistically large values could be the result of an enhanced hydrological cycle in HadRM3H, affecting several of the input parameters to the PET calculation. To avoid problems with hydrological modelling schemes, PET was re-calculated using empirical relationships derived from observational values of temperature and PET.
Salling, Kim Bang; Leleur, Steen
2014-01-01
For decades researchers have claimed that demand forecasts and construction cost estimations in particular are affected by a large degree of uncertainty. Articles, research documents and reports broadly agree that there exists a tendency towards underestimating the costs and overestimating the demand for transport infrastructure projects. It is therefore claimed that ex-ante evaluations of transport-related projects are often based on inaccurate material, which ultimately can lead to severe socio-economic misperformance. This paper seeks to bridge the gap between the inaccuracies in demand and cost estimations and hence the evaluation of transport infrastructure projects. Currently, research within this area is scarce and scattered, with no common agreement on how to embed and operationalise the huge amount of empirical data that exists within the frame of Optimism Bias. Therefore…
Skirbekk, Vegard; Potancoková, Michaela; Hackett, Conrad; Stonawski, Marcin
2016-11-09
The religious landscape of older adults around the world is changing profoundly. Yet until now, no study has chronicled these changes or compared expected aging patterns of religious groups. Differential aging among religious groups can have important economic and social consequences. This study estimates and projects the future religious composition by age at the global and regional levels. This study presents estimates of age structures by religion for 2010 and projections until 2050. It is based on analyses of more than 2,500 censuses, registers, and surveys from 198 countries. Regional and global results are the aggregate of demographic projections carried out at the country level. In 2010, Muslims were least likely to be aged 60 or older (7% of all Muslims), and Jews were most likely to be in this age group (20% of all Jews). By 2050, we project that Buddhists and the religiously unaffiliated will have the oldest populations (both will have 32% above the age of 60), whereas Muslims will remain the youngest religious group (with only 16% above the age of 60). Christians will, globally, age relatively slowly, from 14% to 21% above the age of 60 from 2010 to 2050. The religious landscape among the world's seniors will change fundamentally in the coming years, due to the combination of rapid aging among the religiously unaffiliated and Buddhist populations and the persistence of relatively young age structures among Muslims and Christians, which are the dominant religions in Africa.
Tomáš Bayer
2017-03-01
Full Text Available Modern techniques for map analysis allow the creation of a full or partial geometric reconstruction of its content. The projection is described by a set of estimated constant values: transformed pole position, standard parallel latitude, longitude of the central meridian, and a constant parameter. Analogously, the analyzed map is represented by its constant values: auxiliary sphere radius, origin shifts, and angle of rotation. Several new methods, denoted M6-M9, for the estimation of an unknown map projection and its parameters have been developed, differing in the number of determined parameters, reliability, robustness, and convergence; their computational demands are similar. Instead of directly measuring the dissimilarity of two projections, the analyzed map in an unknown projection and the image of the sphere in a well-known (i.e., analyzed) projection are compared. Several distance functions for the similarity measurements, based on location as well as shape similarity approaches, are proposed. An unconstrained global optimization problem, poorly scaled and with large residuals, for the vector of unknown parameters is solved by the hybrid BFGS method. To avoid a slower convergence rate for small-residual problems, it has the ability to switch between first- and second-order methods. Such an analysis is beneficial and interesting for historical, old, or current maps without information about the projection. Its importance lies primarily in the refinement of spatial georeferencing for medium- and small-scale maps, analysis of the knowledge about the former world, analysis of incorrectly or inaccurately drawn regions, and appropriate cataloging of maps. The proposed algorithms have been implemented in the new version of the detectproj software.
Generalized Projective Synchronization of Fractional Order Chaotic Systems with Different Dimensions
WANG Sha; YU Yong-Guang
2012-01-01
The generalized projective synchronization of different dimensional fractional order chaotic systems is investigated. According to the stability theory of linear fractional order systems, a sufficient condition to realize synchronization is obtained. The fractional order chaotic and hyperchaotic systems are applied to achieve synchronization in both reduced and increased dimensions. The corresponding numerical results coincide with the theoretical analysis.
Generalized Projective Synchronization between Two Complex Networks with Time-Varying Coupling Delay
SUN Mei; ZENG Chang-Yan; TIAN Li-Xin
2009-01-01
Generalized projective synchronization (GPS) between two complex networks with time-varying coupling delay is investigated. Based on the Lyapunov stability theory, a nonlinear controller and adaptive updated laws are designed. Feasibility of the proposed scheme is proven in theory. Moreover, two numerical examples are presented, using the energy resource system and Lü's system [Physica A 382 (2007) 672] as the nodes of the networks. GPS between two energy resource complex networks with time-varying coupling delay is achieved. This study can widen the application range of the generalized synchronization methods and will be instructive for the demand-supply of energy resource in some regions of China.
THE EFFECT OF PROJECTIONS ON BOTH MEASURES AND THE GENERALIZATION OF q-DIMENSION CAPACITY
Selmi, Bilel
2016-01-01
In this paper, we are concerned with the properties of the generalization of the L^q-spectrum relative to two Borel probability measures having the same compact support, and we are interested in the generalized q-dimensional Riesz capacity as well as in the study of their behavior under orthogonal projections.
Sun, Pengfei; Sun, Changku; Li, Wenqiang; Wang, Peng
2015-01-01
Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for an advanced pose estimation algorithm. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it is of high accuracy over the six degrees of freedom (DOF) of motion. It outperforms three other state-of-the-art algorithms in terms of accuracy in the comparison experiment.
Yi, Feng; Sun, Chao; Bai, Xiao-Hui
2012-11-01
A new signal-subspace high-resolution bearing estimation method based on the orthogonal projections technique is proposed in this paper. Firstly, the received data are processed step by step to form a set of basis vectors for the signal subspace, using an orthogonal projections algorithm that does not construct and eigen-decompose the covariance matrix. This procedure retains linear computational complexity and guarantees maximum signal energy in the spanned signal subspace. The algorithm then exploits the singular value decomposition of the matrix comprising the signal subspace and the modal subspace, which is also obtained from the received data, and the source bearings are estimated by detecting the intersection between the estimated signal subspace and the modal subspace. The computational complexity of the proposed method is compared to that of the subspace intersection method, and its performance is compared to that of conventional bearing estimation methods, including conventional beamforming (CBF) and minimum variance distortionless response beamforming (MVDR). The performance of the proposed method under different conditions, such as sensor number, sensor spacing, received signal-to-noise ratio (SNR), and snapshot number, is also investigated. Numerical simulation results in typical shallow water demonstrate the effectiveness of the proposed method.
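The conventional beamforming (CBF) baseline mentioned above can be sketched for a uniform linear array; the array geometry, source angle, and noise level below are illustrative. MVDR would use a(θ)ᴴR⁻¹a(θ) in place of the quadratic form shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Uniform linear array: M sensors at half-wavelength spacing, one
# narrowband source at 20 degrees plus white noise.
M, snapshots, theta_true = 16, 200, 20.0
d = 0.5                                   # element spacing in wavelengths

def steering(theta_deg):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

s = (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)) / np.sqrt(2)
noise = (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots))) * 0.1
X = np.outer(steering(theta_true), s) + noise

R = X @ X.conj().T / snapshots            # sample covariance matrix

# CBF spatial spectrum: scan the steering vector over a bearing grid and
# pick the peak of a(theta)^H R a(theta).
grid = np.arange(-90.0, 90.0, 0.1)
spectrum = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in grid])
theta_hat = grid[np.argmax(spectrum)]
```

CBF forms and scans the full covariance matrix, which is exactly the eigen-decomposition cost the paper's orthogonal-projections construction avoids.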
A general method for parameter estimation in light-response models.
Chen, Lei; Li, Zhong-Bin; Hui, Cang; Cheng, Xiaofei; Li, Bai-Lian; Shi, Pei-Jian
2016-06-13
Selecting appropriate initial values is critical for parameter estimation in nonlinear photosynthetic light response models. Failed convergence often occurs due to wrongly selected initial values when using currently available methods, especially local optimization methods. There are no reliable methods that can resolve the conundrum of selecting appropriate initial values. After comparing the performance of the Levenberg-Marquardt algorithm and three other algorithms for global optimization, we develop a general method for parameter estimation in four photosynthetic light response models, based on the use of Differential Evolution (DE). The new method was shown to successfully provide good fits (R² > 0.98) and robust parameter estimates for 42 datasets collected for 21 plant species under the same initial values. It suggests that the DE algorithm can efficiently resolve the high sensitivity to initial values encountered with local optimization methods. Therefore, the DE method can be applied to fit the light-response curves of various species without considering the initial values.
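A minimal DE/rand/1/bin implementation fitting one common light-response form (the rectangular hyperbola, here with synthetic data and illustrative parameter bounds) shows why no initial guess is needed: the search starts from a random population spread over box bounds rather than a single start point.

```python
import numpy as np

rng = np.random.default_rng(9)

# Rectangular-hyperbola light-response model (one common form):
#   Pn(I) = alpha*I*Pmax / (alpha*I + Pmax) - Rd
def model(I, p):
    alpha, pmax, rd = p
    return alpha * I * pmax / (alpha * I + pmax) - rd

I = np.array([0, 25, 50, 100, 200, 400, 600, 800, 1200, 1600, 2000.0])
p_true = np.array([0.05, 25.0, 1.5])
Pn = model(I, p_true) + rng.normal(0, 0.15, I.size)

def sse(p):
    return np.sum((Pn - model(I, p)) ** 2)

# Minimal Differential Evolution (DE/rand/1/bin) over box bounds.
lo = np.array([1e-4, 1.0, 0.0])
hi = np.array([0.2, 100.0, 10.0])
NP, F, CR = 40, 0.7, 0.9
pop = lo + rng.random((NP, 3)) * (hi - lo)
fit = np.array([sse(p) for p in pop])
for _ in range(300):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        trial = np.clip(a + F * (b - c), lo, hi)      # mutation
        cross = rng.random(3) < CR                    # binomial crossover
        cross[rng.integers(3)] = True                 # keep >= 1 mutant gene
        trial = np.where(cross, trial, pop[i])
        f = sse(trial)
        if f <= fit[i]:                               # greedy selection
            pop[i], fit[i] = trial, f

best = pop[np.argmin(fit)]
```

Because selection is greedy per individual, the population only ever improves, so the fit quality is monotone in the number of generations.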
Generalized Cross Entropy Method for estimating joint distribution from incomplete information
Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.
2016-07-01
Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology referred to as the "Generalized Cross Entropy Method" (GCEM) aimed at addressing this issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of the household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weights on the estimation of the joint distribution is explored.
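A closely related classical baseline, iterative proportional fitting (IPF), recovers a joint table consistent with given marginals as a minimum cross-entropy solution relative to a seed table. The sketch below uses toy marginals for two household attributes, not the paper's GCEM objective or its Singapore data.

```python
import numpy as np

# Target marginals for two household attributes (toy numbers): household
# size (1-4 persons) and dwelling type (A-C).
size_marginal = np.array([0.2, 0.3, 0.3, 0.2])
dwelling_marginal = np.array([0.5, 0.3, 0.2])

# Seed joint table encoding the prior interaction structure (here: uniform,
# i.e. no prior association between the attributes).
joint = np.ones((4, 3)) / 12.0

# Iterative proportional fitting: alternately rescale rows and columns
# until both marginals are matched.
for _ in range(200):
    joint *= (size_marginal / joint.sum(axis=1))[:, None]
    joint *= (dwelling_marginal / joint.sum(axis=0))[None, :]
```

With a uniform seed the fixed point is simply the independent product of the marginals; a non-uniform seed (e.g. from a survey sample) is what injects the interaction structure that a weighted-divergence objective like the GCEM generalizes.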
A general method for parameter estimation in light-response models
Chen, Lei; Li, Zhong-Bin; Hui, Cang; Cheng, Xiaofei; Li, Bai-Lian; Shi, Pei-Jian
2016-06-01
Selecting appropriate initial values is critical for parameter estimation in nonlinear photosynthetic light-response models. With currently available methods, especially local optimization, convergence often fails because of poorly chosen initial values, and there are no reliable methods for resolving this conundrum. After comparing the performance of the Levenberg–Marquardt algorithm with three global optimization algorithms, we developed a general method for parameter estimation in four photosynthetic light-response models based on Differential Evolution (DE). The new method provided good fits (R² > 0.98) and robust parameter estimates for 42 datasets collected from 21 plant species under the same initial values. This suggests that the DE algorithm can efficiently resolve the hyper-sensitivity to initial values that afflicts local optimization methods. The DE method can therefore be applied to fit the light-response curves of various species without regard to initial values.
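The approach can be illustrated with a minimal, self-contained DE optimizer fitting one common light-response form (a rectangular hyperbola with dark respiration). This is a sketch, not the authors' implementation; the model form, parameter bounds, and synthetic data are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def light_response(params, I):
    # One common light-response form (hypothetical choice here):
    # P = alpha * I * Pmax / (alpha * I + Pmax) - Rd
    alpha, pmax, rd = params
    return alpha * I * pmax / (alpha * I + pmax) - rd

def sse(params, I, P):
    return np.sum((light_response(params, I) - P) ** 2)

def differential_evolution(obj, bounds, pop=30, gens=300, F=0.7, CR=0.9):
    """Minimal DE/rand/1/bin optimizer: needs only box bounds per
    parameter, no initial guess."""
    lo, hi = np.array(bounds).T
    d = len(bounds)
    X = lo + rng.random((pop, d)) * (hi - lo)
    fit = np.array([obj(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.clip(np.where(rng.random(d) < CR,
                                     a + F * (b - c), X[i]), lo, hi)
            f = obj(trial)
            if f < fit[i]:
                X[i], fit[i] = trial, f
    return X[np.argmin(fit)], fit.min()

# synthetic "measured" light-response curve with noise
I = np.linspace(0, 2000, 40)
true = (0.05, 25.0, 1.5)
P = light_response(true, I) + rng.normal(0, 0.1, I.size)

best, err = differential_evolution(lambda x: sse(x, I, P),
                                   bounds=[(1e-4, 1.0), (1.0, 60.0), (0.0, 5.0)])
```

Because DE searches the whole bounded box, the same bounds can be reused across species, which is the practical point made in the abstract.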
Time of Arrival Estimation in Probability-Controlled Generalized CDMA Systems
Hagit Messer
2007-11-01
Full Text Available In recent years, more and more wireless communication systems are required to also provide a positioning measurement. In code division multiple access (CDMA) communication systems, positioning accuracy is significantly degraded by the multiple access interference (MAI) caused by other users in the system. This MAI is commonly managed by a power control mechanism, yet it still has a major effect on positioning accuracy. Probability control is a recently introduced interference management mechanism in which a user with excess power chooses not to transmit some of its symbols. The information in the nontransmitted symbols is recovered by an error-correcting code (ECC), while all other users receive more reliable data during these quiet periods. Previous research has shown that implementing a probability control mechanism can significantly reduce the MAI. In this paper, we show that probability control also improves positioning accuracy. We focus on time-of-arrival (TOA) based positioning systems. We analyze TOA estimation performance in a generalized CDMA system employing the probability control mechanism, where the transmitted signal is noncontinuous with a symbol transmission probability smaller than 1. The accuracy of the TOA estimation is determined using appropriate modifications of the Cramer-Rao bound on the delay estimation. Keeping the average transmission power constant, we show that the TOA accuracy of each user does not depend on its own transmission probability, while being a nondecreasing function of the transmission probability of any other user. Therefore, a generalized, noncontinuous CDMA system with a probability control mechanism can always achieve better positioning performance, for all users in the network, than a conventional, continuous CDMA system.
Theocharis Theofanidis
2016-01-01
Full Text Available Real hypersurfaces satisfying the condition ϕl = lϕ (l = R(·,ξ)ξ) have been studied by many authors under at least one additional condition, since the class of these hypersurfaces is quite difficult to classify. The aim of the present paper is the classification of real hypersurfaces in the complex projective plane CP2 satisfying a generalization of ϕl = lϕ under an additional restriction on a specific function.
Channel Selection and Feature Projection for Cognitive Load Estimation Using Ambulatory EEG
Tian Lan
2007-01-01
Full Text Available We present an ambulatory cognitive state classification system to assess the subject's mental load based on EEG measurements. The ambulatory cognitive state estimator is utilized in the context of a real-time augmented cognition (AugCog) system that aims to enhance the cognitive performance of a human user through computer-mediated assistance based on assessments of cognitive states using physiological signals including, but not limited to, EEG. This paper focuses particularly on the offline channel selection and feature projection phases of the design and aims to present mutual-information-based techniques that use a simple sample estimator for this quantity. Analyses conducted on data collected from 3 subjects performing 2 tasks (n-back/Larson) at 2 difficulty levels (low/high) demonstrate that the proposed mutual-information-based dimensionality reduction scheme can achieve up to 94% cognitive load estimation accuracy.
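The "simple sample estimator" of mutual information mentioned above can be sketched with a histogram-based estimate used to rank channels by how informative they are about the load label. The two synthetic "channels" below are hypothetical stand-ins for EEG features, not data from the study:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (sample) estimator of mutual information between a
    continuous feature x and discrete labels y, in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins, len(np.unique(y))))
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                                   # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 1000)                  # low/high load labels
informative = labels + rng.normal(0, 0.5, 1000)    # channel tracking the label
noise = rng.normal(0, 1, 1000)                     # irrelevant channel

scores = {"ch1": mutual_information(informative, labels),
          "ch2": mutual_information(noise, labels)}
```

Ranking channels by such scores and keeping the top few is the simplest form of mutual-information-based channel selection.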
Estimating landslide losses - preliminary results of a seven-State pilot project
Highland, Lynn M.
2006-01-01
reliable information on economic losses associated with landslides. Each State survey examined the availability, distribution, and inherent uncertainties of economic loss data in their study areas. Their results provide the basis for identifying the most fruitful methods of collecting landslide loss data nationally, using methods that are consistent and provide common goals. These results can enhance and establish the future directions of scientific investigation priorities by convincingly documenting landslide risks and consequences that are universal throughout the 50 States. This report is organized as follows: A general summary of the pilot project history, goals, and preliminary conclusions from the Lincoln, Neb. workshop are presented first. Internet links are then provided for each State report, which appear on the internet in PDF format and which have been placed at the end of this open-file report. A reference section follows the reports, and, lastly, an Appendix of categories of landslide loss and sources of loss information is included for the reader's information. Please note: The Oregon Geological Survey has also submitted a preliminary report on indirect loss estimation methodology, which is also linked with the others. Each State report is unique and presented in the form in which it was submitted, having been independently peer reviewed by each respective State survey. As such, no universal 'style' or format has been adopted as there have been no decisions on which inventory methods will be recommended to the 50 states, as of this writing. The reports are presented here as information for decision makers, and for the record; although several reports provide recommendations on inventory methods that could be adopted nationwide, currently no decisions have been made on adopting a uniform methodology for the States.
Brekke, L.D.; Dettinger, M.D.; Maurer, E.P.; Anderson, M.
2008-01-01
Ensembles of historical climate simulations and climate projections from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3) multi-model dataset were investigated to determine how model credibility affects apparent relative scenario likelihoods in regional risk assessments. Methods were developed and applied in a Northern California case study. An ensemble of 59 twentieth century climate simulations from 17 WCRP CMIP3 models was analyzed to evaluate relative model credibility associated with a 75-member projection ensemble from the same 17 models. Credibility was assessed based on how realistically models reproduced selected statistics of historical climate relevant to California climatology. Metrics of this credibility were used to derive relative model weights leading to weight-threshold culling of models contributing to the projection ensemble. Density functions were then estimated for two projected quantities (temperature and precipitation), with and without considering credibility-based ensemble reductions. An analysis for Northern California showed that, while some models seem more capable at recreating limited aspects of twentieth century climate, the overall tendency is toward comparable model performance when several credibility measures are combined. Use of these metrics to decide which models to include in density function development led to local adjustments to function shapes, but had limited effect on breadth and central tendency, which were found to be more influenced by 'completeness' of the original ensemble in terms of models and emissions pathways. © 2007 Springer Science+Business Media B.V.
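The credibility-weighting and weight-threshold culling steps can be sketched as follows; all skill scores, projected values, and the cutoff are hypothetical numbers for illustration, not values from the study:

```python
import numpy as np

# skill of each model at reproducing a historical statistic
# (hypothetical RMSE per model; lower error = more credible)
hist_error = np.array([0.2, 0.5, 1.5, 0.3])
weights = 1.0 / hist_error**2
weights /= weights.sum()              # relative model credibility

# each model's projected warming in deg C (hypothetical values)
projections = np.array([1.8, 2.4, 3.5, 2.0])

unweighted = projections.mean()       # every model counts equally
weighted = weights @ projections      # credibility-weighted estimate

# weight-threshold culling: drop models below a credibility cutoff
keep = weights > 0.1
culled = projections[keep].mean()
```

Here the least credible model also projects the most warming, so the weighted and culled estimates fall below the plain ensemble mean; with real ensembles the direction of the shift depends entirely on which models are down-weighted.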
Experimental Test of the State Estimation-Reversal Tradeoff Relation in General Quantum Measurements
Geng Chen
2014-06-01
Full Text Available When a measurement has limited strength, only partial information regarding the initial state is extracted, and, correspondingly, there is a probability of reversing its effect on the system and retrieving the original state. Recently, a clear and direct quantitative description of this complementary relationship, in terms of a tradeoff relation, was developed by Y. K. Cheong and S. W. Lee [Phys. Rev. Lett. 109, 150402 (2012)]. Here, this tradeoff relation is experimentally verified using polarization-encoded single photons from a quantum dot. Measurement operators representing the complete range, from not affecting the system to a projection onto a single polarization state, are realized. In addition, for each measurement operator, an optimal reversal operator is also implemented. The upper bound of the tradeoff relation is mapped to experimental parameters representing the measurement strength. Our results complement the theoretical work and provide a hands-on characterization of general quantum measurements.
Budic, Lara; Didenko, Gregor; Dormann, Carsten F
2016-01-01
In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias in species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. Progressively smaller and more numerous cells at increasing latitudes influenced the importance of parameters in models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
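The area weighting discussed above follows from spherical geometry: a long-lat cell's area is proportional to the cosine of its latitude, so a cos-latitude model weight compensates for the distortion. A minimal sketch of such weights:

```python
import math

def cell_area_weight(lat_deg):
    """Relative area of a 1x1 degree long-lat cell: proportional to the
    cosine of latitude (exact for a sphere, to first order in cell size)."""
    return math.cos(math.radians(lat_deg))

# a degree cell at 60N covers half the area of one at the equator,
# so it should carry half the weight when fitting the model
w_eq, w_60 = cell_area_weight(0.0), cell_area_weight(60.0)
```

In practice these values would be passed as observation weights to whatever regression routine fits the species distribution model.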
Nikoloulopoulos, Aristidis K
2016-06-30
The method of generalized estimating equations (GEE) is popular in the biostatistics literature for analyzing longitudinal binary and count data. It assumes a generalized linear model for the outcome variable and a working correlation among repeated measurements. In this paper, we introduce a viable competitor: the weighted scores method for generalized linear model margins. We weight the univariate score equations using a working discretized multivariate normal model that is a proper multivariate model. Because the weighted scores method is a parametric method based on likelihood, we propose composite likelihood information criteria as an intermediate step for model selection. The same criteria can be used for both correlation structure and variable selection. Simulation studies and the application example show that our method outperforms other existing model selection methods in GEE. From the example, it can be seen that our methods not only improve on GEE in terms of interpretability and efficiency but also can change the inferential conclusions with respect to GEE. Copyright © 2016 John Wiley & Sons, Ltd.
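With an independence working correlation, a GEE fit for binary outcomes reduces to solving the ordinary logistic score equations by Newton's method. The sketch below shows only that simplest special case on synthetic data (it is not the weighted scores method itself, and ignores cluster structure):

```python
import numpy as np

def solve_score_equations(X, y, iters=25):
    """Solve the logistic score equations  X'(y - mu(beta)) = 0  by
    Newton's method. With an independence working correlation this is
    the simplest special case of a GEE fit."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        W = mu * (1 - mu)                      # GLM variance function
        score = X.T @ (y - mu)
        info = X.T @ (X * W[:, None])          # expected information
        beta += np.linalg.solve(info, score)
    return beta

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

beta_hat = solve_score_equations(X, y)
```

Full GEE generalizes this by inserting a working correlation matrix into the estimating equations; the weighted scores method of the abstract instead weights the score equations by a proper multivariate model.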
A Novel Method Based on Oblique Projection Technology for Mixed Sources Estimation
Weijian Si
2014-01-01
Full Text Available Reducing the computational complexity of localization algorithms for near-field and far-field sources is a serious problem in array signal processing. A novel algorithm for mixed-source location estimation based on oblique projection is proposed in this paper. The sources are estimated in two separate stages, and the sensor noise power is estimated and eliminated from the covariance matrix, which improves the accuracy of the mixed-source estimates. Using the idea of compression, the range information of near-field sources is obtained by searching a partial area instead of the whole Fresnel region, which reduces processing time. Compared with traditional algorithms, the proposed algorithm has lower computational complexity and can resolve two closely spaced sources with high resolution and accuracy. Duplication of the range estimation is also avoided. Finally, simulation results are provided to demonstrate the performance of the proposed method.
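The oblique projection operator underlying such mixed-source methods passes one signal subspace untouched while nulling another. A minimal numpy sketch, with random matrices standing in for the (hypothetical) near-field and far-field steering matrices:

```python
import numpy as np

def oblique_projector(A, B):
    """Oblique projection onto range(A) along range(B):
    E_AB = A (A^H P A)^{-1} A^H P,  with P the orthogonal projector
    onto the complement of range(B)."""
    Pb = B @ np.linalg.pinv(B)                    # projector onto range(B)
    P = np.eye(B.shape[0]) - Pb                   # complement of range(B)
    return A @ np.linalg.inv(A.conj().T @ P @ A) @ A.conj().T @ P

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 2))   # stand-in for near-field steering vectors
B = rng.normal(size=(6, 2))   # stand-in for far-field steering vectors
E = oblique_projector(A, B)
```

Applying E to array snapshots suppresses the far-field components while leaving the near-field components intact, which is what lets the two source types be separated and estimated in different stages.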
Tagade, Piyush; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin
2017-03-01
A novel approach for integrating a pseudo-two-dimensional electrochemical thermal (P2D-ECT) model and a data assimilation algorithm is presented for lithium-ion cell state estimation. This approach refrains from making any simplifications in the P2D-ECT model while making it amenable to online state estimation. Though the model is deterministic, uncertainty in the initial states induces stochasticity in the P2D-ECT model. This stochasticity is resolved by spectrally projecting the stochastic P2D-ECT model onto a set of orthogonal multivariate Hermite polynomials. Volume averaging in the stochastic dimensions is proposed for efficient numerical solution of the resultant model. A state estimation framework is developed using a transformation of the orthogonal basis to assimilate the measurables with this system of equations. The effectiveness of the proposed method is first demonstrated by assimilating cell voltage and temperature data generated using a synthetic test bed. The validated method is then used with experimentally observed cell voltage and temperature data for state estimation at different operating conditions and drive cycle protocols. The results show increased prediction accuracy when the data is assimilated every 30 s. The high accuracy of the estimated states is exploited to infer the temperature-dependent behavior of the lithium-ion cell.
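The spectral projection step can be illustrated in one stochastic dimension: expanding a function of a standard normal variable in probabilists' Hermite polynomials via Gauss-Hermite quadrature. This is a generic sketch of polynomial chaos projection, not the multivariate P2D-ECT expansion itself:

```python
import math

import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coefficients(f, order, quad_pts=40):
    """Galerkin projection of f(xi), xi ~ N(0,1), onto probabilists'
    Hermite polynomials: c_k = E[f(xi) He_k(xi)] / k!, using the
    orthogonality relation E[He_j He_k] = k! delta_jk."""
    x, w = He.hermegauss(quad_pts)        # nodes/weights for weight e^{-x^2/2}
    w = w / np.sqrt(2 * np.pi)            # normalize to the standard normal pdf
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0] * k + [1]))
                     / math.factorial(k) for k in range(order + 1)])

# example: f(xi) = exp(xi) has the known expansion c_k = exp(1/2) / k!
c = pce_coefficients(np.exp, order=4)
```

Truncating the expansion at a modest order turns the stochastic model into a small deterministic system for the coefficients, which is what makes the approach amenable to online estimation.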
Russell, E.W.; Clarke, W. [Lawrence Livermore National Lab., CA (United States); Domian, H.A. [Babcock and Wilcox Co., Lynchburg, VA (United States); Madson, A.A. [Kaiser Engineers California Corp., Oakland, CA (United States)
1991-08-01
This report summarizes the bottoms-up cost estimates for fabrication of high-level radioactive waste disposal containers based on the Site Characterization Plan Conceptual Design (SCP-CD). These estimates were acquired by Babcock and Wilcox (B&W) under subcontract to Lawrence Livermore National Laboratory (LLNL) for the Yucca Mountain Site Characterization Project (YMP). The estimates were obtained for the two leading container candidate materials (Alloy 825 and CDA 715) from three other vendors who were selected from a list of twenty solicited. Three types of container designs were analyzed, representing containers for spent fuel and for vitrified high-level waste (HLW). The container internal structures were assumed to be AISI-304 stainless steel in all cases, with an annual production rate of 750 containers. Subjective techniques were used for estimating QA/QC costs based on vendor experience and the specifications derived for the LLNL-YMP Quality Assurance program. In addition, an independent QA/QC analysis prepared by Kaiser Engineering is reported. Based on the cost estimates developed, LLNL recommends that values of $825K and $62K be used for the 1991 TSLCC for the spent fuel and HLW containers, respectively. These numbers represent the most conservative among the three vendors, and are for the high-nickel austenitic steel (Alloy 825). 6 refs., 7 figs.
FEATURES OF AN ESTIMATION OF INVESTMENT PROJECTS AT THE ENTERPRISES OF AVIATION INSTRUMENT
Petr P. Dobrov
2016-01-01
Full Text Available The relevance of this study stems from the fact that the current situation in Russia is complicated by the negative effects of market reforms in the economy and by the economic sanctions adopted against the country and, in particular, against companies at various levels. Accordingly, for effective management of the activities and development of aviation instrument-making companies and enterprises of different ownership forms, issues related to the assessment of investment projects are highly relevant. The general crisis that engulfed almost all industry in Russia demanded a new ideology for organizing and managing investment projects, as well as for their assessment at aviation instrument-making enterprises. Russia has begun a new stage in the development of project management: the establishment of a domestic methodology, complex tools, and training for professional project management on the basis of domestic achievements, global experience, and their creative adaptation to the actual conditions of the country. The need for project management methodology in Russia is determined by two factors: the increasing complexity of projects and of the organizations that carry them out, and the fact that project management is widely used in countries with market economies. Projects at aviation instrument-making enterprises, and their evaluation, are characterized by complexity and uncertainty and by a significant dependence on a dynamic environment, including socio-economic, political, financial, economic, and legislative influences of both the state and competing companies. This paper studies modern methods of evaluating investment projects at aviation instrument-making enterprises. Methodology. The methodological basis of this paper comprises comparative and economic-mathematical analysis methods. Results. In the course of this article, the author found that the activity of modern companies is not linear and is
The Object Projection Feature Estimation Problem in Unsupervised Markerless 3D Motion Tracking
Quesada, Luis
2011-01-01
3D motion tracking is a critical task in many computer vision applications. Existing 3D motion tracking techniques require either a great amount of knowledge about the target object or specific hardware. These requirements discourage the widespread commercial adoption of 3D motion tracking. 3D motion tracking systems that require no knowledge of the target object and run on a single low-budget camera must estimate the object projection features (namely, area and position). In this paper, we define the object projection feature estimation problem and present a novel 3D motion tracking system that needs no knowledge of the target object and requires only a single low-budget camera, as installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a non-modeled unmarked object that may be non-rigid, non-convex, partially occluded, self-occluded, or motion-blurred, given that it is opaque, evenly colored, and enough contrasting with t...
Anthropogenic heat flux estimation from space: results of the first phase of the URBANFLUXES project
Chrysoulakis, Nektarios; Marconcini, Mattia; Gastellu-Etchegorry, Jean-Philippe; Grimmond, C. S. B.; Feigenwinter, Christian; Lindberg, Fredrik; Del Frate, Fabio; Klostermann, Judith; Mitraka, Zina; Esch, Thomas; Landier, Lucas; Gabey, Andy; Parlow, Eberhard; Olofson, Frans
2016-10-01
H2020-Space project URBANFLUXES (URBan ANthropogenic heat FLUX from Earth observation Satellites) investigates the potential of the Copernicus Sentinels to retrieve the anthropogenic heat flux, a key component of the Urban Energy Budget (UEB). URBANFLUXES advances the current knowledge of the impacts of UEB fluxes on the urban heat island and consequently on energy consumption in cities. This will lead to the development of tools and strategies to mitigate these effects, improving thermal comfort and energy efficiency. In URBANFLUXES, the anthropogenic heat flux is estimated as a residual of the UEB. Therefore, the remaining UEB components, namely the net all-wave radiation, the net change in heat storage, and the turbulent sensible and latent heat fluxes, are independently estimated from Earth Observation (EO), whereas the advection term is included in the error of the anthropogenic heat flux estimation from the UEB closure. The project exploits Sentinel observations, which provide improved data quality, coverage and revisit times and increase the value of EO data for scientific work and future emerging applications. These observations can reveal novel scientific insights into the detection and monitoring of the spatial distribution of urban energy budget fluxes in cities, thereby generating new EO opportunities. URBANFLUXES thus exploits the European capacity for space-borne observations to enable the development of operational services in the field of urban environmental monitoring and energy efficiency in cities.
Klusák J.
2009-12-01
Full Text Available The study of bi-material notches has become a topical problem, as they can efficiently model geometrical or material discontinuities. When assessing crack initiation conditions in bi-material notches, the generalized stress intensity factors H have to be calculated. In contrast to the determination of the K-factor for a crack in an isotropic homogeneous medium, no procedure for ascertaining the H-factor is incorporated in standard calculation systems, and the calculation of these fracture parameters requires experience. Direct methods for estimating H-factors usually require choosing a length parameter that enters the calculation. On the other hand, the method combining the application of the reciprocal theorem (Ψ-integral) with FEM does not require any length parameter and is capable of extracting the near-tip information directly from the far-field deformation.
Shi Haixia; Xiu Jigang; Chen Zhangli; Wang Qincai; Hua Wei
2010-01-01
The Generalized Inversion Method has been used to estimate the spatial variation of site effects, using digital SH-wave data recorded by 63 stations in the Capital Circle Region of China from 2001 to 2006. We obtained the site effects of all stations participating in the calculation. We found that the site effect of rock sites was stable, at about 1.0 from 1.0 Hz to 10.0 Hz, while the site effect of deposit sites was high at low frequencies, about 3-7 from 1.0 Hz to 8.0 Hz, peaking at about 5.0 Hz and then falling as frequency increased. The results show that the shape and intensity of station site effects are mainly influenced by the lithology below the station, and possibly also by the local geological structure.
Cummins, Patrick F.; Masson, Diane; Saenko, Oleg A.
2016-06-01
The net heat uptake by the ocean in a changing climate involves small imbalances between the advective and diffusive processes that transport heat vertically. Generally, it is necessary to rely on global climate models to study these processes in detail. In the present study, it is shown that a key component of the vertical heat flux, namely that associated with the large-scale mean vertical circulation, can be diagnosed over extra-tropical regions from global observational data sets. This component is estimated based on the vertical velocity obtained from the geostrophic vorticity balance, combined with estimates of absolute geostrophic flow. Results are compared with the output of a non-eddy resolving, coupled atmosphere-ocean general circulation model. Reasonable agreement is found in the latitudinal distribution of the vertical heat flux, as well as in the area-integrated flux below about 250 m depth. The correspondence with the coupled model deteriorates sharply at depths shallower than 250 m due to the omission of equatorial regions from the calculation. The vertical heat flux due to the mean circulation is found to be dominated globally by the downward contribution from the Southern Hemisphere, in particular the Southern Ocean. This is driven by the Ekman vertical velocity which induces an upward transport of seawater that is cold relative to the horizontal average at a given depth. The results indicate that the dominant characteristics of the vertical transport of heat due to the mean circulation can be inferred from simple linear vorticity dynamics over much of the ocean.
Ana Calabrese
Full Text Available In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than an instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than the NRC method when responses to complex sounds are studied in these neurons.
Li, Teng; Hong, Daxiang
2016-04-01
From three interferograms, a novel algorithm for extracting phase shifts based on the vector projection of normalized difference maps is presented. Subtraction and vector normalization are applied successively to obtain two normalized interferogram differences free of the background component. The phase shift is then estimated from the analysis and calculation of the vector projection. Without any iteration or complex calculation, the algorithm can be applied for phase shifts distributed over nearly the full range from 0 to 2π, provided the interferograms contain more than one fringe. It offers a powerful tool for rapid calibration of phase shifts because of its high efficiency and easy implementation. Numerical simulations and experiments are performed to prove its validity.
Localization of deformable tumors from short-arc projections using Bayesian estimation
Hoegele, W.; Zygmanski, P.; Dobler, B.; Kroiss, M.; Koelbl, O.; Loeschel, R. [Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany) and Department of Computer Science and Mathematics, University of Applied Sciences, 93053 Regensburg (Germany); Department of Radiation Oncology, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States); Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany); Department of Radiation Oncology, Hospital of the Sisters of Mercy, 4010 Linz (Austria); Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany); Department of Computer Science and Mathematics, University of Applied Sciences, 93053 Regensburg (Germany)
2012-12-15
Purpose: The authors present a stochastic framework for radiotherapy patient positioning directly utilizing radiographic projections. This framework is developed to be robust against anatomical nonrigid deformations and to cope with challenging imaging scenarios involving only a few cone beam CT projections from short arcs. Methods: Specifically, a Bayesian estimator (BE) is explicitly derived for the given scanning geometry. This estimator is compared to reference methods such as chamfer matching (CM) and the minimization of the median absolute error, adapted as tools of robust image processing and statistics. To show the performance of the stochastic short-arc patient positioning method, a CIRS IMRT thorax phantom study is presented with movable markers, utilizing an Elekta Synergy® XVI system. Furthermore, a clinical prostate CBCT scan from a Varian® On-Board Imager® system is utilized to investigate the robustness of the method under large variations of image quality (anterior-posterior vs lateral views). Results: The results show that the BE shifts reduce the initial setup error of up to 3 cm down to at most 3 mm for an imaging arc as short as 10°, while CM achieves residual errors of at most 7 mm only for arcs longer than 40°. Furthermore, the BE can compensate robustly for low image quality by using several low-quality projections simultaneously. Conclusions: An estimation method for marker-based patient positioning from short imaging arcs is presented and shown to be robust and accurate for deformable anatomies.
Porter, K.A.; Jaiswal, K.S.; Wald, D.J.; Greene, M.; Comartin, Craig
2008-01-01
The U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) Project and the Earthquake Engineering Research Institute's World Housing Encyclopedia (WHE) are creating a global database of building stocks and their earthquake vulnerability. The WHE already represents a growing, community-developed public database of global housing and its detailed structural characteristics. It currently contains more than 135 reports on particular housing types in 40 countries. The WHE-PAGER effort extends the WHE in several ways: (1) by addressing non-residential construction; (2) by quantifying the prevalence of each building type in both rural and urban areas; (3) by addressing day and night occupancy patterns; (4) by adding quantitative vulnerability estimates from judgment or statistical observation; and (5) by analytically deriving alternative vulnerability estimates using in part laboratory testing.
Stein-Rule Estimation and Generalized Shrinkage Methods for Forecasting Using Many Predictors
Hillebrand, Eric Tobias; Lee, Tae-Hwy
We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased OLS estimator towards a restricted, biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree … If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence.
Compound Generalized Function Projective Synchronization for Fractional-Order Chaotic Systems
Chunde Yang
2016-01-01
Full Text Available A modified function projective synchronization for fractional-order chaotic systems, called compound generalized function projective synchronization (CGFPS), is proposed theoretically in this paper. There are one scaling-drive system, more than one base-drive system, and one response system in the CGFPS scheme, and the scaling function matrices come from the multiple drive systems. The proposed CGFPS technique is based on the stability theory of fractional-order systems. Moreover, we achieve CGFPS between three drive chaotic systems, that is, the fractional-order Arneodo chaotic system, the fractional-order Chen chaotic system, and the fractional-order Lu chaotic system, and one response chaotic system, that is, the fractional-order Lorenz chaotic system. Numerical experiments are presented to verify the effectiveness of the CGFPS scheme.
The Lick AGN Monitoring Project: Recalibrating Single-Epoch Virial Black Hole Mass Estimates
Park, Daeseong; Treu, Tommaso; Barth, Aaron J; Bentz, Misty C; Bennert, Vardha N; Canalizo, Gabriela; Filippenko, Alexei V; Gates, Elinor; Greene, Jenny E; Malkan, Matthew A; Walsh, Jonelle
2011-01-01
We investigate the calibration and uncertainties of black hole mass estimates based on the single-epoch (SE) method, using homogeneous and high-quality multi-epoch spectra obtained by the Lick Active Galactic Nucleus (AGN) Monitoring Project for 9 local Seyfert 1 galaxies with black hole masses < 10^8 M_sun. By decomposing the spectra into their AGN and stellar components, we study the variability of the single-epoch Hbeta line width (full width at half-maximum intensity, FWHM_Hbeta; or dispersion, sigma_Hbeta) and of the AGN continuum luminosity at 5100A (L_5100). From the distribution of the "virial products" (~ FWHM_Hbeta^2 L_5100^0.5 or sigma_Hbeta^2 L_5100^0.5) measured from SE spectra, we estimate the uncertainty due to the combined variability as ~ 0.05 dex (12%). This is subdominant with respect to the total uncertainty in SE mass estimates, which is dominated by uncertainties in the size-luminosity relation and virial coefficient, and is estimated to be ~ 0.46 dex (factor of ~ 3). By comparing the...
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are true matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold band (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects, where the number of record pairs produced may be very large, often running into the millions.
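The sampling estimator described above can be sketched in a few lines; the band structure and all numbers below are illustrative, not taken from the paper:

```python
def estimate_quality(bands):
    """Estimate linkage precision and recall from clerically reviewed samples.

    bands: list of (n_pairs, accepted, sampled_match_rate) tuples, one per
    similarity-score band, including bands below the acceptance cut-off.
    sampled_match_rate is the fraction of a random sample drawn from that
    band that clerical review judged to be true matches.
    """
    tp = fp = fn = 0.0
    for n, accepted, rate in bands:
        if accepted:
            tp += n * rate          # estimated true links among accepted pairs
            fp += n * (1 - rate)    # estimated false positives
        else:
            fn += n * rate          # true matches below the cut-off: false negatives
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative numbers: two accepted bands and one band below the cut-off
precision, recall = estimate_quality([(1000, True, 0.98),
                                      (500, True, 0.80),
                                      (2000, False, 0.02)])
```

Because reviewed samples stand in for full enumeration, the clerical workload stays fixed even when the candidate pair set runs into the millions.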
Critical analysis of the Hanford spent nuclear fuel project activity based cost estimate
Warren, R.N.
1998-09-29
In 1997, the SNFP developed a baseline change request (BCR) and submitted it to DOE-RL for approval. The schedule was formally evaluated to have a 19% probability of success [Williams, 1998]. In December 1997, DOE-RL Manager John Wagoner approved the BCR contingent upon a subsequent independent review of the new baseline. The SNFP took several actions during the first quarter of 1998 to prepare for the independent review. The project developed the Estimating Requirements and Implementation Guide [DESH, 1998] and trained cost account managers (CAMs) and other personnel involved in the estimating process in activity-based cost (ABC) estimating techniques. The SNFP then applied ABC estimating techniques to develop the basis for the December Baseline (DB) and documented that basis in Basis of Estimate (BOE) books. These BOEs were provided to DOE in April 1998. DOE commissioned Professional Analysis, Inc. (PAI) to perform a critical analysis (CA) of the DB. PAI's review formally began on April 13. PAI performed the CA, provided three sets of findings to the SNFP contractor, and initiated reconciliation meetings. During the course of PAI's review, DOE directed the SNFP to develop a new baseline with a higher probability of success. The contractor transmitted the new baseline, which is referred to as the High Probability Baseline (HPB), to DOE on April 15, 1998 [Williams, 1998]. The HPB was estimated to approach a 90% confidence level on the start of fuel movement [Williams, 1998]. This high probability resulted in an increased cost and a schedule extension. To implement the new baseline, the contractor initiated 26 BCRs with supporting BOEs. PAI's scope was revised on April 28 to add reviewing the HPB and the associated BCRs and BOEs.
Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko [Waseda Univ., Tokyo (Japan). School of Science and Engineering; Toyama, Hinako; Ishii, Kenji; Senda, Michio
2001-05-01
The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between 2D projection images of MRI and cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow for the young group were greater than those of the aged group. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex.
US Fish and Wildlife Service, Department of the Interior — This project will build on existing experience with statistical downscaling methods to derive comprehensive estimates of the future rainfall changes over the...
Junbiao Guan
2015-01-01
Full Text Available A new fractional-order chaotic system is addressed in this paper. By applying the continuous frequency distribution theory, the indirect Lyapunov stability of this system is investigated based on the sliding mode control technique. Adaptive laws are designed to guarantee the stability of the system under uncertainty and external disturbance. Moreover, the modified generalized projective synchronization (MGPS) of fractional-order chaotic systems is discussed based on the stability theory of fractional-order systems, which may provide potential applications in secure communication. Finally, some numerical simulations are presented to show the effectiveness of the theoretical results.
Xu, Lei
2004-07-01
The nature of Bayesian Ying-Yang harmony learning is reexamined from an information-theoretic perspective. Not only is its ability for model selection and regularization explained with new insights, but its relations to and differences from the studies of minimum description length (MDL), the Bayesian approach, the bits-back based MDL, the Akaike information criterion (AIC), maximum likelihood, information geometry, Helmholtz machines, and variational approximation are also discussed. Moreover, a generalized projection geometry is introduced for further understanding such a new mechanism. Furthermore, new algorithms are also developed for implementing Gaussian factor analysis (FA) and non-Gaussian factor analysis (NFA) such that the selection of appropriate factors is made automatically during parameter learning.
Generalized Projective Synchronization between Two Different Neural Networks with Mixed Time Delays
Xuefei Wu
2012-01-01
Full Text Available The generalized projective synchronization (GPS) between two different neural networks with nonlinear coupling and mixed time delays is considered. Several kinds of nonlinear feedback controllers are designed to achieve GPS between two such neural networks. Some results for GPS of these neural networks are proved theoretically by using the Lyapunov stability theory and the LaSalle invariance principle. Moreover, by comparison, we determine an optimal nonlinear controller from several ones and provide an adaptive update law for it. Computer simulations are provided to show the effectiveness and feasibility of the proposed methods.
Site Effects Estimation by a Transfer-Station Generalized Inversion Method
Zhang, Wenbo; Yu, Xiangwei
2016-04-01
Site effect is one of the essential factors in characterizing strong ground motion as well as in earthquake engineering design. In this study, the generalized inversion technique (GIT) is applied to estimate site effects, and the GIT is modified to improve its analytical ability. The GIT needs a reference station as a standard. Ideally, the reference station is located at a rock site, and its site effect is considered to be a constant. For the same earthquake, the record spectrum of a station of interest is divided by that of the reference station, and the source term is thereby eliminated. Thus site effects and the attenuation can be acquired. In the GIT process, the amount of earthquake data available for analysis is limited to that recorded by the reference station, and the stations whose site effects can be estimated are restricted to those that recorded events in common with the reference station. In order to overcome this limitation of the GIT, a modified GIT is put forward in this study, namely the transfer-station generalized inversion method (TSGI). Compared with the GIT, this modified GIT can be used to enlarge the data set and increase the number of stations whose site effects can be analyzed, which makes the solution much more stable. To verify the results of the GIT, a non-reference method, the genetic algorithm (GA), is applied to estimate absolute site effects. On April 20, 2013, an earthquake with magnitude MS 7.0 occurred in the Lushan region, China. After this event, several hundred aftershocks with ML<3.0 occurred in this region. The purpose of this paper is to investigate the site effects and Q factor for this area based on the aftershock strong motion records from the China National Strong Motion Observation Network System. Our results show that when the TSGI is applied instead of the GIT, the total number of events used in the inversion increases from 31 to 54 and the total number of stations whose site effect can be estimated
Q estimation of seismic data using the generalized S-transform
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore space is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value; such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and the S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China compare reasonably with oil-producing well locations.
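For context, the conventional spectral-ratio estimate that the paper refines can be sketched as follows; the synthetic spectra and travel time are made up for illustration, and the GST-based window corrections are not reproduced:

```python
import numpy as np

def estimate_q(freqs, amp_near, amp_far, dt):
    """Classic spectral-ratio Q estimate: under constant Q,
    ln(A_far / A_near) = -pi * f * dt / Q + const, so a least-squares
    line fit of the log spectral ratio against frequency gives Q
    from the slope."""
    slope = np.polyfit(freqs, np.log(amp_far / amp_near), 1)[0]
    return -np.pi * dt / slope

# Synthetic check: spectra attenuated with a known Q of 50
f = np.linspace(10.0, 60.0, 51)      # frequency axis, Hz
dt = 0.5                             # travel time between the two windows, s
a_near = np.ones_like(f)             # flat reference spectrum
a_far = a_near * np.exp(-np.pi * f * dt / 50.0)
q_est = estimate_q(f, a_near, a_far, dt)
```

With noise-free synthetic amplitudes the fit recovers the input Q exactly; on windowed real spectra the fit is biased, which is precisely the effect the paper's γ reparameterization corrects.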
A bottom-up approach to estimating cost elements of REDD+ pilot projects in Tanzania
Merger Eduard
2012-08-01
Full Text Available Background Several previous global REDD+ cost studies have been conducted, demonstrating that payments for maintaining forest carbon stocks have significant potential to be a cost-effective mechanism for climate change mitigation. These studies have mostly followed highly aggregated top-down approaches without estimating the full range of REDD+ cost elements, thus underestimating the actual costs of REDD+. Based on three REDD+ pilot projects in Tanzania, representing an area of 327,825 ha, this study explicitly adopts a bottom-up approach to data assessment. By estimating opportunity, implementation, transaction and institutional costs of REDD+ we develop a practical and replicable methodological framework to consistently assess REDD+ cost elements. Results Based on historical land use change patterns, current region-specific economic conditions and carbon stocks, project-specific opportunity costs ranged between US$ -7.8 and 28.8 per tCO2 for deforestation and forest degradation drivers such as agriculture, fuel wood production, unsustainable timber extraction and pasture expansion. The mean opportunity costs for the three projects ranged between US$ 10.1 – 12.5 per tCO2. Implementation costs comprised between 89% and 95% of total project costs (excluding opportunity costs), ranging between US$ 4.5 – 12.2 per tCO2 for a period of 30 years. Transaction costs for measurement, reporting and verification (MRV) and other carbon-market-related compliance costs comprised a minor share, between US$ 0.21 – 1.46 per tCO2. Similarly, the institutional costs comprised around 1% of total REDD+ costs, in a range of US$ 0.06 – 0.11 per tCO2. Conclusions The use of bottom-up approaches to estimate REDD+ economics by considering regional variations in economic conditions and carbon stocks has been shown to be an appropriate approach to provide policy and decision-makers robust economic information on REDD+. The assessment of opportunity costs is a crucial first step to
Estimating and projecting the effect of cold waves on mortality in 209 US cities.
Wang, Yan; Shi, Liuhua; Zanobetti, Antonella; Schwartz, Joel D
2016-09-01
The frequency, duration, and intensity of cold waves are expected to decrease in the near future under the changing climate. However, there is a lack of understanding of future mortality related to cold waves. The present study conducted a large-scale national projection to estimate future mortality attributable to cold waves during 1960-2050 in 209 US cities. Cold waves were defined as two, three, or at least four consecutive days with daily temperature lower than the 5th percentile of temperatures in each city. The lingering period of a cold wave was defined as the non-cold-wave days within seven days following that cold wave period. First, with 168 million residents in 209 US cities during 1962-2006, we fitted over-dispersed Poisson regressions to estimate the immediate and lingering effects of cold waves on mortality and tested whether the associations were modified by the duration of cold waves, the intensity of cold waves, and mean winter temperature (MWT). Then we projected future mortality related to cold waves using 20 downscaled climate models. Here we show that cold waves (both immediate and lingering effects) were associated with an increased but small risk of mortality. The associations varied substantially across climate regions. The risk increased with the duration and intensity of cold waves but decreased with MWT. The projected mortality related to cold waves would decrease from 1960 to 2050. Such a decrease, however, is small and may not be able to offset the potential increase in heat-related deaths if the adaptation to heat is not adequate.
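The cold-wave definition above, runs of consecutive days below a city's 5th temperature percentile, can be sketched directly; the toy temperature series is illustrative only:

```python
import numpy as np

def cold_wave_days(temps, min_run=2, pct=5):
    """Flag days that belong to a cold wave, defined as at least `min_run`
    consecutive days with temperature below the series' `pct`-th percentile
    (mirroring the study's city-specific definition). Returns a boolean mask."""
    threshold = np.percentile(temps, pct)
    cold = np.append(temps < threshold, False)  # sentinel closes the last run
    flags = np.zeros(len(temps), dtype=bool)
    start = None
    for i, is_cold in enumerate(cold):
        if is_cold and start is None:
            start = i                            # a cold run begins
        elif not is_cold and start is not None:
            if i - start >= min_run:
                flags[start:i] = True            # long enough: a cold wave
            start = None
    return flags

# Toy series: a two-day cold spell (flagged) and an isolated cold day (not)
temps = np.full(100, 10.0)
temps[20:22] = -5.0
temps[50] = -5.0
mask = cold_wave_days(temps)
```

The lingering-period days (up to seven non-cold-wave days after each flagged run) could be derived from the same mask.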
E Kula
1984-01-01
In this article a model to estimate a discount factor matrix is derived for discount rates between 1% and 15% for the United Kingdom on the basis of a public-sector project evaluation method known as the sum of discounted consumption flows. These factors can readily be used by project analysts working on United Kingdom projects, especially those in which costs and benefits extend over many years.
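A plain discount factor table over the article's 1%–15% range can be sketched as follows; note this shows only standard compound discounting, not Kula's consumption-flow weighting, so the numbers are not his published factors:

```python
import numpy as np

def discount_factor_matrix(rates, years):
    """Matrix of compound discount factors 1/(1+r)^t, one row per rate
    and one column per year."""
    r = np.asarray(rates)[:, None]
    t = np.asarray(years)[None, :]
    return (1.0 + r) ** (-t)

rates = np.arange(1, 16) / 100.0     # 1% .. 15%, as in the article
years = np.arange(1, 51)             # an assumed 50-year project horizon
table = discount_factor_matrix(rates, years)
```

An analyst would multiply each year's net benefit by the factor in the relevant rate row and sum across the columns.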
Zhang, Yiwei; Xu, Zhiyuan; Shen, Xiaotong; Pan, Wei
2014-08-01
There is an increasing need to develop and apply powerful statistical tests to detect multiple-trait, single-locus associations, such as those arising from neuroimaging genetics and other studies. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI), in addition to genome-wide single nucleotide polymorphisms (SNPs), thousands of neuroimaging and neuropsychological phenotypes have been collected as intermediate phenotypes for Alzheimer's disease. Although some classic methods like MANOVA and some newly proposed methods may be applied, they have their own limitations; for example, MANOVA cannot be applied to binary and other discrete traits. In addition, the relationships among these methods are not well understood. Importantly, since these tests are not data-adaptive, depending on the unknown association patterns among multiple traits and between multiple traits and a locus, they may or may not be powerful. In this paper we propose a class of data-adaptive weights and the corresponding weighted tests in the general framework of generalized estimating equations (GEE). A highly adaptive test is proposed to select the most powerful one from this class of weighted tests so that it can maintain high power across a wide range of situations. Our proposed tests are applicable to various types of traits with or without covariates. Importantly, we also analytically show relationships among some existing tests and our proposed tests, indicating that many existing tests are special cases of ours. Extensive simulation studies were conducted to compare and contrast the power properties of various existing and new methods. Finally, we applied the methods to an ADNI dataset to illustrate their performance. We conclude with a recommendation for the use of the GEE-based score test and our proposed adaptive test for their high and complementary performance.
Carr, J.R. (Nevada Univ., Reno, NV (United States). Dept. of Geological Sciences); Mao, Nai-hsien (Lawrence Livermore National Lab., CA (United States))
1992-01-01
Disjunctive kriging has been compared previously to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. Disjunctive kriging and generalized probability kriging give similar results for the simulated, normally distributed data, but differ considerably for the real data set, which has a non-normal distribution.
Poulsen, Per Rugaard; Cho, Byungchul; Keall, Paul
2010-01-01
The mathematical formalism of the method includes an individualized measure of the position estimation error in terms of an estimated 1D Gaussian distribution for the unresolved target position [2]. The present study investigates how well this 1D Gaussian predicts the actual distribution of position estimation … This finding indicates that individualized root-mean-square errors and 95% confidence intervals can be applied reliably to the estimated target trajectories.
Inflation in general covariant Hořava-Lifshitz gravity without projectability
Zhu, Tao; Wang, Anzhong
2012-01-01
In this paper, we study inflation in general covariant Hořava-Lifshitz gravity without the projectability condition. We write down explicitly the equations of the linear scalar perturbations of the FRW universe for a single scalar field without specializing to any gauge. Applying these equations to a particular gauge, we are able to obtain a master equation of the perturbations, in contrast to all the other versions of the theory without the projectability condition. This is because the current version of the theory has the same degrees of freedom as general relativity. To calculate the power spectrum and index, we first define the initial conditions as the ones that minimize the energy of the ground state. Then, we obtain the full solutions of the equation of motion by using the WKB approximations. From these solutions, we calculate the power spectrum and spectral index of the comoving curvature perturbations and find the corrections due to the high order spatial derivative terms of the theory to...
Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.
2013-12-01
In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods have historically been preferred to the deterministic ones. A French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with design values for extreme rainfall and floods. The object of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), the Agregee method (Margoum, 1992) and the Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and the Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than standard flood frequency analysis. Another interesting result is that the differences between the various extreme flood quantile estimations of the compared methods increase with return period, staying relatively moderate up to 100-year return levels. Results and discussions are illustrated throughout with the example
Skafte, Anders; Aenlle, Manuel L.; Brincker, Rune
2016-02-01
Measurement systems are being installed in more and more civil structures with the purpose of monitoring the general dynamic behavior of the structure. The instrumentation is typically done with accelerometers, from which experimental frequencies and mode shapes can be identified using modal analysis and used in health monitoring algorithms. But the use of accelerometers is not suitable for all structures. Structures like wind turbine blades and wings on airplanes can be exposed to lightning, which can cause the measurement systems to fail. Structures like these are often equipped with fiber sensors measuring the in-plane deformation. This paper proposes a method in which the displacement mode shapes and responses can be predicted using only strain measurements. The method relies on the newly discovered principle of local correspondence, which states that each experimental mode can be expressed as a unique subset of finite element modes. In this paper the technique is further developed to predict the mode shapes in different states of the structure. Once an estimate of the modes is found, responses can be predicted using the superposition of the modal coordinates weighted by the mode shapes. The method is validated with experimental tests on a scaled model of a two-span bridge instrumented with strain gauges. Random load was applied to simulate a civil structure under operating conditions, and strain mode shapes were identified using operational modal analysis.
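The expansion step described above, modal coordinates recovered from strains and displacements rebuilt by modal superposition, can be sketched as follows; the mode-shape matrices are made up for illustration, and the principle-of-local-correspondence mode estimation itself is not shown:

```python
import numpy as np

def predict_displacements(strain_resp, phi_strain, phi_disp):
    """Expand measured strain responses to displacement responses.

    phi_strain: (n_gauges, n_modes) strain mode shapes
    phi_disp:   (n_dofs, n_modes) displacement mode shapes
    strain_resp: (n_gauges, n_samples) measured strain time histories
    Modal coordinates are found by least squares from the strain modes,
    then weighted by the displacement mode shapes."""
    q = np.linalg.pinv(phi_strain) @ strain_resp   # modal coordinates
    return phi_disp @ q

# Toy check with consistent (invented) strain and displacement mode shapes
phi_strain = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
phi_disp = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [3.0, 1.0]])
q_true = np.array([[1.0, 2.0], [3.0, 4.0]])        # two modes, two samples
u = predict_displacements(phi_strain @ q_true, phi_strain, phi_disp)
```

With noise-free strains generated from the same modes, the displacement prediction is exact; with real data the least-squares step filters measurement noise across gauges.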
An Improved Heat Budget Estimation Including Bottom Effects for General Ocean Circulation Models
Carder, Kendall; Warrior, Hari; Otis, Daniel; Chen, R. F.
2001-01-01
This paper studies the effects of the underwater light field on heat-budget calculations of general ocean circulation models for shallow waters. The presence of a bottom significantly alters the estimated heat budget in shallow waters, which affects the corresponding thermal stratification and hence modifies the circulation. Based on the data collected during the COBOP field experiment near the Bahamas, we have used a one-dimensional turbulence closure model to show the influence of the bottom reflection and absorption on the sea surface temperature field. The water depth has an almost one-to-one correlation with the temperature rise. Effects of varying the bottom albedo by replacing the sea grass bed with a coral sand bottom, also has an appreciable effect on the heat budget of the shallow regions. We believe that the differences in the heat budget for the shallow areas will have an influence on the local circulation processes and especially on the evaporative and long-wave heat losses for these areas. The ultimate effects on humidity and cloudiness of the region are expected to be significant as well.
ZHANG SanGuo; LIAO Yuan
2008-01-01
In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation ∑_{i=1}^n X_i(y_i − μ(X_i'β)) = 0 for the univariate generalized linear model E(y|X) = μ(X'β). Given uncorrelated residuals {e_i = y_i − μ(X_i'β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{−1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = ∑_{i=1}^n X_i X_i'. We also show that the convergence rate above is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{−1} → 0 as the sample size n → ∞.
Bracegirdle, Thomas J. [British Antarctic Survey, Cambridge (United Kingdom); Stephenson, David B. [University of Exeter, Mathematics Research Institute, Exeter (United Kingdom); NCAS-Climate, Reading (United Kingdom)
2012-12-15
This study presents projections of twenty-first century wintertime surface temperature changes over the high-latitude regions based on the third Coupled Model Intercomparison Project (CMIP3) multi-model ensemble. The state-dependence of the climate change response on the present-day mean state is captured using a simple yet robust ensemble linear regression model. The ensemble regression approach gives different and more precise estimated mean responses compared to the ensemble mean approach. Over the Arctic in January, ensemble regression gives less warming than the ensemble mean along the boundary between sea ice and open ocean (the sea ice edge). Most notably, the results show 3 °C less warming over the Barents Sea (~7 °C compared to ~10 °C). In addition, the ensemble regression method gives projections that are 30% more precise over the Sea of Okhotsk, Bering Sea and Labrador Sea. For the Antarctic in winter (July) the ensemble regression method gives 2 °C more warming over the Southern Ocean close to the Greenwich Meridian (~7 °C compared to ~5 °C). Projection uncertainty was almost half that of the ensemble mean uncertainty over the Southern Ocean between 30°W and 90°E, and 30% less over the northern Antarctic Peninsula. The ensemble regression model avoids the need for explicit ad hoc weighting of models and exploits the whole ensemble to objectively identify overly influential outlier models. Bootstrap resampling shows that maximum precision over the Southern Ocean can be obtained with ensembles having as few as six climate models.
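The ensemble regression idea can be sketched in a few lines; the toy ensemble numbers are illustrative, not CMIP3 values, and the paper's outlier screening is not reproduced:

```python
import numpy as np

def ensemble_regression(present, change, observed_present):
    """Ensemble linear regression: regress each model's projected change on
    its simulated present-day mean state across the ensemble, then evaluate
    the fitted line at the observed present-day state. This captures the
    state-dependence of the response that a plain ensemble mean ignores."""
    slope, intercept = np.polyfit(present, change, 1)
    return intercept + slope * observed_present

# Toy ensemble: models starting from a warmer present state warm less
present = np.array([0.0, 1.0, 2.0, 3.0])   # per-model present-day state
change = np.array([7.0, 5.0, 3.0, 1.0])    # per-model projected warming
estimate = ensemble_regression(present, change, 2.5)
```

Here the plain ensemble mean would give 4.0 regardless of the observed state, whereas the regression estimate conditions on it.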
Estimating Value at Risk with the Generalized Kalman Filter
赵利锋; 张崇岐
2009-01-01
Building on the use of the Kalman filter to estimate the time-varying risk coefficient β, the generalized Kalman filter method is introduced to estimate the time-varying β coefficient; the VaR of a portfolio is then computed through the Sharpe diagonal model, and backtesting is applied to assess the accuracy of the VaR estimates produced by the two methods.
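A minimal sketch of the first step, tracking a time-varying beta with a scalar Kalman filter under a random-walk state model (the generalized filter, the Sharpe diagonal model aggregation and the backtesting layer are omitted; all numbers and the crude VaR formula are illustrative assumptions):

```python
import numpy as np

def kalman_beta(market, asset, q=1e-4, r=1e-2):
    """Scalar Kalman filter for a random-walk time-varying beta:
       beta_t = beta_{t-1} + w_t,   asset_t = beta_t * market_t + v_t."""
    beta, P = 1.0, 1.0            # initial state estimate and variance
    path = []
    for mt, yt in zip(market, asset):
        P = P + q                 # predict: state variance grows by q
        S = mt * P * mt + r       # innovation variance
        K = P * mt / S            # Kalman gain
        beta = beta + K * (yt - beta * mt)
        P = (1.0 - K * mt) * P
        path.append(beta)
    return np.array(path)

rng = np.random.default_rng(2)
T = 500
market = rng.normal(scale=0.01, size=T)                 # market returns
true_beta = 0.8 + 0.4 * np.sin(np.linspace(0, 3, T))    # slowly drifting beta
asset = true_beta * market + rng.normal(scale=0.002, size=T)
betas = kalman_beta(market, asset, q=1e-3, r=0.002**2)
# A crude one-day 95% VaR for a unit position, in the diagonal-model spirit:
var_95 = 1.645 * abs(betas[-1]) * market.std()
```

The filtered path `betas` is what backtesting would compare, via the implied VaR series, against realized losses.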
Ackchai Sirikijpanichkul
2015-01-01
Full Text Available For agricultural-based countries, the requirements on transportation infrastructure should not be limited to accommodating general traffic, but should also cover the transportation of crops and agricultural products during the harvest seasons. Most past research focuses on the development of truck trip estimation techniques for urban, statewide, or nationwide freight movement, but neglects the importance of rural freight movement, which contributes to pavement deterioration on rural roads, especially during harvest seasons. Recently, the Thai Government initiated a plan to construct a network of reservoirs within the northeastern region, aiming at improving the existing irrigation system, particularly in areas where a more effective irrigation system is needed. The plan is expected to bring new opportunities to expand the cultivation areas, increase economies of scale and enlarge the extent of the market area. As a consequence, its effects on truck trip generation need to be investigated to assure the service quality of the related transportation infrastructure. This paper proposes a combinatory input-output commodity-based approach to estimate truck trips on a rural highway infrastructure network. The large-scale irrigation project for northeastern Thailand is demonstrated as a case study.
Orlov A. I.
2015-05-01
Full Text Available Estimates of the errors of the characteristics of financial flows of investment projects are needed to make adequate management decisions, particularly in the rocket and space industry. Organizational-economic approaches to estimating the feasibility of innovation-investment projects to create rocket and space technologies require intensive use of numerical characteristics of the financial flows of long-term projects of this type. Organizational-economic support for control problems in the aerospace industry must therefore provide estimates of the errors of the characteristics of financial flows. Such estimates are an integral part of the organizational-economic support of innovation activity in the aerospace industry. They can be compared with prediction intervals, i.e. confidence estimation of predictive values; half the length of the confidence interval is the prediction error estimate. In this article we give a new method for estimating the errors of the main characteristics of investment projects. We focus on the net present value (NPV). Our method of error estimation is based on the results of the statistics of interval data, which is an integral part of the system of fuzzy interval mathematics. We construct an asymptotic theory corresponding to small deviations of the discount coefficients. Up to infinitesimals of higher order, the error of NPV is a linear function of the maximum possible error of the discount coefficients.
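The linearity claim can be checked numerically on hypothetical cash flows (the cash flows, discount rate and error bound below are illustrative, not from the paper; for errors applied directly to the discount coefficients, the worst-case NPV shift equals the sum of absolute cash flows times the maximum coefficient error):

```python
import numpy as np

c = np.array([-1000.0, 300.0, 400.0, 500.0, 200.0])  # cash flows by year (hypothetical)
r = 0.10
t = np.arange(len(c))
d = 1.0 / (1.0 + r) ** t                             # discount coefficients d_t

npv = c @ d
delta = 0.01                                         # max error of each discount coefficient
worst_case_error = np.sum(np.abs(c)) * delta         # linear bound from the abstract

# Perturbing each d_t by +/- delta in the worst direction attains the bound
# exactly, since NPV is linear in the discount coefficients.
npv_perturbed = c @ (d + delta * np.sign(c))
```

When the error is instead put on the discount *rate*, the same linear bound holds only to first order, which is where the "infinitesimals of higher order" qualifier enters.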
Estimation of seismically detectable portion of a gas plume: CO2CRC Otway project case study
Pevzner, Roman; Caspari, Eva; Bona, Andrej; Galvin, Robert; Gurevich, Boris
2013-04-01
The CO2CRC Otway project comprises several experiments involving CO2/CH4 or pure CO2 gas injection into different geological formations at the Otway test site (Victoria, Australia). During the first stage of the project, which finished in 2010, more than 64,000 t of gas were injected into a depleted gas reservoir at ~2 km depth. At the moment, preparations are ongoing for the next stage of the project, which aims to examine the capabilities of seismic monitoring of a small-scale injection (up to 15,000 t) into a saline formation. Time-lapse seismic is one of the most common methods for CO2 geosequestration monitoring. Significant experience was gained during the first stage of the project through acquisition and analysis of the 4D surface seismic and numerous time-lapse VSP surveys. In order to justify the second stage of the project and optimise the parameters of the experiment, several modelling studies were conducted. To predict the seismic signal, we populate a realistic geological model with elastic properties, model their changes using a fluid substitution technique applied to the fluid flow simulation results, and compute synthetic seismic baseline and monitor volumes. To assess detectability of the time-lapse signal caused by the injection, we assume that the time-lapse noise level will be equivalent to the level of difference between the last two Otway 3D surveys acquired in 2009 and 2010 using a conventional surface technique (15,000 lbs vibroseis sources and single geophones as receivers). To quantify the uncertainties in plume imaging/visualisation due to the time-lapse noise realisation, we propose to use multiple noise realisations with the same F-Kx-Ky amplitude spectra as the field noise for each synthetic signal volume. Having a signal detection criterion defined in terms of the signal/time-lapse noise level on a single trace, we estimate the visible portion of the plume as a function of this criterion. This approach also gives an opportunity to attempt to
Ikenberry, T. A.; Burnett, R. A.; Napier, B. A.; Reitz, N. A.; Shipler, D. B.
1992-02-01
Preliminary radiation doses were estimated and reported during Phase I of the Hanford Environmental Dose Reconstruction (HEDR) Project. As the project has progressed, additional information regarding the magnitude and timing of past radioactive releases has been developed, and the general scope of the required calculations has been enhanced. The overall HEDR computational model for computing doses attributable to atmospheric releases from Hanford Site operations is called HEDRIC (Hanford Environmental Dose Reconstruction Integrated Codes). It consists of four interrelated models: source term, atmospheric transport, environmental accumulation, and individual dose. The source term and atmospheric transport models are documented elsewhere. This report describes the initial implementation of the design specifications for the environmental accumulation model and computer code, called DESCARTES (Dynamic EStimates of Concentrations and Accumulated Radionuclides in Terrestrial Environments), and the individual dose model and computer code, called CIDER (Calculation of Individual Doses from Environmental Radionuclides). The computations required of these models and the design specifications for their codes were documented in Napier et al. (1992). Revisions to the original specifications and the basis for modeling decisions are explained. This report is not the final code documentation but gives the status of the model and code development to date. Final code documentation is scheduled to be completed in FY 1994 following additional code upgrades and refinements. The user's guide included in this report describes the operation of the environmental accumulation and individual dose codes and associated pre- and post-processor programs. A programmer's guide describes the logical structure of the programs and their input and output files.
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2003-10-29
Quantitative analysis of uptake and washout of cardiac single photon emission computed tomography (SPECT) radiopharmaceuticals has the potential to provide better contrast between healthy and diseased tissue, compared to conventional reconstruction of static images. Previously, we used B-splines to model time-activity curves (TACs) for segmented volumes of interest and developed fast least-squares algorithms to estimate spline TAC coefficients and their statistical uncertainties directly from dynamic SPECT projection data. This previous work incorporated the physical effects of attenuation and depth-dependent collimator response. In the present work, we incorporate scatter and use a computer simulation to study how scatter modeling affects directly estimated TACs and subsequent estimates of compartmental model parameters. An idealized single-slice emission phantom was used to simulate a 15 min dynamic 99mTc-teboroxime cardiac patient study in which 500,000 events containing scatter were detected from the slice. When scatter was modeled, unweighted least-squares estimates of TACs had root mean square (RMS) error that was less than 0.6% for normal left ventricular myocardium, blood pool, liver, and background tissue volumes, and averaged 3% for two small myocardial defects. When scatter was not modeled, RMS error increased to average values of 16% for the four larger volumes and 35% for the small defects. Noise-to-signal ratios (NSRs) for TACs ranged between 1% and 18% for the larger volumes and averaged 110% for the small defects when scatter was modeled. When scatter was not modeled, NSR improved by average factors of 1.04 for the larger volumes and 1.25 for the small defects, as a result of the better-posed (though more biased) inverse problem. Weighted least-squares estimates of TACs had slightly better NSR and worse RMS error, compared to unweighted least-squares estimates. Compartmental model uptake and washout parameter estimates obtained from the TACs were less
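The weighted-versus-unweighted least-squares tradeoff for TAC coefficients can be sketched in a much-simplified setting (a polynomial basis stands in for the B-spline system model and Poisson noise mimics projection statistics; everything here is an illustrative assumption, not the study's simulation):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 15, 60)                         # minutes
B = np.column_stack([np.ones_like(t), t, t**2])    # crude stand-in for a spline basis
coef_true = np.array([2.0, 1.5, -0.08])
signal = B @ coef_true                             # noiseless "time-activity curve"
counts = rng.poisson(signal * 50) / 50.0           # Poisson-like measurement noise

# Unweighted least squares
c_u, *_ = np.linalg.lstsq(B, counts, rcond=None)

# Weighted least squares, weights ~ 1/variance (variance proportional to the mean
# for Poisson data); weighting trades a little bias for lower noise, echoing the
# NSR-vs-RMS-error tradeoff reported in the abstract.
w = 1.0 / np.maximum(signal, 1e-6)
Bw = B * (w[:, None] ** 0.5)
c_w, *_ = np.linalg.lstsq(Bw, counts * w ** 0.5, rcond=None)
```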
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end to end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
Navarro-Mateu, Fernando; Tormo, MJ; Vilagut, G; Alonso, J; Ruíz-Merino, G; Escámez, T; Salmerón, D; Júdez, J; Martínez, S; Navarro, C
2013-01-01
Background Multidisciplinary collaboration between clinicians, epidemiologists, neurogeneticists and statisticians on research projects has been encouraged to improve our knowledge of the complex mechanisms underlying the aetiology and burden of mental disorders. The PEGASUS-Murcia (Psychiatric Enquiry to General Population in Southeast Spain-Murcia) project was designed to assess the prevalence of common mental disorders and to identify the risk and protective factors, and it also included the collection of biological samples to study gene–environment interactions in the context of the World Mental Health Survey Initiative. Methods and analysis The PEGASUS-Murcia project is a new cross-sectional face-to-face interview survey based on a representative sample of non-institutionalised adults in the Region of Murcia (Mediterranean Southeast, Spain). Trained lay interviewers used the latest version of the computer-assisted personal interview of the Composite International Diagnostic Interview (CIDI 3.0) for use in Spain, specifically adapted for the project. Two biological samples of buccal mucosal epithelium will be collected from each interviewed participant, one for DNA extraction for genomic and epigenomic analyses and the other to obtain mRNA for gene expression quantification. Several quality control procedures will be implemented to assure the highest reliability and validity of the data. This article describes the rationale, sampling methods and questionnaire content as well as the laboratory methodology. Ethics and dissemination Informed consent will be obtained from all participants and a Regional Ethics Research Committee has approved the protocol. Results will be disseminated in peer-reviewed publications and presented at national and international conferences. Discussion Cross-sectional studies, which combine detailed personal information with biological data, offer new and exciting opportunities to study the gene
Kim, Ho Sung
2013-12-01
A quantitative method was developed for estimating the expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz. the examiner's expertise, the examinee's achieved expertise, the assessment task difficulty and the examinee's performance. The method applies to complex assessment, including final-year project thesis assessment with peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, the derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's achieved expertise. To identify the relevant expertise areas that depend on the complexity of the assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards complex assessment owing to the relativity between implicitness and explicitness, and is capable of identifying the areas of expertise required for scale development.
Wu Chi-Yeh
2010-01-01
Full Text Available Abstract Background MicroRNAs (miRNAs are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as support vector machine (SVM are extensively adopted in these ab initio approaches due to the prediction performance they achieved. On the other hand, logic based classifiers such as decision tree, of which the constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier named the generalized Gaussian density estimator (G2DE based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G
Diego Rivera; Yessica Rivas; Alex Godoy
2015-02-01
Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool for assisting the modeller with the identification of critical parameters.
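The GLUE workflow described above can be sketched with a toy model (the two-parameter rainfall-runoff stand-in, the Nash-Sutcliffe likelihood measure and the 0.7 behavioural threshold are illustrative assumptions, not the study's configuration):

```python
import numpy as np

def model(params, forcing):
    """Toy lumped 'hydrological' model: runoff = a * precipitation + b."""
    a, b = params
    return a * forcing + b

rng = np.random.default_rng(3)
P = rng.gamma(2.0, 10.0, size=200)                       # areal precipitation series
obs = 0.6 * P + 5.0 + rng.normal(scale=3.0, size=200)    # synthetic observed runoff

# GLUE: sample parameter sets from broad priors, keep those whose likelihood
# measure (here Nash-Sutcliffe efficiency) exceeds a behavioural threshold,
# then form prediction bounds from the retained (equifinal) runs.
samples = rng.uniform([0.0, 0.0], [1.5, 20.0], size=(5000, 2))
sims = np.array([model(p, P) for p in samples])
ns = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioural = sims[ns > 0.7]                             # behavioural parameter sets
lower = np.percentile(behavioural, 5, axis=0)
upper = np.percentile(behavioural, 95, axis=0)           # 90% uncertainty bounds
mean_width = (upper - lower).mean()
```

Fixing one parameter (as the study did for the areal-precipitation control) shrinks the behavioural set and hence `mean_width`, which is exactly the narrowing of uncertainty bounds the abstract reports.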
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) it performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioural parameter sets is nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
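The lowest-order case mentioned, the regular generalized partial credit model, can be sketched as follows (parameter values are arbitrary; the semi-parametric extension would replace the linear predictor a(θ − b_j) with a monotonic polynomial in θ):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities of the generalized partial credit model for one
    item with K+1 ordered categories, where b holds step parameters b_1..b_K."""
    z = np.concatenate(([0.0], np.cumsum(a * (theta - b))))  # cumulative predictor
    z = z - z.max()                                          # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Four ordered categories (K = 3 steps), evaluated at a single ability value;
# with a first-degree (linear) predictor this is exactly the GPCM.
probs = gpcm_probs(theta=0.5, a=1.2, b=np.array([-1.0, 0.0, 1.0]))
```

In the semi-parametric model, the polynomial degree is an extra tuning choice; at degree one the two models coincide, which is why the GPCM is "included at the lowest order."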
Jonker, W R
2014-06-29
As part of the 5th National Audit Project of the Royal College of Anaesthetists and the Association of Anaesthetists of Great Britain and Ireland concerning accidental awareness during general anaesthesia, we issued a questionnaire to every consultant anaesthetist in each of 46 public hospitals in Ireland, represented by 41 local co-ordinators. The survey ascertained the number of new cases of accidental awareness becoming known to them for patients under their care or supervision for a calendar year, as well as their career experience. Consultants from all hospitals responded, with an individual response rate of 87% (299 anaesthetists). There were eight new cases of accidental awareness that became known to consultants in 2011; an estimated incidence of 1:23 366. Two out of the eight cases (25%) occurred at or after induction of anaesthesia, but before surgery; four cases (50%) occurred during surgery; and two cases (25%) occurred after surgery was complete, but before full emergence. Four cases were associated with pain or distress (50%), one after an experience at induction and three after experiences during surgery. There were no formal complaints or legal actions that arose in 2011 related to awareness. Depth of anaesthesia monitoring was reported to be available in 33 (80%) departments, and was used by 184 consultants (62%), 18 (6%) routinely. None of the 46 hospitals had a policy to prevent or manage awareness. Similar to the results of a larger survey in the UK, the disparity between the incidence of awareness as known to anaesthetists and that reported in trials warrants explanation. Compared with UK practice, there appears to be greater use of depth of anaesthesia monitoring in Ireland, although this is still infrequent.
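A back-of-envelope check of the reported incidence (the survey's denominator is not stated in the abstract; this only inverts the quoted 1:23,366 ratio to show the implied caseload):

```python
# Eight new cases at an estimated incidence of 1 in 23,366 implies roughly
# 187,000 general anaesthetics in the surveyed year.
cases = 8
incidence_denominator = 23_366
implied_anaesthetics = cases * incidence_denominator
```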
General technique for analytical derivatives of post-projected Hartree-Fock
Tsuchimochi, Takashi; Ten-no, Seiichiro
2017-02-01
In electronic structure theory, the availability of an analytical derivative is one of the desired features for a method to be useful in practical applications, as it allows for geometry optimization as well as computation of molecular properties. With the recent advances in the development of symmetry-projected Hartree-Fock (PHF) methods, we here aim at further extensions by devising the analytic gradients of post-PHF approaches with a special focus on spin-extended (spin-projected) configuration interaction with single and double substitutions (ECISD). Just like standard single-reference methods, the mean-field PHF part does not require the corresponding coupled-perturbed equation to be solved, while the correlation energy term needs the orbital relaxation effect to be accounted for, unless the underlying molecular orbitals are variationally optimized in the presence of the correlation energy. We present a general strategy for post-PHF analytical gradients, which closely parallels that for single-reference methods, yet addressing the major difference between them. The similarity between ECISD and multi-reference CI not only in the energy but also in the optimized geometry is clearly demonstrated by the numerical examples of ozone and cyclobutadiene.
General Electric Company Hanford Works, Project C-431-A Production Facility-Section A, design report
Colburn, R.T.
1951-03-29
The 100-C project is to be located adjacent to the present 100-B Area. It is planned to build an addition to the present 181-B river pump house using the same elevations for pump settings, intakes, and floors as for the present pump house, thus maintaining the same suction conditions and flood protection as B Area. The 105 Building will be located on higher ground than B Area and therefore, protection against possible flood damage is assured. This report is divided into the following sections: (1) general description of project; (2) addition to existing river pump house; (3) raw water lines from 181-B addition to 183-C lead house; (4) the 183-C filter plant; (5) 190-C process pump house; (6) power house addition; (7) high tanks; (8) retention basins; (9) outside steam lines; (10) primary substation; (11) outside underground lines; (12) outside electric lines; (13) roads, railroads, walks, fences; (14) structural design of all buildings; and (15) architectural design of all buildings.
Implicit Knowledge of General Upper Secondary School in a Bridge-building Project
Rasmussen, Annette; Andreasen, Karen Egedal
2016-01-01
Bridge-building activities are practiced widely in the education systems of Europe. They are meant to bridge transitions between lower and upper secondary school and form a mandatory part of the youth guidance system in Denmark. In this ethnographic case study of a bridge-building project in a rural area in Denmark, we analyse the implicit knowledge of the general upper secondary school as it is practiced in a bridge-building project, and how it is experienced by the pupils against the background of their prior knowledge. The analysis is theoretically informed especially by the code concepts of Basil Bernstein.
AUPHEP—Austrian Project on Health Effects of Particulates—general overview
Hauck, H.; Berner, A.; Frischer, T.; Gomiscek, B.; Kundi, M.; Neuberger, M.; Puxbaum, H.; Preining, O.; Auphep-Team
AUPHEP was started in 1999 as a five-year program to investigate the state of the atmospheric aerosol with respect to effects on human health. At four different sites in Austria (three urban and one rural), an extended monitoring program was conducted for PM1, PM2.5 and PM10, as well as particle number concentration, for 12 months each. Besides continuous measurements using TEOM and beta attenuation, high-volume sampling of PM2.5 and PM10 provided samples for chemical analyses of various ions, heavy metals and organic compounds. Furthermore, carbonaceous material (TC, EC, OC) was analyzed year-round, and PAHs on selected days. From collocated public monitoring stations, pollutant gases (SO2, NO, NO2, O3, CO) and meteorological components are also available. In winter and summer campaigns, aerosol size spectra including chemical components were measured for at least one week each. All data are collected in a project database (CD-ROM). While extensive data analyses will be presented in following papers, some general results are presented within this paper: annual averages are between 10 and 20 μg/m³ for PM1, between 15 and 26 μg/m³ for PM2.5, and between 20 and 38 μg/m³ for PM10. Number concentrations are between 10,000 and 30,000 cm⁻³. Urban concentrations are usually higher in winter, rural concentrations in summer. PM2.5 is on average around 70% of PM10; for PM1 this fraction is about 57%. Several studies on health effects are included in this project: a cross-sectional study on preschool and school children regarding lung function measurements and questionnaires about respiratory impairment in the surrounding area of the monitoring sites, as well as time series studies on mortality and respiratory morbidity in the general population.
APhoRISM FP7 project: the Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure
Merucci, Luca; Corradini, Stefano; Bignami, Christian; Stramondo, Salvatore
2014-05-01
APHORISM is an FP7 project that aims to develop innovative products to support the management and mitigation of volcanic and seismic crises. Satellite and ground measurements will be managed in a novel manner to provide new and improved products in terms of accuracy and quality of information. The Multi-platform volcanic Ash Cloud Estimation (MACE) infrastructure will exploit the complementarity between geostationary and polar satellite sensors and ground measurements to improve ash detection and retrieval and to fully characterize volcanic ash clouds from the source to the atmosphere. The basic idea behind the proposed method is to manage, in a novel manner, the volcanic ash retrievals at the space-time scale of typical geostationary observations, using both the polar satellite estimates and in-situ measurements. The typical ash thermal infrared (TIR) retrieval will be integrated by using a wider spectral range from the visible (VIS) to the microwave (MW), and ash detection will be extended to cases of cloudy atmosphere or steam plumes. All the MACE ash products will be tested on three recent eruptions representative of different eruption styles in different clear or cloudy atmospheric conditions: Eyjafjallajokull (Iceland) 2010, Grimsvotn (Iceland) 2011 and Etna (Italy) 2011-2012. The MACE infrastructure will be suitable for implementation in the next generation of ESA Sentinel satellite missions.
Ju, Lili; Tian, Li; Wang, Desheng
2008-10-31
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
A NOTE ON THE EFFECT OF PROJECTIONS ON BOTH MEASURES AND THE GENERALIZATION OF q-DIMENSION CAPACITY
Bilel Selmi
2016-12-01
Full Text Available In this paper, we are concerned both with the properties of the generalization of the L^q-spectrum relative to two Borel probability measures and with the generalized q-dimension Riesz capacity. We are also interested in the study of their behaviour under orthogonal projections.
Makram KRIT
2016-01-01
Full Text Available This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology relates to maximum likelihood estimation (MLE) in the case of missing data. A bathtub-shaped failure intensity formulation of repairable system reliability is presented, and the estimation of its parameters is carried out through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, large-sample interval estimation from the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.
Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)
2015-12-15
The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)
Linear and nonlinear associations between general intelligence and personality in Project TALENT.
Major, Jason T; Johnson, Wendy; Deary, Ian J
2014-04-01
Research on the relations of personality traits to intelligence has primarily been concerned with linear associations. Yet, there are no a priori reasons why linear relations should be expected over nonlinear ones, which represent a much larger set of all possible associations. Using 2 techniques, quadratic and generalized additive models, we tested for linear and nonlinear associations of general intelligence (g) with 10 personality scales from Project TALENT (PT), a nationally representative sample of approximately 400,000 American high school students from 1960, divided into 4 grade samples (Flanagan et al., 1962). We departed from previous studies, including one with PT (Reeve, Meyer, & Bonaccio, 2006), by modeling latent quadratic effects directly, controlling the influence of the common factor in the personality scales, and assuming a direction of effect from g to personality. On the basis of the literature, we made 17 directional hypotheses for the linear and quadratic associations. Of these, 53% were supported in all 4 male grades and 58% in all 4 female grades. Quadratic associations explained substantive variance above and beyond linear effects (mean R² between 1.8% and 3.6%) for Sociability, Maturity, Vigor, and Leadership in males and Sociability, Maturity, and Tidiness in females; linear associations were predominant for other traits. We discuss how suited current theories of the personality-intelligence interface are to explain these associations, and how research on intellectually gifted samples may provide a unique way of understanding them. We conclude that nonlinear models can provide incremental detail regarding personality and intelligence associations.
Zhang, H; Kong, V; Jin, J [Georgia Regents University Cancer Center, Augusta, GA (Georgia); Ren, L; Zhang, Y; Giles, W [Duke University Medical Center, Durham, NC (United States)
2015-06-15
Purpose: A synchronized moving grid (SMOG) has been proposed to reduce scatter and lag artifacts in cone beam computed tomography (CBCT). However, information is missing in each projection because certain areas are blocked by the grid. A previous solution to this issue is acquiring 2 complementary projections at each position, which increases scanning time. This study reports our first result using an inter-projection sensor fusion (IPSF) method to estimate the missing projection data in our prototype SMOG-based CBCT system. Methods: An in-house SMOG assembly with a 1:1 grid of 3 mm gap has been installed in a CBCT benchtop. The grid moves back and forth with a 3-mm amplitude and up to 20-Hz frequency. A control program in LabVIEW synchronizes the grid motion with the platform rotation and x-ray firing so that the grid patterns for any two neighboring projections are complementary. A Catphan was scanned with 360 projections. After scatter correction, the IPSF algorithm was applied to estimate the missing signal for each projection using the information from the 2 neighboring projections. The Feldkamp-Davis-Kress (FDK) algorithm was applied to reconstruct CBCT images. The CBCTs were compared to those reconstructed using normal projections without applying the SMOG system. Results: The SMOG-IPSF method may reduce image dose by half due to the radiation blocked by the grid. The method almost completely removed scatter-related artifacts, such as cupping artifacts. The evaluation of line pair patterns in the Catphan suggested that the spatial resolution degradation was minimal. Conclusion: The SMOG-IPSF is promising in reducing scatter artifacts and improving image quality while reducing radiation dose.
Strong Consistency of Generalized Estimation Equation Root
王健发; 陈淑兰; 闫莉
2011-01-01
This paper studies the strong consistency of generalized estimating equation roots. Under correct specification of the generalized estimating equation, when the number of subjects goes to infinity and the residuals form a martingale difference sequence, the paper uses the martingale Bernstein inequality to prove strong consistency of the generalized estimating equation root. It also proves strong consistency in the case where the number of subjects is one and the number of observations on that subject goes to infinity, with the residuals again forming a martingale difference sequence.
Generalized Consistency for Kernel Density Estimation
王敏; 李开灿
2015-01-01
In this paper, we discuss the consistency of kernel density estimation under independent sampling. We give definitions of generalized consistency for kernel density estimation and obtain several kinds of generalized consistency of the kernel density estimate under the Pearson-χ² distance and the Kullback-Leibler distance.
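A minimal sketch of what consistency in the Kullback-Leibler sense means in practice, using a plain Gaussian-kernel estimator; this is illustrative only and is not the paper's proof technique:

```python
import numpy as np

rng = np.random.default_rng(1)

def kde(x_grid, sample, h):
    """Gaussian-kernel density estimate evaluated on x_grid, bandwidth h."""
    u = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 801)
dx = x[1] - x[0]
true_pdf = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def kl(p, q):
    """Discretized Kullback-Leibler distance D(p || q)."""
    return np.sum(p * np.log(p / q)) * dx

# Consistency in the KL sense: the distance to the true density shrinks
# as n grows (bandwidth h ~ n^(-1/5), the usual second-order-kernel rate).
divs = []
for n in (100, 10_000):
    sample = rng.standard_normal(n)
    divs.append(kl(true_pdf, kde(x, sample, h=1.06 * n ** -0.2)))
print(divs)   # second entry smaller than the first
```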
Greeley, Jeffrey Philip; Nørskov, Jens Kehlet
2005-01-01
A simple scheme for the estimation of oxygen binding energies on transition metal surface alloys is presented. It is shown that a d-band center model of the alloy surfaces is a convenient and appropriate basis for this scheme; variations in chemical composition, strain effects, and ligand effects … for the estimation of oxygen binding energies on a wide variety of transition metal alloys. (c) 2005 Elsevier B.V. All rights reserved.
Estimates of Lp Modulus of Continuity of Generalized Bounded Variation Classes
Heping Wang
2014-01-01
Full Text Available Some sharp estimates of the L^p (1 ≤ p < ∞) modulus of continuity of classes of Λ_φ-bounded variation are obtained. As direct applications, we obtain estimates of the order of Fourier coefficients of functions of Λ_φ-bounded variation, and we also characterize some sufficient and necessary conditions for the embedding relations H_p^ω ⊂ Λ_φBV. Our results include the corresponding known results for the class ΛBV as a special case.
Gómez-Alba, Sebastián; Fajardo-Zarate, Carlos Eduardo; Vargas, Carlos Alberto
2016-11-01
At least 156 earthquakes (Mw 2.8-4.4) were detected in Puerto Gaitán, Colombia (Eastern Llanos Basin) between April 2013 and December 2014. Out of context, this figure is not surprising. However, from its inception in 1993, the Colombian National Seismological Network (CNSN) found no evidence of significant seismic events in this region. In this study, we used CNSN data to model the rupture front and orientation of the highest-energy events. For these earthquakes, we relied on a joint inversion method to estimate focal mechanisms and, in turn, determine the area's fault trends and stress tensor. While the stress tensor defines maximum stress with normal tendency, focal mechanisms generally represent normal faults with NW orientation, an orientation which lines up with the tracking rupture achieved via Back Projection Imaging for the study area. We ought to bear in mind that this anomalous earthquake activity has taken place within oil fields. In short, the present paper argues that, based on the spatiotemporal distribution of seismic events, hydrocarbon operations may induce the study area's seismicity.
CHEN Yong; LI Biao
2004-01-01
Applying the generalized method, a direct and unified algebraic method for constructing multiple travelling wave solutions of nonlinear partial differential equations (PDEs), and implementing it in a computer algebra system, we consider the generalized Zakharov-Kuznetsov equation with nonlinear terms of any order. As a result, we not only successfully recover the previously known travelling wave solutions found by various existing tanh methods and other sophisticated methods, but also obtain some new formal solutions. The solutions obtained include kink-shaped solitons, bell-shaped solitons, singular solitons, and periodic solutions.
He, Wu
2014-01-01
Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…
Strong consistency of maximum quasi-likelihood estimates in generalized linear models
Yin, Changming; Zhao, Lincheng
2005-01-01
In a generalized linear model with q × 1 responses, bounded and fixed p × q regressors Z_i, and a general link function, under the most general assumption on the minimum eigenvalue of ∑_{i=1}^n Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that with probability one, the quasi-likelihood equation has a solution β̂_n for all large sample sizes n, which converges to the true regression parameter β_0. This result is an essential improvement over the relevant results in the literature.
Li, Mao-Fen; Fan, Li; Liu, Hong-Bin; Guo, Peng-Tao; Wu, Wei
2013-01-01
Estimation of daily global solar radiation (Rs) from routinely measured temperature data has been widely developed and used in many different areas of the world. However, many of these models are site-specific. It is assumed that a general model for estimating daily Rs using temperature variables and geographical parameters could be achieved within a climatic region. This paper attempts to develop a general model to estimate daily Rs using routinely measured temperature data (maximum (Tmax, °C) and minimum (Tmin, °C) temperatures) and site geographical parameters (latitude (La, °N), longitude (Ld, °E) and altitude (Alt, m)) for Guizhou and the Sichuan basin of southwest China, which is classified into the hot-summer and cold-winter climate zone. Comparison analysis was carried out using statistical indicators such as root mean squared error of percentage (RMSE%), modeling efficiency (ME), coefficient of residual mass (CRM) and mean bias error (MBE). Site-dependent daily Rs estimating models were calibrated and validated using long-term observed weather data. A general formula was then obtained from the site geographical parameters and the better-fitting site-dependent models, with mean RMSE% of 38.68%, mean MBE of 0.381 MJ m-2 d-1, mean CRM of 0.04 and mean ME of 0.713.
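One widely used temperature-based form that such site-dependent models often resemble is the Hargreaves-type relation Rs = k_T * sqrt(Tmax - Tmin) * Ra. The sketch below uses the standard FAO-56 formulas for extraterrestrial radiation Ra; the coefficient k_T = 0.16 is a generic interior-site value and is not a coefficient fitted in the paper.

```python
import numpy as np

def extraterrestrial_radiation(lat_deg, doy):
    """Daily extraterrestrial radiation Ra (MJ m-2 d-1), FAO-56 formulas."""
    phi = np.radians(lat_deg)
    dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)        # inverse rel. distance
    delta = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)  # solar declination
    ws = np.arccos(-np.tan(phi) * np.tan(delta))          # sunset hour angle
    return (24 * 60 / np.pi) * 0.0820 * dr * (
        ws * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(ws)
    )

def rs_hargreaves(tmax, tmin, lat_deg, doy, kt=0.16):
    """Estimate daily global solar radiation from the diurnal temperature range."""
    return kt * np.sqrt(tmax - tmin) * extraterrestrial_radiation(lat_deg, doy)

# Example: a mid-summer day at 30 deg N with a 10 degC diurnal range.
print(rs_hargreaves(tmax=32.0, tmin=22.0, lat_deg=30.0, doy=180))
```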
National Aeronautics and Space Administration — Recent work has developed a number of architectures and algorithms for accurately estimating spacecraft and formation states. The estimation accuracy achievable...
Abejuela, Harmony Raylen; Osser, David N
2016-01-01
This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.
S. Vaidyanathan
2014-11-01
Full Text Available This research work proposes a five-term 3-D novel conservative chaotic system with a quadratic nonlinearity and a quartic nonlinearity. Conservative chaotic systems have the important property that they are volume conserving. The Lyapunov exponents of the 3-D novel chaotic system are obtained as L_1 = 0.0836, L_2 = 0 and L_3 = -0.0836. Since the sum of the Lyapunov exponents is zero, the 3-D novel chaotic system is conservative. Thus, the Kaplan-Yorke dimension of the 3-D novel chaotic system is easily seen to be 3.0000. The phase portraits of the novel chaotic system simulated using MATLAB depict the chaotic attractor of the novel system. This research work also discusses other qualitative properties of the system. Next, an adaptive controller is designed to achieve Generalized Projective Synchronization (GPS) of two identical novel chaotic systems with unknown system parameters. MATLAB simulations are shown to validate and demonstrate the GPS results derived in this work.
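The Kaplan-Yorke dimension quoted in the abstract follows mechanically from the exponent spectrum (a zero sum gives the full phase-space dimension 3 for this conservative system). A small sketch of the computation, using the exponent values reported above:

```python
import numpy as np

def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov exponent spectrum."""
    lyap = np.sort(lyap)[::-1]            # sort descending
    csum = np.cumsum(lyap)
    # j = largest count of leading exponents with a non-negative partial sum
    j = np.max(np.where(csum >= 0)[0]) + 1
    if j == len(lyap):
        return float(len(lyap))
    return j + csum[j - 1] / abs(lyap[j])

# Exponents reported for the conservative system: their sum is zero,
# so the dimension equals the full phase-space dimension.
L = [0.0836, 0.0, -0.0836]
print(kaplan_yorke_dimension(L))   # 3.0
```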
A method of generalized projections (MGP) ghost correction algorithm for interleaved EPI.
Lee, K J; Papadakis, N G; Barber, D C; Wilkinson, I D; Griffiths, P D; Paley, M N J
2004-07-01
Investigations into the method of generalized projections (MGP) as a ghost correction method for interleaved EPI are described. The technique is image-based and does not require additional reference scans. The algorithm was found to be more effective if a priori knowledge was incorporated to reduce the degrees of freedom, by modeling the ghosting as arising from a small number of phase offsets. In simulations with phase variation between consecutive shots for n-interleaved echo planar imaging (EPI), ghost reduction was achieved for n = 2 only. With no phase variation between shots, ghost reduction was obtained with n up to 16. Incorporating a relaxation parameter was found to improve convergence. Dependence of convergence on the region of support was also investigated. A fully automatic version of the method was developed, using results from the simulations. When tested on in vivo 2-, 16-, and 32-interleaved spin-echo EPI data, the method achieved deghosting and image restoration close to that obtained by both reference scan and odd/even filter correction, although some residual artifacts remained.
M El Hamma; R Daher
2014-05-01
Using a generalized spherical mean operator, we define a generalized modulus of smoothness in the space $L^2_k(\mathbb{R}^d)$. Based on the Dunkl operator, we define a Sobolev-type space and K-functionals. The main result of the paper is the proof of the equivalence theorem for a K-functional and a modulus of smoothness for the Dunkl transform on $\mathbb{R}^d$.
Theesar, S Jeeva Sathya; Balasubramaniam, P; Banerjee, Santo
2012-09-01
In Chaos 19, 013102 (2009), the author proposed generalized projective synchronization for time delay systems using a nonlinear observer and obtained a sufficient condition to ensure projective synchronization for modulated time varying delay. There are concerns with the obtained conditions, as the result was applicable only to the trivial case of time varying delay satisfying dτ_1(t)/dt < 1. In this paper, we note the drawbacks of the proposed sufficient condition. A new, improved sufficient condition for ensuring the projective synchronization of time varying delayed systems is presented. The proposed new criteria have been verified by adopting the Ikeda system.
Buffalano, C.; Fogleman, S.; Gielecki, M.
1976-01-01
A methodology is outlined which can be used to estimate the costs of research and development projects. The approach uses the Delphi technique, a method developed by the Rand Corporation for systematically eliciting and evaluating group judgments in an objective manner. The use of the Delphi technique allows for the integration of expert opinion into the cost-estimating process in a consistent and rigorous fashion. This approach can also signal potential cost-problem areas, a result that can be a useful tool in planning additional cost analysis or in estimating contingency funds. A Monte Carlo approach is also examined.
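One way Delphi panel output can feed a Monte Carlo cost analysis: each work package gets a (low, mode, high) consensus estimate, sampled here from a triangular distribution and summed. All package names and figures are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Delphi consensus: per work package, (low, mode, high) cost in $k.
packages = {
    "design":      (80, 100, 150),
    "prototyping": (200, 260, 400),
    "testing":     (50, 70, 120),
}

# Monte Carlo: sample each package from a triangular distribution and sum.
n = 100_000
total = sum(
    rng.triangular(lo, mode, hi, n) for lo, mode, hi in packages.values()
)

print(np.mean(total))              # expected project cost
print(np.percentile(total, 90))    # 90th percentile -> contingency sizing
```

The spread of `total` is exactly the "contingency fund" signal the abstract mentions: the gap between the 90th percentile and the mean is a defensible reserve.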
Self Estimates of General, Crystallized, and Fluid Intelligences in an Ethnically Diverse Population
Kaufman, James C.
2012-01-01
Self-estimated intelligence is a quick way to assess people's conceptions of their own abilities. Furnham (2001) and colleagues have used this technique to make comparisons across culture and gender and different approaches to intelligence (such as "g" or Multiple Intelligences). This study seeks to build on past work in two ways. First, a large,…
Quesada, Luis
2012-01-01
3D motion tracking is a critical task in many computer vision applications. Unsupervised markerless 3D motion tracking systems determine the most relevant object in the screen and then track it by continuously estimating its projection features (center and area) from the edge image and a point inside the relevant object projection (namely, inner point), until the tracking fails. Existing reliable object projection feature estimation techniques are based on ray-casting or grid-filling from the inner point. These techniques assume the edge image to be accurate. However, in real case scenarios, edge miscalculations may arise from low contrast between the target object and its surroundings or motion blur caused by low frame rates or fast moving target objects. In this paper, we propose a barrier extension to casting-based techniques that mitigates the effect of edge miscalculations.
Li, Rui; Landex, Alex; Salling, Kim Bang
The purpose of the paper is to estimate railway project construction cost based on nominal market prices. Currently, estimation of project costs within railway infrastructure procurement is particularly challenging because 1) construction costs highly depend on possession timeframes and duration … and 2) railway construction work costs are not transparent in the market. This paper suggests separating the costs into 3 sub-categories: materials, labour and machinery. Evidently, the materials are further broken down into subcomponents which then remain fixed, whereas the cost of labour and machinery … construction project, the new line to the fixed link across Fehmarn Belt, is introduced, where it is shown that the non-material cost is about 19% of the total expenditure. By assuming three sets of track blocking scenarios with the same amount of construction works, it is proven that given an optimal track …
National Aeronautics and Space Administration — A Multi-Depth Underwater Spectroradiometer for Validation of Remotely-Sensed Ocean Color and Estimation of Seawater Biogeochemical Properties (A) Project
Shishir B Sahay; T Meghasyam; Rahul K Roy; Gaurav Pooniwala; Sasank Chilamkurthy; Vikram Gadre
2015-06-01
This paper is targeted towards a general readership in signal processing. It intends to provide a brief tutorial exposure to the Fractional Fourier Transform, followed by a report on experiments performed by the authors on a Generalized Time Frequency Transform (GTFT) proposed by them in an earlier paper. The paper also discusses the extension of the uncertainty principle to the GTFT. This paper discusses some analytical results of the GTFT. We identify the eigenfunctions and eigenvalues of the GTFT. The time shift property of the GTFT is discussed. The paper describes methods for estimation of parameters of individual chirp signals on receipt of a noisy mixture of chirps. A priori knowledge of the nature of chirp signals in the mixture – linear or quadratic is required, as the two proposed methods fall in the category of model-dependent methods for chirp parameter estimation.
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem and an indispensable step in image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation of the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
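The flavor of MAP coefficient shrinkage described above can be sketched in its best-known special case: a Laplacian prior on the coefficients with Gaussian noise, whose MAP estimate is soft thresholding with threshold sigma^2/b. The paper's generalized-Gamma-regularized variance estimation is replaced here by the true prior scale, so this illustrates only the form of the estimator, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse heavy-tailed "wavelet coefficients": 10% Laplacian spikes, 90% zeros.
n = 10_000
clean = rng.laplace(scale=1.0, size=n) * (rng.random(n) < 0.1)
sigma = 0.5                                    # known Gaussian noise level
noisy = clean + rng.normal(0, sigma, n)

def map_laplace(y, sigma, b):
    """MAP estimate under a Laplacian(b) prior with Gaussian noise:
    the classical soft-thresholding rule, threshold sigma^2 / b."""
    thr = sigma**2 / b
    return np.sign(y) * np.maximum(np.abs(y) - thr, 0.0)

# In the paper, the scale would come from a MAP estimate of the local
# variance; here it is simply set to the scale of the nonzero spikes.
denoised = map_laplace(noisy, sigma, b=1.0)

print(np.mean((noisy - clean) ** 2))     # MSE before shrinkage (~sigma^2)
print(np.mean((denoised - clean) ** 2))  # smaller after MAP shrinkage
```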
Rezaei Niya, S. M.; Selvadurai, A. P. S.
2017-03-01
The paper presents an approach for estimating the permeability of a porous medium that is based on the characteristics of the porous structure. The pressure drop in different fluid flow passages is estimated and these are combined to evaluate the overall reduction. The theory employed is presented and the level of accuracy for different cases is discussed. The successive steps in the solution algorithm are described. The accuracy and computational efficiency of the approach are compared with results obtained from a finite-element-based multiphysics formulation. It is shown that for a comparable accuracy, the computational efficiency of the approach can be two orders of magnitude faster. Finally, the model predictions are examined with conventional relationships that have been reported in the literature and are based on permeability-porosity relationships. It is shown that estimating the permeability of a porous medium using porosity can lead to an order of magnitude error and the expected permeability range in different porosities is presented using 10 000 random structures.
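A classical porosity-only relation of the kind the paper benchmarks against is the Kozeny-Carman equation; the sketch below shows how strongly it responds to porosity alone, consistent with the paper's caution that porosity-based estimates can be off by an order of magnitude. The constant c = 180 is a common packed-bed value, used here purely for illustration.

```python
def kozeny_carman(porosity, grain_diameter_m, c=180.0):
    """Kozeny-Carman permeability (m^2): k = d^2 * phi^3 / (c * (1 - phi)^2)."""
    phi = porosity
    return grain_diameter_m**2 * phi**3 / (c * (1.0 - phi) ** 2)

# A modest porosity range produces more than an order-of-magnitude
# spread in permeability for the same 100-micron grain size.
k_low = kozeny_carman(0.15, 1e-4)
k_high = kozeny_carman(0.35, 1e-4)
print(k_high / k_low)   # > 10x
```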
Zhang, Peng; Zhou, Ning; Abdollahi, Ali
2013-09-10
A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
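The Prony baseline the comparison uses can be sketched for a single noise-free damped mode (GSLMS itself works from the eigenvectors of the signal autocorrelation matrix, which is beyond this snippet; the signal parameters below are illustrative):

```python
import numpy as np

# Noise-free test signal: one damped power-system-style oscillation,
# x[k] = exp(-sig * k * dt) * cos(2 * pi * f * k * dt).
dt, f, sig = 0.05, 0.8, 0.1
k = np.arange(200)
x = np.exp(-sig * k * dt) * np.cos(2 * np.pi * f * k * dt)

# Prony: fit the linear prediction x[k] = a1*x[k-1] + a2*x[k-2], then read
# damping and frequency off the roots of the characteristic polynomial.
A = np.column_stack([x[1:-1], x[:-2]])
a = np.linalg.lstsq(A, x[2:], rcond=None)[0]
roots = np.roots([1.0, -a[0], -a[1]])
z = roots[np.argmax(roots.imag)]           # root in the upper half-plane
damping = -np.log(np.abs(z)) / dt          # recovers sig = 0.1
freq = np.angle(z) / (2 * np.pi * dt)      # recovers f = 0.8 Hz
print(damping, freq)
```

On noisy data this least-squares step degrades quickly, which is precisely the weakness the subspace approach is designed to address.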
Giesen, P.H.J.; Ferwerda, R.; Tijssen, R.; Mokkink, H.G.A.; Drijver, R.; Bosch, W.J.H.M. van den; Grol, R.P.T.M.
2007-01-01
BACKGROUND: In recent years, there has been a growth in the use of triage nurses to decrease general practitioner (GP) workloads and increase the efficiency of telephone triage. The actual safety of decisions made by triage nurses has not yet been assessed. OBJECTIVES: To investigate whether triage
Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2009-01-01
This paper introduces a generalization of the regression-discontinuity design (RDD). Traditionally, RDD is considered in a two-dimensional framework, with a single assignment variable and cutoff. Treatment effects are measured at a single location along the assignment variable. However, this represents a specialized (and straight-forward)…
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association, such as fractional polynomials and using a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for highly serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
Rutimann, Hans; Lynn, M. Stuart
The Archivo General de Indias is operating a massive project to preserve and make accessible the contents of the 45 million documents and 7,000 maps and blueprints comprising the written heritage of Spain's 400 years in power in the Americas. The current objective is to scan about 10 percent of the archive (or about 8 million images) in…
Wiley, Michael J.; Wilcox, Douglas A.
2016-01-01
The use of diurnal water-table fluctuation methods to calculate evapotranspiration (ET) and groundwater flow is of increasing interest in ecohydrological studies. Most studies of this type, however, have been located in riparian wetlands of semi-arid regions where groundwater levels are consistently below topographic surface elevations and precipitation events are infrequent. Current methodologies preclude application to a wider variety of wetland systems. In this study, we extended a method for estimating sub-daily ET and groundwater flow rates from water-level fluctuations to fit highly dynamic, non-riparian wetland scenarios. Modifications included (1) varying the specific yield to account for periodic flooded conditions and (2) relating empirically derived ET to estimated potential ET for days when precipitation events masked the diurnal signal. To demonstrate the utility of this method, we estimated ET and groundwater fluxes over two growing seasons (2006–2007) in 15 wetlands within a ridge-and-swale wetland complex of the Laurentian Great Lakes under flooded and non-flooded conditions. Mean daily ET rates for the sites ranged from 4.0 mm d−1 to 6.6 mm d−1. Shallow groundwater discharge rates resulting from evaporative demand ranged from 2.5 mm d−1 to 4.3 mm d−1. This study helps to expand our understanding of the evapotranspirative demand of plants under various hydrologic and climate conditions. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
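At daily resolution, the water-table fluctuation calculation that the study extends is White's (1932) formula ET = Sy (24 r - Δs). A sketch with purely illustrative numbers, not data from the study:

```python
# Daily ET from diurnal water-table fluctuations, White (1932) style.
specific_yield = 0.10     # Sy, dimensionless; the paper varies this when flooded
recovery_rate = 2.0       # r: night-time water-table recovery rate (mm/h),
                          # taken as the groundwater inflow rate
net_change_24h = -3.0     # delta_s: net water-table change over 24 h (mm),
                          # negative for a net drop

# White: ET = Sy * (24 * r - delta_s)
et_mm_per_day = specific_yield * (24 * recovery_rate - net_change_24h)
print(et_mm_per_day)      # 5.1 mm/day, inside the paper's 4.0-6.6 mm/d range
```

The study's modifications enter exactly here: `specific_yield` becomes state-dependent under flooded conditions, and on rainy days the empirical ET is replaced by a value tied to potential ET.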
General upper and lower tail estimates using Malliavin calculus and Stein's equations
Eden, Richard; Viens, Frederi
2010-01-01
Following a strategy recently developed by Ivan Nourdin and Giovanni Peccati, we provide a general technique to compare the tail of a given random variable to that of a reference distribution. This enables us to give concrete conditions to ensure upper and/or lower bounds on the random variable's tail of various power or exponential types. The Nourdin-Peccati strategy analyzes the relation between Stein's method and the Malliavin calculus, and is adapted to dealing with comparisons to the Gau...
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-30
Artifacts can result when reconstructing a dynamic image sequence from inconsistent single photon emission computed tomography (SPECT) projections acquired by a slowly rotating gantry. The artifacts can lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying volumes of interest on the images. To overcome these biases in conventional image based dynamic data analysis, we have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view. In previous work we developed computationally efficient methods for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions [1] and their statistical uncertainties [2] from dynamic SPECT projection data, using a spatial segmentation and temporal B-splines. In addition, we studied the bias that results from modeling various orders of temporal continuity and using various time samplings [1]. In the present work, we use the methods developed in [1, 2] and Monte Carlo simulations to study the effects of the temporal modeling on the statistical variability of the reconstructed distributions.
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterize real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with the payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved for optimality, using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
Pravir Dutt; Satyendra Tomar
2003-11-01
In this paper we show that the h-p spectral element method developed in [3,8,9] applies to elliptic problems in curvilinear polygons with mixed Neumann and Dirichlet boundary conditions, provided that the Babuska–Brezzi inf-sup conditions are satisfied. We establish basic stability estimates for a non-conforming h-p spectral element method which allows for simultaneous mesh refinement and variable polynomial degree. The spectral element functions are non-conforming if the boundary conditions are Dirichlet. For problems with mixed boundary conditions they are continuous only at the vertices of the elements. We obtain a stability estimate when the spectral element functions vanish at the vertices of the elements, which is needed for parallelizing the numerical scheme. Finally, we indicate how the mesh refinement strategy and choice of polynomial degree depend on the regularity of the coefficients of the differential operator, the smoothness of the sides of the polygon, and the regularity of the data, in order to obtain the maximum accuracy achievable.
Simon, Patrick
2016-01-01
In weak gravitational lensing, weighted quadrupole moments of the brightness profile in galaxy images are a common way to estimate gravitational shear. We employ general adaptive moments (GLAM) to study causes of shear bias on a fundamental level and for a practical definition of an image ellipticity. For GLAM, the ellipticity is identical to that of isophotes of elliptical images, and this ellipticity is always an unbiased estimator of reduced shear. Our theoretical framework reiterates that moment-based techniques are similar to a model-based approach in the sense that they fit an elliptical profile to the image to obtain weighted moments. As a result, moment-based estimates of ellipticities are prone to underfitting bias. The estimation is fundamentally limited mainly by pixellation which destroys information on the original, pre-seeing image. We give an optimized estimator for the pre-seeing GLAM ellipticity and its bias for noise-free images. To deal with images where pixel noise is prominent, we conside...
Electrical Design of the General Hospital of PLA Project
王漪; 涂路; 奚传栋
2013-01-01
This article briefly describes the characteristics of the strong-current electrical design of the General Hospital of PLA project from the perspectives of power supply reliability, electrical safety, and green building.
Scharfenberg, Janna; Schaper, Katharina; Krummenauer, Frank
2014-01-01
The German "Dr med" plays a specific role in doctoral thesis settings since students may start the underlying doctoral project during their studies at medical school. If a Medical Faculty principally encourages this approach, then it should support the students in performing the respective projects as efficiently as possible. Consequently, it must be ensured that students are able to implement and complete a doctoral project in parallel to their studies. As a characteristic efficiency feature of these "Dr med" initiatives, the proportion of doctoral projects successfully completed shortly after graduating from medical school is proposed and illustrated. The proposed characteristic can be estimated by the time period between the state examination (date of completion of the qualifying medical examination) and the doctoral examination. Completion of the doctoral project "during their medical studies" was then characterised by a doctoral examination no later than 12 months after the qualifying medical state examination. To illustrate the estimation and interpretation of this characteristic, it was retrospectively estimated on the basis of the full sample of all doctorates successfully completed between July 2009 and June 2012 at the Department of Human Medicine at the Faculty of Health of the University of Witten/Herdecke. During the period of investigation defined, a total number of 56 doctoral examinations were documented, 30 % of which were completed within 12 months after the qualifying medical state examination (95% confidence interval 19 to 44 %). The median duration between state and doctoral examination was 27 months. The proportion of doctoral projects completed parallel to the medical studies increased during the investigation period from 14 % in the first year (July 2009 till June 2010) to 40 % in the third year (July 2011 till June 2012). Only about a third of all "Dr med" projects at the Witten/Herdecke Faculty of Health were completed during or close to
Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models
YUE Li; CHEN Xiru
2004-01-01
Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as that specified in the LIL (law of the iterated logarithm) for iid partial sums and thus cannot be improved.
Low Complexity Sparse Bayesian Learning for Channel Estimation Using Generalized Mean Field
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2014-01-01
We derive low complexity versions of a wide range of algorithms for sparse Bayesian learning (SBL) in underdetermined linear systems. The proposed algorithms are obtained by applying the generalized mean field (GMF) inference framework to a generic SBL probabilistic model. In the GMF framework, we constrain the auxiliary function approximating the posterior probability density function of the unknown variables to factorize over disjoint groups of contiguous entries in the sparse vector; the size of these groups dictates the degree of complexity reduction. The original high-complexity algorithms...
Anderson, D Mark; Elsea, David
2015-12-01
In this note, we use data from the national and state Youth Risk Behavior Surveys for the period 1999 through 2011 to estimate the relationship between the Meth Project, an anti-methamphetamine advertising campaign, and meth use among high school students. During this period, a total of eight states adopted anti-meth advertising campaigns. After accounting for pre-existing downward trends in meth use, we find little evidence that the campaign curbed meth use in the full sample. We do find, however, some evidence that the Meth Project may have decreased meth use among White high school students.
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-09-04
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection.
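The affine projection update at the core of such filters can be illustrated with the standard (single-power) APA; this is a generic system-identification sketch with made-up signals, not the multi-power MAPA or the MFL defect model of the study:

```python
import numpy as np

def affine_projection_filter(x, d, taps=8, order=4, mu=0.5, eps=1e-6):
    """Standard affine projection algorithm (APA): adapt FIR weights w so
    that the filtered input x tracks the desired signal d. The last
    `order` regressor vectors are reused in every update."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for k in range(taps + order, len(x)):
        # X: taps x order matrix whose columns are the recent regressors
        X = np.column_stack([x[k - j - taps + 1:k - j + 1][::-1]
                             for j in range(order)])
        e = np.array([d[k - j] for j in range(order)]) - X.T @ w
        # Project the error onto the span of the recent regressors
        w += mu * X @ np.linalg.solve(X.T @ X + eps * np.eye(order), e)
        y[k] = w @ x[k - taps + 1:k + 1][::-1]
    return w, y

# Identify a known 8-tap FIR channel from noiseless input/output data
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]
w, _ = affine_projection_filter(x, d)
```

Reusing several past regressors per update (the `order` parameter) is what gives APA-type filters their speed advantage over plain LMS in correlated-input settings such as scanning a sensor along a track.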
Huang, Whitney K.; Stein, Michael L.; McInerney, David J.; Sun, Shanshan; Moyer, Elisabeth J.
2016-07-01
Changes in extreme weather may produce some of the largest societal impacts of anthropogenic climate change. However, it is intrinsically difficult to estimate changes in extreme events from the short observational record. In this work we use millennial runs from the Community Climate System Model version 3 (CCSM3) in equilibrated pre-industrial and possible future (700 and 1400 ppm CO2) conditions to examine both how extremes change in this model and how well these changes can be estimated as a function of run length. We estimate changes to distributions of future temperature extremes (annual minima and annual maxima) in the contiguous United States by fitting generalized extreme value (GEV) distributions. Using 1000-year pre-industrial and future time series, we show that warm extremes largely change in accordance with mean shifts in the distribution of summertime temperatures. Cold extremes warm more than mean shifts in the distribution of wintertime temperatures, but changes in GEV location parameters are generally well explained by the combination of mean shifts and reduced wintertime temperature variability. For cold extremes at inland locations, return levels at long recurrence intervals show additional effects related to changes in the spread and shape of GEV distributions. We then examine uncertainties that result from using shorter model runs. In theory, the GEV distribution can allow prediction of infrequent events using time series shorter than the recurrence interval of those events. To investigate how well this approach works in practice, we estimate 20-, 50-, and 100-year extreme events using segments of varying lengths. We find that even using GEV distributions, time series of comparable or shorter length than the return period of interest can lead to very poor estimates. These results suggest caution when attempting to use short observational time series or model runs to infer infrequent extremes.
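The GEV fitting step described above can be sketched with SciPy; the "annual maxima" below are synthetic draws from an assumed GEV, not CCSM3 output, and all parameter values are illustrative:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum "temperatures", one value per year
rng = np.random.default_rng(42)
annual_max = genextreme.rvs(c=-0.1, loc=35.0, scale=2.0, size=1000,
                            random_state=rng)

# Fit a GEV by maximum likelihood (note: SciPy's shape c is the
# negative of the conventional GEV shape parameter xi)
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)

def return_level(years, c, loc, scale):
    """Level exceeded on average once every `years` blocks."""
    return genextreme.ppf(1.0 - 1.0 / years, c, loc=loc, scale=scale)

rl_100 = return_level(100, c_hat, loc_hat, scale_hat)
```

As the abstract cautions, the same `fit` call on a series much shorter than the target return period will produce a formally valid but potentially very poor `rl_100`.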
Toh, K.C.; Trefethen, L.N. [Cornell Univ., Ithaca, NY (United States)
1994-12-31
What properties of a nonsymmetric matrix A determine the convergence rate of iterations such as GMRES, QMR, and Arnoldi? If A is far from normal, should one replace the usual "Ritz values → eigenvalues" notion of convergence of Arnoldi by alternative notions such as "Arnoldi lemniscates → pseudospectra"? Since Krylov subspace iterations can be interpreted as minimization processes involving polynomials of matrices, the answers to questions such as these depend upon mathematical problems of the following kind. Given a polynomial p(z), how can one bound the norm of p(A) in terms of (1) the size of p(z) on various sets in the complex plane, and (2) the locations of the spectrum and pseudospectra of A? This talk reports some progress towards solving these problems. In particular, the authors present theorems that generalize the Kreiss matrix theorem from the unit disk (for the monomial A^n) to a class of general complex domains (for polynomials p(A)).
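The phenomenon motivating these questions, that eigenvalue locations alone fail to control the norm of p(A) when A is far from normal, can be seen on a small defective matrix (the matrix and exponent below are arbitrary choices for illustration):

```python
import numpy as np

# A defective, highly nonnormal matrix: every eigenvalue equals 0.5,
# but large superdiagonal entries produce enormous transient growth
A = np.diag(np.full(6, 0.5)) + np.diag(np.full(5, 10.0), k=1)

n = 8
norm_An = np.linalg.norm(np.linalg.matrix_power(A, n), 2)  # true ||A^n||
spectral_bound = 0.5 ** n  # what the spectrum alone would suggest

print(norm_An, spectral_bound)
```

For a normal matrix the two numbers would agree; here `norm_An` exceeds `spectral_bound` by many orders of magnitude, which is exactly why pseudospectra-based bounds of the Kreiss type are needed.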
The use of cluster analysis techniques in spaceflight project cost risk estimation
Fox, G.; Ebbeler, D.; Jorgensen, E.
2003-01-01
Project cost risk is the uncertainty in final project cost, contingent on initial budget, requirements and schedule. For a proposed mission, a dynamic simulation model relying for some of its input on a simple risk elicitation is used to identify and quantify systemic cost risk.
Carbon accounting and cost estimation in forestry projects using CO2Fix V.3
Groen, T.A.; Nabuurs, G.J.; Schelhaas, M.J.
2006-01-01
Carbon and financial accounting of projects in the Land Use, Land-Use Change and Forestry sector is a topic of hot debate. Large uncertainty remains concerning the carbon dynamics, the way they should be accounted and the cost efficiency of the projects. Part of the uncertainty can be alleviated by
A generalized public goods game with coupling of individual ability and project benefit
Zhong, Li-Xin; Xu, Wen-Juan; He, Yun-Xin; Zhong, Chen-Yang; Chen, Rong-Da; Qiu, Tian; Shi, Yong-Dong; Ren, Fei
2017-08-01
Facing a heavy task, any single person can only make a limited contribution and team cooperation is needed. As one enjoys the benefit of the public goods, the potential benefits of the project are not always maximized and may be partly wasted. By incorporating individual ability and project benefit into the original public goods game, we study the coupling effect of the four parameters, the upper limit of individual contribution, the upper limit of individual benefit, the needed project cost and the upper limit of project benefit on the evolution of cooperation. Coevolving with the individual-level group size preferences, an increase in the upper limit of individual benefit promotes cooperation while an increase in the upper limit of individual contribution inhibits cooperation. The coupling of the upper limit of individual contribution and the needed project cost determines the critical point of the upper limit of project benefit, where the equilibrium frequency of cooperators reaches its highest level. Above the critical point, an increase in the upper limit of project benefit inhibits cooperation. The evolution of cooperation is closely related to the preferred group-size distribution. A functional relation between the frequency of cooperators and the dominant group size is found.
Cost benchmarking of railway projects in Europe – dealing with uncertainties in cost estimates
Trabo, Inara
Past experiences in the construction of high-speed railway projects demonstrate either positive or negative financial outcomes of the actual project's budget. Usually some uncertainty value is included in initial budget calculations; uncertainty is related to increases in material prices... Among transport infrastructure projects, 9 projects out of 10 came out with budget overruns. An example of cost overruns is High Speed 1 in the UK, the railway line between London and the British end of the Channel Tunnel: the project was delayed for 11 months and final construction costs escalated to 80... Italian projects, by contrast, have productive experience in constructing and operating high-speed railway lines. The case study for this research is the first Danish high-speed railway line, "The New Line Copenhagen-Ringsted". The project's aim is to avoid cost overruns and even lower the final budget outcome...
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is one of the greatest interests in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by the ground topography variations. Most of the previous studies in modeling microwave backscattering signatures of forest area have been carried out over relatively flat areas. Therefore, a new algorithm for the forest height estimation from mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR image is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit for the cross-polarization and off-diagonal term by separating their independent orientation angle, which remains unachieved in the previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan areas, Indonesia. Experimental results indicate that forest height could be effectively estimated by GMBD.
Mahowald, Natalie [Cornell Univ., Ithaca, NY (United States)
2016-11-29
Soils in natural and managed ecosystems and wetlands are well known sources of methane, nitrous oxides, and reactive nitrogen gases, but the magnitudes of gas flux to the atmosphere are still poorly constrained. Thus, the reasons for the large increases in atmospheric concentrations of methane and nitrous oxide since the preindustrial time period are not well understood. The low atmospheric concentrations of methane and nitrous oxide, despite being more potent greenhouse gases than carbon dioxide, complicate empirical studies to provide explanations. In addition to climate concerns, the emissions of reactive nitrogen gases from soils are important to the changing nitrogen balance in the earth system, subject to human management, and may change substantially in the future. Thus improved modeling of the emission fluxes of these species from the land surface is important. Currently, there are emission modules for methane and some nitrogen species in the Community Earth System Model’s Community Land Model (CLM-ME/N); however, there are large uncertainties and problems in the simulations, resulting in coarse estimates. In this proposal, we seek to improve these emission modules by combining state-of-the-art process modules for emissions, available data, and new optimization methods. In earth science problems, we often have substantial data and knowledge of processes in disparate systems, and thus we need to combine data and a general process level understanding into a model for projections of future climate that are as accurate as possible. The best methodologies for optimization of parameters in earth system models are still being developed. In this proposal we will develop and apply surrogate algorithms that a) were especially developed for computationally expensive simulations like CLM-ME/N models; b) were (in the earlier surrogate optimization Stochastic RBF) demonstrated to perform very well on computationally expensive complex partial differential equations in
Evaluation of generalized degrees of freedom for sparse estimation by replica method
Sakata, A.
2016-12-01
We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
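For intuition, the RS-phase reading of the GDF as the effective number of non-zero components can be checked against the classical covariance (Stein) definition of degrees of freedom. The sketch below uses a soft-thresholded lasso with an orthogonal design and plain Monte Carlo, not the replica or belief-propagation machinery of the paper, and all problem sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma, alpha = 200, 50, 1.0, 0.1

# Orthogonal design (X^T X = n I), so the lasso with objective
# (1/2n)||y - Xb||^2 + alpha*||b||_1 has a closed soft-threshold form
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = Q * np.sqrt(n)
beta = np.zeros(p)
beta[:5] = 0.5
mu = X @ beta

def lasso_fit_predict(y):
    z = X.T @ y / n
    b = np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)  # soft threshold
    return X @ b, np.count_nonzero(b)

reps = 400
ys = mu + sigma * rng.standard_normal((reps, n))
preds, nnz = zip(*(lasso_fit_predict(y) for y in ys))
preds = np.array(preds)

# Covariance (Stein) definition of generalized degrees of freedom:
# gdf = sum_i Cov(yhat_i, y_i) / sigma^2, estimated by Monte Carlo
gdf_cov = np.mean(np.sum((ys - mu) * (preds - preds.mean(0)),
                         axis=1)) / sigma**2
gdf_nonzero = np.mean(nnz)  # effective number of selected components
```

The two estimates agree closely, matching the interpretation that, for this class of sparse estimators, the GDF counts non-zero components.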
Conrad, Douglas; Lee, Rosanna; Milgrom, Peter; Huebner, Colleen
2009-06-01
Shifts in payment options for dental care over several decades have resulted in more dental expenditures being paid through health maintenance organizations (HMOs), preferred provider organizations (PPOs), and capitation arrangements. Patients' and employers' choices to participate in these arrangements is determined in part by dentists' willingness to participate in plans, and plan choices may be influenced by patient satisfaction, self-reported oral health, and/or quality or cost of care. This study examined determinants of dentists' decisions to accept capitation payment for services. Cross-sectional mail survey in December 2006. 1605 general dentists in Oregon. Questions addressed dentists' perceptions of the importance of control over various practice parameters, willingness to accept capitation payment, employment or ownership status within the practice, and practice characteristics. Capitation was accepted by 22.6% of the respondent dentists (n = 729). Reported average fees (2007 dollars) ranged from $60 (initial oral examination) to approximately $800 (porcelain crowns). The likelihood of accepting capitation payment was related to the number of dentists in the practice, but surprisingly owner-dentists were no less likely than employee-dentists (associates) to accept capitation. As expected, dentists' usual and customary fees were negatively associated with accepting capitation. In contrast, measures of dentists' importance of control were not related to decisions about capitation. Longer average appointment delays were related to acceptance of capitation, but the effects were small. Dentists' behavior regarding payment acceptance is generally consistent with microeconomic theory of provider behavior. Study findings should inform practitioners, plan managers, and researchers in examining dentist payment decisions.
Nengjun Yi
2011-12-01
Full Text Available Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/).
Yi, Nengjun; Liu, Nianjun; Zhi, Degui; Li, Jun
2011-01-01
Complex diseases and traits are likely influenced by many common and rare genetic variants and environmental factors. Detecting disease susceptibility variants is a challenging task, especially when their frequencies are low and/or their effects are small or moderate. We propose here a comprehensive hierarchical generalized linear model framework for simultaneously analyzing multiple groups of rare and common variants and relevant covariates. The proposed hierarchical generalized linear models introduce a group effect and a genetic score (i.e., a linear combination of main-effect predictors for genetic variants) for each group of variants, and jointly they estimate the group effects and the weights of the genetic scores. This framework includes various previous methods as special cases, and it can effectively deal with both risk and protective variants in a group and can simultaneously estimate the cumulative contribution of multiple variants and their relative importance. Our computational strategy is based on extending the standard procedure for fitting generalized linear models in the statistical software R to the proposed hierarchical models, leading to the development of stable and flexible tools. The methods are illustrated with sequence data in gene ANGPTL4 from the Dallas Heart Study. The performance of the proposed procedures is further assessed via simulation studies. The methods are implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). PMID:22144906
Asymptotic scaling properties and estimation of the generalized Hurst exponents in financial data
Buonocore, R. J.; Aste, T.; Di Matteo, T.
2017-04-01
We propose a method to measure the Hurst exponents of financial time series. The scaling of the absolute moments against the aggregation horizon of real financial processes, and of both uniscaling and multiscaling synthetic processes, converges asymptotically towards linearity in log-log scale. In light of this, we found it appropriate to modify the usual scaling equation via the introduction of a filter function. We devised a measurement procedure which takes into account the presence of the filter function without the need to estimate it directly. We verified that the method is unbiased within the errors by applying it to synthetic time series with known scaling properties. Finally we show an application to empirical financial time series where we fit the measured scaling exponents via a second or a fourth degree polynomial, which, because of theoretical constraints, have respectively only one and two degrees of freedom. We found that on our data set there is no clear preference between the second and fourth degree polynomials. Moreover, the study of the filter functions of each time series shows common patterns of convergence depending on the moment degree.
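The plain moment-scaling regression that such estimators build on (without the filter-function correction the paper introduces) can be sketched as follows, using Brownian motion as a uniscaling check with H(q) = 0.5 for every q:

```python
import numpy as np

def generalized_hurst(x, qs=(1, 2), taus=range(1, 20)):
    """Estimate H(q) from the scaling E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))
    via a log-log regression of the absolute moments on the lag tau."""
    x = np.asarray(x, dtype=float)
    taus = np.asarray(list(taus))
    H = {}
    for q in qs:
        m = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
        slope = np.polyfit(np.log(taus), np.log(m), 1)[0]
        H[q] = slope / q
    return H

# Brownian motion is uniscaling: all H(q) should be close to 0.5
rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(100_000))
H = generalized_hurst(bm)
```

A multiscaling process would instead show H(q) varying with q, which is the signature these financial-data studies look for.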
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N^2, where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
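The 1/N behaviour for pedigree designs can be illustrated with a toy simulation; the parent-offspring design, the assumed h² = 0.5, and the use of twice the single-parent regression slope as the estimator are simulation assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_h2(N, h2=0.5):
    """One parent-offspring regression estimate of heritability:
    h2_hat = 2 * slope of offspring phenotype on one parent's phenotype
    (phenotypic variance standardized to 1)."""
    a_par = rng.normal(0.0, np.sqrt(h2), N)              # parent breeding values
    p_par = a_par + rng.normal(0.0, np.sqrt(1 - h2), N)  # parent phenotypes
    # Offspring inherit half the parent's breeding value plus
    # Mendelian sampling (and the unknown other parent's half)
    a_off = 0.5 * a_par + rng.normal(0.0, np.sqrt(0.75 * h2), N)
    p_off = a_off + rng.normal(0.0, np.sqrt(1 - h2), N)
    slope = np.cov(p_par, p_off)[0, 1] / np.var(p_par, ddof=1)
    return 2.0 * slope

# Empirical sampling SD of the estimate shrinks like 1/sqrt(N)
ests = {N: [estimate_h2(N) for _ in range(500)] for N in (500, 2000)}
```

Quadrupling N should roughly halve the sampling standard deviation, consistent with a sampling variance proportional to 1/N for this design.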
Enhanced Path Planning, Guidance, and Estimation Algorithms for NASA's GMAT Project
National Aeronautics and Space Administration — Advanced trajectory design and estimation capabilities in complex nonlinear dynamical regimes represent two of the greatest technical challenges of modern space...
Dilip C Nath
2011-07-01
Full Text Available The Quasi-Least Squares (QLS) method is useful for different correlation structures in conjunction with Generalized Estimating Equations (GEE). The purpose of this work is to compare the regression parameters under different correlation structures estimated by the GEE and QLS methods. The comparison of estimated regression parameters was performed on a clinical trial data set studying the effect of drug treatment (metformin with pioglitazone vs. gliclazide with pioglitazone) in type 2 diabetes patients. In the case of QLS, the correlation coefficient of postprandial blood sugar (PPBS) under the tridiagonal correlation structure is 0.008, which GEE failed to produce. It was found that the combination of metformin with pioglitazone is more effective than the combination of gliclazide with pioglitazone.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2011-11-01
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009 ), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
The QOL-DASS Model to Estimate Overall Quality of Life and General Health
Mehrdad Mazaheri
2011-01-01
Full Text Available Objective: In order to determine how ratings on the WHOQOL-BREF and DASS scales combine to produce an overall measure of quality of life and satisfaction with health, a QOL-DASS model was designed, and the strength of this hypothesized model was examined using structural equation modeling. Method: Participants included a sample of 103 voluntary males who were divided into two groups: unhealthy (N=55) and healthy (N=48). To assess satisfaction and the negative emotions of depression, anxiety and stress among the participants, they were asked to fill out the WHOQOL-BREF and the Depression Anxiety Stress Scale (DASS-42). Results: Running the hypothesized QOL-DASS model indicated that the proposed model fitted the data well for both the healthy and unhealthy groups. Conclusion: Our findings with CFA in evaluating the hypothesized QOL-DASS model indicated that the different satisfaction domain ratings and the negative emotions of depression, anxiety and stress, as observed variables, can represent the underlying constructs of general health and quality of life in both healthy and unhealthy groups.
Blazer, Christie; Froman, Terry; Romanik, Dale
2009-01-01
The 2010-11 projected enrollment offered by Research Services represents a small increase in student enrollment. The District's student enrollment is projected to be 341,324 in 2010-11, an increase of 0.3 percent (1,077 students) from 2009-10. A slight increase in the District's 2009-10 student enrollment reversed a seven year decline. (Contains 3…
Tenza, E; Valero, R; Arraez, V
2017-04-01
To evaluate the number and characteristics of potential organ donors among cardiocirculatory death cases. A retrospective observational study was made of individuals between 15-65 years of age who died in the period 2006-2014 in Elche University General Hospital (Alicante, Spain). A univariate analysis and binary logistic regression predictive model were performed to discriminate factors related to donation contraindication. Identification of patients with donation contraindication. Of the 1510 patients who died in the mentioned period, 1048 were excluded due to the application of exclusion criteria; 86 due to evolution towards brain death; and 20 due to losses. A total of 356 patients were analyzed, divided into two groups: 288 in non-heart beating donation II and 68 in non-heart beating donation III. Seventy patients were found to be potential non-heart beating donation II and 10 were found to be potential non-heart beating donation III, which could increase donation activity by 8-9 donors a year. The patients died in the ICU, Resuscitation, Emergency Care, Internal Medicine, Digestive Diseases and Neurology. The following protective factors against organ donation contraindication were identified: death in Emergency Care, cardiorespiratory arrest before or during admission, and heart, respiratory and neurological disease as the cause of admission. Death in Internal Medicine was associated to an increased risk of donation contraindication. Implementing a non-heart beating donation protocol in our hospital could increase the donation potential by 8-9 donors a year. Copyright © 2016 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Dai Hao; Jia Li-Xin; Zhang Yan-Bin
2012-01-01
The adaptive generalized matrix projective lag synchronization between two different complex networks with non-identical nodes and different dimensions is investigated in this paper. Based on Lyapunov stability theory and Barbalat's lemma, generalized matrix projective lag synchronization criteria are derived by using the adaptive control method. Furthermore, each network can be undirected or directed, connected or disconnected, and nodes in either network may have identical or different dynamics. The proposed strategy is applicable to almost all kinds of complex networks. In addition, numerical simulation results are presented to illustrate the effectiveness of this method, showing that the synchronization speed is sensitively influenced by the adaptive law strength, the network size, and the network topological structure.
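The scalar-factor special case of projective synchronization can be sketched in a few lines. This toy drives a response system toward α times the drive state using plain proportional feedback; the drive matrix, gain, and initial states are assumptions, and the paper's matrix-projective lag case for whole networks with adaptive laws is not reproduced here.

```python
import numpy as np

# Minimal sketch of projective synchronization: the response state y is driven
# toward alpha * x (the scaled drive state) by proportional feedback. The error
# e = y - alpha*x then obeys e' = (A - k*I) e, which decays for large enough k.

def simulate(alpha=2.0, k=5.0, dt=1e-3, steps=5000):
    A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # assumed drive/response dynamics
    x = np.array([1.0, 0.0])                   # drive state
    y = np.array([-1.0, 2.0])                  # response state
    for _ in range(steps):
        e = y - alpha * x                      # projective error
        u = -k * e                             # proportional control input
        x = x + dt * (A @ x)                   # Euler step, drive
        y = y + dt * (A @ y + u)               # Euler step, response
    return np.linalg.norm(y - alpha * x)

print(simulate())   # residual error norm after 5 s of integration
```

With this gain the error contracts at roughly e^(-5.25 t), so the residual after five simulated seconds is effectively zero.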
White, J Wilson; Nickols, Kerry J; Malone, Daniel; Carr, Mark H; Starr, Richard M; Cordoleani, Flora; Baskett, Marissa L; Hastings, Alan; Botsford, Louis W
2016-12-01
Integral projection models (IPMs) have a number of advantages over matrix-model approaches for analyzing size-structured population dynamics, because the latter require parameter estimates for each age or stage transition. However, IPMs still require appropriate data. Typically they are parameterized using individual-scale relationships between body size and demographic rates, but these are not always available. We present an alternative approach for estimating demographic parameters from time series of size-structured survey data using a Bayesian state-space IPM (SSIPM). By fitting an IPM in a state-space framework, we estimate unknown parameters and explicitly account for process and measurement error in a dataset to estimate the underlying process model dynamics. We tested our method by fitting SSIPMs to simulated data; the model fit the simulated size distributions well and estimated unknown demographic parameters accurately. We then illustrated our method using nine years of annual surveys of the density and size distribution of two fish species (blue rockfish, Sebastes mystinus, and gopher rockfish, S. carnatus) at seven kelp forest sites in California. The SSIPM produced reasonable fits to the data, and estimated fishing rates for both species that were higher than our Bayesian prior estimates based on coast-wide stock assessment estimates of harvest. That improvement reinforces the value of being able to estimate demographic parameters from local-scale monitoring data. We highlight a number of key decision points in SSIPM development (e.g., open vs. closed demography, number of particles in the state-space filter) so that users can apply the method to their own datasets. © 2016 by the Ecological Society of America.
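The deterministic core of an IPM, the kernel projection step, can be sketched as follows. The survival and growth functions and every parameter value below are illustrative assumptions, not the rockfish demography estimated in the paper, and the state-space (process/measurement error) layer is not reproduced.

```python
import numpy as np

# Sketch of an integral projection model step: n_{t+1}(y) = ∫ K(y, x) n_t(x) dx,
# with K(y, x) = survival(x) * growth(y | x), discretized on a size grid.

def build_kernel(sizes, surv_max=0.8, growth_sd=5.0):
    dx = sizes[1] - sizes[0]
    surv = surv_max / (1.0 + np.exp(-(sizes - 20.0) / 5.0))   # size-dependent survival
    mean_growth = sizes + 0.3 * (60.0 - sizes)                # growth toward 60 cm
    # growth(y | x): rows are next size y, columns are current size x
    G = np.exp(-0.5 * ((sizes[:, None] - mean_growth[None, :]) / growth_sd) ** 2)
    G /= G.sum(axis=0, keepdims=True) * dx                    # normalize over y
    return G * surv[None, :] * dx                             # discretized kernel

sizes = np.linspace(1.0, 80.0, 200)
K = build_kernel(sizes)
n = np.exp(-0.5 * ((sizes - 10.0) / 3.0) ** 2)                # initial size distribution
for _ in range(10):
    n = K @ n                                                 # one annual projection
print(n.sum())   # total (relative) abundance after 10 years
```

A state-space version would treat `K`'s parameters as unknowns and compare simulated size distributions against survey data through an observation model.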
Dettmers, Dana Lee; Eide, Steven Arvid
2002-10-01
An analysis of completed decommissioning projects is used to construct predictive estimates for worker exposure to radioactivity during decommissioning activities. The preferred organizational method for the completed decommissioning project data is to divide the data by type of facility, whether decommissioning was performed on part of the facility or the complete facility, and the level of radiation within the facility prior to decommissioning (low, medium, or high). Additional data analysis shows that there is not a downward trend in worker exposure data over time. Also, the use of a standard estimate for worker exposure to radioactivity may be a best estimate for low complete storage, high partial storage, and medium reactor facilities; a conservative estimate for some low level of facility radiation facilities (reactor complete, research complete, pits/ponds, other), medium partial process facilities, and high complete research facilities; and an underestimate for the remaining facilities. Limited data are available to compare different decommissioning alternatives, so the available data are reported and no conclusions can been drawn. It is recommended that all DOE sites and the NRC use a similar method to document worker hours, worker exposure to radiation (person-rem), and standard industrial accidents, injuries, and deaths for all completed decommissioning activities.
Zheng, Yong-Ai
2012-02-01
Time-delay Takagi-Sugeno fuzzy drive-response dynamical networks (TD-TSFDRDNs) are defined by extending the drive-response dynamical networks. Based on the LaSalle invariant principle, a simple and systematic adaptive control scheme is proposed to synchronize the TD-TSFDRDNs with a desired scalar factor. A sufficient condition for the generalized projective synchronization in TD-TSFDRDNs is derived. Moreover, numerical simulations are provided to verify the correctness and effectiveness of the scheme.
Wen, Dao-Jun
2013-01-01
In this paper, a Meir-Keeler contraction is introduced to propose a viscosity-projection approximation method for finding a common element of the set of solutions of a family of general equilibrium problems and the set of fixed points of asymptotically strict pseudocontractions in the intermediate sense. Strong convergence of the viscosity iterative sequences is obtained under some suitable conditions. Results presented in this paper extend and unify the previously known results announced by many other authors.
Wang, Dun; Takeuchi, Nozomu; Kawakatsu, Hitoshi; Mori, Jim
2017-04-01
With the recent establishment of regional dense seismic arrays (e.g., Hi-net in Japan, USArray in North America), advanced digital data processing has enabled improvement of back-projection methods that have become popular and are widely used to track the rupture process of moderate to large earthquakes. Back-projection methods can be classified into two groups: one using time-domain analyses and the other frequency-domain analyses. There are minor technical differences within each group. Here we focus on back-projection performed in the time domain using seismic waveforms recorded at teleseismic distances (30-90 degrees). For the standard back-projection (Ishii et al., 2005), teleseismic P waves that are recorded on vertical components of a dense seismic array are analyzed. Since seismic arrays have limited resolutions and we make several assumptions (e.g., that the observed waveforms contain only direct P waves, and that every trace has an identical waveform), the final images from back-projections show the stacked amplitudes (or correlation coefficients) that are often smeared in both time and space domains. Although it might not be difficult to reveal overall source processes for a giant seismic source such as the 2004 Mw 9.0 Sumatra earthquake, where the source extent is about 1400 km (Ishii et al., 2005; Krüger and Ohrnberger, 2005), there are more problems in imaging detailed processes of earthquakes with smaller source dimensions, such as a M 7.5 earthquake with a source extent of 100-150 km. For smaller earthquakes, it is more difficult to resolve the spatial distribution of the radiated energy. We developed a new inversion method, Image Deconvolution Back-Projection (IDBP), to determine the sources of high-frequency energy radiation by linear inversion of observed images from a back-projection approach. The observed back-projection image for multiple sources is considered as a convolution of the image of the true radiated energy and the array response for a
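The shift-and-stack core of time-domain back-projection can be sketched on a 1-D toy problem: shift each station's waveform by the travel time predicted for a candidate source location, stack, and keep the location whose stack peaks highest. The geometry, wave speed, and Gaussian "P arrivals" below are illustrative assumptions, not teleseismic data.

```python
import numpy as np

# Toy time-domain back-projection on a 1-D line of stations.
fs, v = 100.0, 3.0                                    # sample rate (Hz), wave speed (km/s)
stations = np.array([0.0, 15.0, 40.0, 65.0, 90.0])    # station positions (km)
true_src, t0 = 30.0, 2.0                              # source position (km), origin time (s)

t = np.arange(0.0, 40.0, 1.0 / fs)
traces = []
for s in stations:
    arrival = t0 + abs(s - true_src) / v              # predicted direct arrival
    traces.append(np.exp(-0.5 * ((t - arrival) / 0.05) ** 2))   # Gaussian "P pulse"

candidates = np.arange(0.0, 100.0, 1.0)               # candidate source grid
stack = np.zeros(len(candidates))
for j, c in enumerate(candidates):
    summed = np.zeros_like(t)
    for s, tr in zip(stations, traces):
        delay = abs(s - c) / v
        summed += np.interp(t + delay, t, tr)         # undo the predicted delay
    stack[j] = summed.max()                           # best stack over origin time

best = candidates[np.argmax(stack)]
print(best)   # recovered source position (km)
```

When the candidate location is correct, all shifted pulses align at the origin time and the stack is maximal; elsewhere the pulses smear, which is the same smearing the abstract describes for real arrays.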
Larose, Tricia L; Adak, Goutam K; Evans, Meirion R; Tam, Clarence C
2016-01-01
Objective To generate estimates of the burden of UK-acquired foodborne disease accounting for uncertainty. Design A modelling study combining data from national public health surveillance systems for laboratory-confirmed infectious intestinal disease (IID) and outbreaks of foodborne disease and 2 prospective, population-based studies of IID in the community. The underlying data sets covered the time period 1993–2008. We used Monte Carlo simulation and a Bayesian approach, using a systematic review to generate Bayesian priors. We calculated point estimates with 95% credible intervals (CrI). Setting UK, 2009. Outcome measures Pathogen-specific estimates of the number of cases, general practice (GP) consultations and hospitalisations for foodborne disease in the UK in 2009. Results Bayesian approaches gave slightly more conservative estimates of overall health burden (∼511 000 cases vs 566 000 cases). Campylobacter is the most common foodborne pathogen, causing 280 400 (95% CrI 182 503–435 693) food-related cases and 38 860 (95% CrI 27 160–55 610) GP consultations annually. Despite this, there are only around 562 (95% CrI 189–1330) food-related hospital admissions due to Campylobacter, reflecting relatively low disease severity. Salmonella causes the largest number of hospitalisations, an estimated 2490 admissions (95% CrI 607–9631), closely followed by Escherichia coli O157 with 2233 admissions (95% CrI 170–32 159). Other common causes of foodborne disease include Clostridium perfringens, with an estimated 79 570 cases annually (95% CrI 30 700–211 298) and norovirus with 74 100 cases (95% CrI 61 150–89 660). Other viruses and protozoa ranked much lower as causes of foodborne disease. Conclusions The 3 models yielded similar estimates of the burden of foodborne illness in the UK and show that continued reductions in Campylobacter, Salmonella, E. coli O157, C. perfringens and norovirus are needed to mitigate the impact of
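The Monte Carlo step behind burden estimates with 95% credible intervals can be sketched as drawing uncertain inputs, combining them, and reading off percentiles. The distributions and every parameter value below are illustrative assumptions, not the UK IID inputs used in the study.

```python
import numpy as np

# Sketch: propagate input uncertainty to a burden estimate with a 95% CrI.
rng = np.random.default_rng(0)
n = 100_000
reported_cases = 10_000                                  # lab-confirmed count (assumed)
# community-to-laboratory underascertainment ratio (assumed lognormal)
underascertainment = rng.lognormal(mean=np.log(28), sigma=0.2, size=n)
# fraction of illness that is food-related (assumed beta prior)
foodborne_frac = rng.beta(a=40, b=60, size=n)

total = reported_cases * underascertainment * foodborne_frac
point = np.median(total)
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"{point:,.0f} cases (95% CrI {lo:,.0f}-{hi:,.0f})")
```

The study's Bayesian variant additionally updates such priors against surveillance data before simulating.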
Vitale, Salvatore
2016-07-01
With the discovery of the binary-black-hole (BBH) coalescence GW150914 the era of gravitational-wave (GW) astronomy has started. It has recently been shown that BBHs with masses comparable to or higher than GW150914 would be visible in the Evolved Laser Interferometer Space Antenna (eLISA) band a few years before they finally merge in the band of ground-based detectors. This would allow for premerger electromagnetic alerts, dramatically increasing the chances of a joint detection, if BBHs are indeed luminous in the electromagnetic band. In this Letter we explore a quite different aspect of multiband GW astronomy, and verify if, and to what extent, measurement of masses and sky position with eLISA could improve parameter estimation and tests of general relativity with ground-based detectors. We generate a catalog of 200 BBHs and find that having prior information from eLISA can reduce the uncertainty in the measurement of source distance and primary black hole spin by up to a factor of 2 in ground-based GW detectors. The component-mass estimates from eLISA will not be refined by the ground-based detectors, whereas joint analysis will yield precise characterization of the newly formed black hole and improve consistency tests of general relativity.
Oyarzabal, Julen; Pastor, Joaquin; Howe, Trevor J
2009-12-01
The quality of in vitro data used to build in silico absorption, distribution, metabolism, and toxicity (ADMET) models is, in many cases, inconsistent. The paucity of data from single laboratory sources has led to the mixing of data sets with varying experimental conditions and to the coverage of restricted chemical space in models which are purported to be of general applicability. In order to overcome these shortcomings, a method, Metropolis/Monte Carlo adaptive ranking simulation (MARS) has been developed. This aims to estimate "optimal flexible threshold points" in order to achieve better correlation between any in silico ADMET model and any discrete qualitative experimental data. The MARS method covers three key factors: the predictive model, the experimental procedure for the assay, and the chemical series or scaffold. When large and general solubility data sets (>650 compounds) are analyzed against commercially available in silico models, using MARS, an improvement in kappa statistics up to 16.2% is obtained. When particular chemical series are addressed, improvements up to 46.0% are seen on kappa statistics. This coefficient then allows an investigation into the effectiveness of a classifier by assessing the improvement over chance. These improvements in ranking estimations allow more predictive decision-making for virtual libraries.
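Cohen's kappa, the agreement statistic that MARS improves by moving classification thresholds, is simple to compute for a 2x2 confusion matrix. The counts below are made up for illustration; the Metropolis/Monte Carlo threshold search itself is not reproduced.

```python
# Cohen's kappa: agreement between predicted and observed classes, corrected
# for the agreement expected by chance from the marginal totals.

def cohens_kappa(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

print(cohens_kappa(tp=40, fp=10, fn=5, tn=45))
```

A threshold search like MARS would sweep the classifier's cutoff and keep the value of kappa-maximizing threshold, rather than the fixed default.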
Pozdniakov, Sergey P.; Wang, Ping; Lekhov, Mikhail V.
2016-09-01
The well-known Hvorslev (1951) formula was developed to estimate soil permeability using single-well slug tests and has been widely applied to determine riverbed hydraulic conductivity using in situ standpipe permeameter tests. Here, we further develop a general solution of the Hvorslev (1951) formula that accounts for flow in a bounded medium and assumes that the bottom of the river is a prescribed head boundary. The superposition of real and imaginary disk sources is used to obtain a semi-analytical expression of the total hydraulic resistance of the flow in and out of the pipe. As a result, we obtained a simple semi-analytical expression for the resistance, which represents a generalization of the Hvorslev (1951) formula. The obtained expression is benchmarked against a finite-element numerical model of 2-D flow (in r-z coordinates) in an anisotropic medium. The results exhibit good agreement between the simulated and estimated riverbed hydraulic conductivity values. Furthermore, a set of simulations for layered, stochastically heterogeneous riverbed sediments was conducted and processed using the proposed expression to demonstrate the potential associated with measuring vertical heterogeneity in bottom sediments using a series of standpipe permeameter tests with different lengths of pipe inserted into the riverbed sediments.
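The classic Hvorslev (1951) slug-test estimate that the paper generalizes has a compact closed form. The field values below are assumptions for illustration; the bounded-medium, prescribed-head correction derived in the paper is not reproduced here.

```python
import math

# Classic Hvorslev (1951) slug-test estimate:
#   K = r^2 * ln(Le / R) / (2 * Le * T37)
# r   - standpipe (casing) radius
# R   - screen (intake) radius
# Le  - screen (intake) length
# T37 - basic time lag: time for the head to recover to 37% of its
#       initial displacement

def hvorslev_K(r, R, Le, T37):
    return r ** 2 * math.log(Le / R) / (2.0 * Le * T37)

# Assumed example values: 2.5 cm standpipe and screen radius, 0.5 m screen,
# 120 s basic time lag.
K = hvorslev_K(r=0.025, R=0.025, Le=0.5, T37=120.0)
print(K)   # hydraulic conductivity in m/s
```

A slower head recovery (larger `T37`) yields a proportionally lower conductivity estimate, which is the behavior a standpipe permeameter test exploits.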
National Aeronautics and Space Administration — Recent work on distributed multi-spacecraft systems has resulted in a number of architectures and algorithms for accurate estimation of spacecraft and formation...
National Aeronautics and Space Administration — The problem of estimating the aerodynamic models for flight control of damaged aircraft using an innovative differential vortex lattice method tightly coupled with...
National Aeronautics and Space Administration — Estimation of aerodynamic models for the control of damaged aircraft using an innovative differential vortex lattice method tightly coupled with an extended Kalman...
Aggregation-iterative analogues and generalizations of projection-iterative methods
Shuvar B.F.
2013-06-01
Full Text Available Aggregation-iterative algorithms for linear operator equations are constructed and investigated. These algorithms cover methods of iterative aggregation and projection-iterative methods. The convergence conditions require neither that the corresponding operator be of fixed sign nor that the spectral radius be less than one.
A Generalized SOC-OCV Model for Lithium-Ion Batteries and the SOC Estimation for LNMCO Battery
Caiping Zhang
2016-11-01
Full Text Available A state-of-charge (SOC) versus open-circuit-voltage (OCV) model developed for batteries should preferably be simple, especially for real-time SOC estimation. It should also be capable of representing different types of lithium-ion batteries (LIBs), regardless of temperature change and battery degradation. It must therefore be generic, robust and adaptive, in addition to being accurate. These challenges have now been addressed by proposing a generalized SOC-OCV model for representing a few most widely used LIBs. The model is developed from analyzing electrochemical processes of the LIBs, before arriving at the sum of a logarithmic, a linear and an exponential function with six parameters. Values for these parameters are determined by a nonlinear estimation algorithm, which progressively shows that only four parameters need to be updated in real time. The remaining two parameters can be kept constant, regardless of temperature change and aging. Fitting errors demonstrated with different types of LIBs have been found to be within 0.5%. The proposed model is thus accurate, and can be flexibly applied to different LIBs, as verified by hardware-in-the-loop simulation designed for real-time SOC estimation.
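A model of the family described here, the sum of a logarithmic, a linear and an exponential function of SOC with six parameters, can be fitted by nonlinear least squares. The parameterization `k0..k5` and the synthetic data below are assumptions; the paper's exact functional form and battery measurements are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed six-parameter SOC-OCV form: logarithmic + linear + exponential terms.
def ocv_model(soc, k0, k1, k2, k3, k4, k5):
    return k0 + k1 * soc + k2 * np.log(soc + k3) + k4 * np.exp(k5 * soc)

soc = np.linspace(0.05, 1.0, 60)
true = (3.0, 0.3, 0.12, 0.02, 0.01, 3.0)
rng = np.random.default_rng(1)
ocv = ocv_model(soc, *true) + rng.normal(0.0, 1e-3, soc.size)   # synthetic OCV data

p0 = (3.0, 0.3, 0.1, 0.05, 0.01, 2.5)                           # start near the truth
popt, _ = curve_fit(ocv_model, soc, ocv, p0=p0, maxfev=20000)
rmse = np.sqrt(np.mean((ocv_model(soc, *popt) - ocv) ** 2))
print(rmse)   # fit error in volts
```

In a real-time estimator one would freeze the slowly varying parameters and update only the remaining ones, as the abstract describes.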
Pandit, J J; Andrade, J; Bogod, D G; Hitchman, J M; Jonker, W R; Lucas, N; Mackay, J H; Nimmo, A F; O'Connor, K; O'Sullivan, E P; Paul, R G; Palmer, J H M G; Plaat, F; Radcliffe, J J; Sury, M R J; Torevell, H E; Wang, M; Hainsworth, J; Cook, T M
2014-10-01
We present the main findings of the 5th National Audit Project (NAP5) on accidental awareness during general anaesthesia (AAGA). Incidences were estimated using reports of accidental awareness as the numerator, and a parallel national anaesthetic activity survey to provide denominator data. The incidence of certain/probable and possible accidental awareness cases was ~1:19,600 anaesthetics (95% confidence interval 1:16,700-23,450). However, there was considerable variation across subtypes of techniques or subspecialities. The incidence with neuromuscular block (NMB) was ~1:8200 (1:7030-9700), and without, it was ~1:135,900 (1:78,600-299,000). The cases of AAGA reported to NAP5 were overwhelmingly cases of unintended awareness during NMB. The incidence of accidental awareness during Caesarean section was ~1:670 (1:380-1300). Two-thirds (82, 66%) of cases of accidental awareness experiences arose in the dynamic phases of anaesthesia, namely induction of and emergence from anaesthesia. During induction of anaesthesia, contributory factors included: use of thiopental, rapid sequence induction, obesity, difficult airway management, NMB, and interruptions of anaesthetic delivery during movement from anaesthetic room to theatre. During emergence from anaesthesia, residual paralysis was perceived by patients as accidental awareness, and commonly related to a failure to ensure full return of motor capacity. One-third (43, 33%) of accidental awareness events arose during the maintenance phase of anaesthesia, mostly due to problems at induction or towards the end of anaesthesia. Factors increasing the risk of accidental awareness included: female sex, age (younger adults, but not children), obesity, anaesthetist seniority (junior trainees), previous awareness, out-of-hours operating, emergencies, type of surgery (obstetric, cardiac, thoracic), and use of NMB. The following factors were not risk factors for accidental awareness: ASA physical status, race, and use or omission
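The incidence arithmetic used here, awareness reports as numerator over surveyed anaesthetic activity as denominator, can be sketched with an approximate Poisson interval on the numerator. The counts below are illustrative values chosen to land near the quoted ~1:19,600; they are not the NAP5 figures.

```python
import math

# Incidence of a rare event from a report count and an activity denominator,
# with a crude 95% interval from Poisson counting error on the numerator.
def incidence(reports, activity):
    rate = reports / activity
    lo = (reports - 1.96 * math.sqrt(reports)) / activity
    hi = (reports + 1.96 * math.sqrt(reports)) / activity
    return rate, lo, hi

rate, lo, hi = incidence(reports=141, activity=2_766_600)   # assumed counts
print(f"~1:{1 / rate:,.0f} (95% CI 1:{1 / hi:,.0f} to 1:{1 / lo:,.0f})")
```

Note that the confidence limits invert: the upper rate bound gives the smaller "1 in N" figure.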
An approach to estimation of degree of customization for ERP projects using prioritized requirements
Parthasarathy, Sudhaman; Daneva, Maia
2016-01-01
Customization in ERP projects is a risky, but unavoidable undertaking that companies need to initiate in order to achieve alignment between their acquired ERP solution and their organizational goals and business processes. Conscious about the risks, many companies commit to leveraging the
Pearson, Walter H.; Williams, Greg D.; Skalski, John R.
2002-12-01
The studies reported here focus on issues regarding the entrainment of Dungeness crab related to the proposed Columbia River Channel Improvement Project and provided direct measurements of crab entrainment rates at three locations (Desdemona Shoals, Upper Sands, and Miller Sands) from RM4 to RM24 during summer 2002. Entrainment rates for all age classes of crabs ranged from zero at Miller Sands to 0.224 crabs per cy at Desdemona Shoals in June 2002. The overall entrainment rate at Desdemona Shoals in September was 0.120 crabs per cy. A modified Dredge Impact Model (DIM) used the summer 2002 entrainment rates to project crab entrainment and adult equivalent loss and loss to the fishery for the Channel Improvement Project. To improve the projections, entrainment data from Flavel Bar are needed. The literature, analyses of salinity intrusion scenarios, and the summer 2002 site-specific data on entrainment and salinity all indicate that bottom salinity influences crab distribution and entrainment, especially at lower salinities. It is now clear from field measurements of entrainment rates and salinity during a period of low river flow (90-150 Kcfs) and high salinity intrusion that entrainment rates are zero where bottom salinity is less than 16‰ most of the time. Further, entrainment rates of 2+ and older crab fall with decreasing salinity in a clear and consistent manner. More elaboration of the crab distribution-salinity model, especially concerning salinity and the movements of 1+ crab, is needed.
Smith, Richard D; Keogh-Brown, Marcus R; Barnett, Tony
2011-07-01
There is concern regarding the impact that a global infectious disease pandemic might have, especially the economic impact in the current financial climate. However, preparedness planning concentrates more upon population health and maintaining a functioning health sector than on the wider economic impact. We developed a single country Computable General Equilibrium model to estimate the economic impact of pandemic influenza (PI) and associated policies. While the context for this development was the United Kingdom, there are lessons to be drawn for application of this methodology, as well as indicative results, to other contexts. Disease scenarios were constructed from an epidemiological model which estimated case fatality rates (mild, moderate and severe) as 0.06%, 0.18% and 0.35%. A clinical attack rate of 35% was also used to produce influenza scenarios, together with preparedness policies, including antivirals and school closure, and the possible prophylactic absence of workers. U.K. cost estimates (in Sterling) are presented, together with relative percentage impacts applicable to similar large economies. Percentage/cost estimates suggest PI would reduce GDP by 0.3% (£ 3.5 bn), 0.4% (£ 5 bn) and 0.6% (£ 7.4 bn) respectively for the three disease scenarios. However, the impact of PI itself is smaller than disease mitigation policies: combining school closure with prophylactic absenteeism yields percentage/cost effects of 1.1% (£ 14.7 bn), 1.3% (£ 16.3 bn) and 1.4% (£ 18.5 bn) respectively for the three scenarios. Sensitivity analysis shows little variability with changes in disease parameters but notable changes with variations in school closure and prophylactic absenteeism. The most severe sensitivity scenario results in a 2.9% (£ 37.4 bn), 3.2% (£ 41.4 bn) and 3.7% (£ 47.5 bn) loss to GDP respectively for the three scenarios.
Fang, Xuekun; Hu, Xia; Janssens-Maenhout, Greet; Wu, Jing; Han, Jiarui; Su, Shenshen; Zhang, Jianbo; Hu, Jianxin
2013-04-16
Sulfur hexafluoride (SF6) is the most potent greenhouse gas regulated under the Kyoto Protocol, with a high global warming potential. In this study, SF6 emissions from China were inventoried for 1990-2010 and projected to 2020. Results reveal that the highest SF6 emission contribution originates from the electrical equipment sector (about 70%), followed by the magnesium production sector, the semiconductor manufacture sector and the SF6 production sector (each about 10%). Both agreements and discrepancies were found in comparisons of our estimates with previously published data. An accelerated growth rate was found for Chinese SF6 emissions during 1990-2010. Because the relative growth rate of SF6 emissions is estimated to be much higher than those of CO2, CH4, and N2O, SF6 will play an increasing role in greenhouse gas emissions in China. Global contributions from China increased rapidly from 0.9 ± 0.3% in 1990 to 22.8 ± 6.3% in 2008, making China one of the crucial contributors to the recent growth in global emissions. Under the examined Business-as-usual (BAU) Scenario, projected emissions will reach 4270 ± 1020 t in 2020, but a reduction of about 90% of the projected BAU emissions would be obtained under the Alternative Scenario.
Ring, Christoph; Pollinger, Felix; Kaspar-Ott, Irena; Hertig, Elke; Jacobeit, Jucundus; Paeth, Heiko
2017-04-01
The COMEPRO project (Comparison of Metrics for Probabilistic Climate Change Projections of Mediterranean Precipitation), funded by the Deutsche Forschungsgemeinschaft (DFG), is dedicated to the development of new evaluation metrics for state-of-the-art climate models. Further, we analyze implications for probabilistic projections of climate change. This study focuses on the results of 4-field matrix metrics. Here, six different approaches are compared. We evaluate 24 models of the Coupled Model Intercomparison Project Phase 3 (CMIP3), 40 of CMIP5 and 18 of the Coordinated Regional Downscaling Experiment (CORDEX). In addition to the annual and seasonal precipitation, the mean temperature is analysed. We consider both the 50-year trend and the climatological mean for the second half of the 20th century. For the probabilistic projections of climate change, the A1b and A2 (CMIP3) and RCP4.5 and RCP8.5 (CMIP5, CORDEX) scenarios are used. The eight main study areas are located in the Mediterranean. However, we apply our metrics to globally distributed regions as well. The metrics show high simulation quality of temperature trend and both precipitation and temperature mean for most climate models and study areas. In addition, we find high potential for model weighting in order to reduce uncertainty. These results are in line with other accepted evaluation metrics and studies. The comparison of the different 4-field approaches reveals high correlations for most metrics. The results of the metric-weighted probability density functions of climate change are heterogeneous. We find for different regions and seasons both increases and decreases of uncertainty. The analysis of global study areas is consistent with the regional study areas of the Mediterranean.
General overview of the AxialT project: A partnership for low head turbine developments
Deschenes, C; Ciocan, G D [Laval University, Quebec, QC (Canada); Henau, V De [Alstom Hydro Canada, Tracy, QC (Canada); Flemming, F; Qian, R [Voith Hydro, York, PA, USA and Montreal, Qc (Canada); Huang, J [CanmetENERGY of Natural Resources Canada (Canada); Koller, M; Vu, T [Andritz Hydro, Zuerich, Switzerland and Pointe-Claire, QC (Canada); Naime, F A [Edelca (Venezuela, Bolivarian Republic of); Page, M, E-mail: Felix.Flemming@Voith.co, E-mail: Marcel.Koller@andritz.co, E-mail: Claire.Deschenes@gmc.ulaval.c [Hydro-Quebec, Varennes, QC (Canada)
2010-08-15
An overview of the AxialT project is presented. Initiated in 2007 by the Consortium on Hydraulic Machines, this four-year project aims to contribute to the study of time-dependent hydraulic phenomena in a propeller turbine. The geometry of the entire turbine is generously shared by all partners. Numerical simulations carried out by all partners are confronted with experimental measurements carried out at the LAMH laboratory at Laval University. A mix of 2D LDA, 3D PIV and unsteady pressure measurements is adapted to yield precise measurements at eight strategic locations within the turbine and for nine operating points. Phase-resolved analysis is performed wherever applicable. An illustration of the analyses the database makes possible is given through the identification of a vortex in the runner at part load.
Guo, J; Booth, M; Jenkins, J; Wang, H; Tanner, M
1998-12-01
The World Bank Loan Project for schistosomiasis in China commenced field activities in 1992. In this paper, we describe disease control strategies for levels of different endemicity, and estimate unit costs and total expenditure of screening, treatment (cattle and humans) and snail control for 8 provinces where Schistosoma japonicum infection is endemic. Overall, we estimate that more than 21 million US dollars were spent on field activities during the first three years of the project. Mollusciciding (43% of the total expenditure) and screening (28% of the total) are estimated to have been the most expensive field activities. However, despite the expense of screening, a simple model predicts that selective chemotherapy could have been cheaper than mass chemotherapy in areas where infection prevalence was higher than 15%, which was the threshold for mass chemotherapy intervention. It is concluded that considerable cost savings could be made in the future by narrowing the scope of snail control activities, redefining the threshold infection prevalence for mass chemotherapy, defining smaller administrative units, and developing rapid assessment tools.
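The break-even logic behind "selective chemotherapy could have been cheaper than mass chemotherapy" can be sketched directly: screen everyone and treat only positives, versus treat everyone. The unit costs below are assumed relative values, not the project's actual figures.

```python
# Cost comparison: mass treatment vs. screen-then-treat (selective).
def cost_mass(n, c_treat):
    return n * c_treat                                   # treat everyone

def cost_selective(n, prevalence, c_screen, c_treat):
    return n * (c_screen + prevalence * c_treat)         # screen all, treat positives

# Selective is cheaper whenever c_screen < (1 - prevalence) * c_treat,
# i.e. below the break-even prevalence p* = 1 - c_screen / c_treat.
c_screen, c_treat = 0.4, 1.0                             # assumed relative unit costs
p_star = 1.0 - c_screen / c_treat
print(p_star)                                            # break-even prevalence
print(cost_selective(1000, 0.20, c_screen, c_treat) < cost_mass(1000, c_treat))
```

With these assumed costs the break-even prevalence is well above 15%, so even areas qualifying for mass chemotherapy could be served more cheaply by screening, which is the shape of the paper's conclusion.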
Wenzel, Thomas J
2006-01-01
The laboratory component of a first-semester general chemistry course for science majors is described. The laboratory involves a semester-long project undertaken in a small-group format. Students are asked to examine whether plants grown in soil contaminated with lead take up more lead than those grown in uncontaminated soil. They are also asked to examine whether the acidity of the rainwater affects the amount of lead taken up by the plants. Groups are then given considerable independence in the design and implementation of the experiment. Once the seeds are planted, about 4 wk into the term, several shorter experiments are interspersed before it is time to harvest and analyze the plants. The use of a project and small working groups allows for the development of a broader range of learning outcomes than occurs in a "traditional" general chemistry laboratory. The nature of these outcomes and some of the student responses to the laboratory experience are described. This particular project also works well at demonstrating the connections among chemistry, biology, geology, and environmental studies.
A matrix projection method for on-line stable estimation of 1D and 3D shear building models
Angel García-Illescas, Miguel; Alvarez-Icaza, Luis
2016-12-01
An estimation method is presented that combines the use of recursive least squares, a matrix parameterized model, Gershgorin circles and tridiagonal matrix properties to allow the identification of stable shear building models in the presence of low excitation or low damping. The resultant scheme yields a significant reduction in the number of calculations involved, when compared with the standard vector parameterization based schemes. As real buildings are always open-loop stable, the use of a stable shear building model for vibration control purposes allows the design of more robust control laws. Extensive simulation results are presented for cases of low excitation, comparing results with and without this matrix projection method for different sets of initial conditions. Results indicate that the use of this projection method does not influence the recovery of natural frequencies; however, it significantly improves the recovery of mode shapes.
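The recursive-least-squares core of such an identification scheme can be sketched in a few lines. The paper's contributions, the matrix parameterization and the Gershgorin-circle projection onto stable tridiagonal matrices, are not reproduced; this only shows plain RLS with forgetting recovering the parameters of a linear regression model.

```python
import numpy as np

# Plain recursive least squares with forgetting factor lam.
def rls(phis, ys, n_params, lam=0.99):
    theta = np.zeros(n_params)                  # parameter estimate
    P = 1e3 * np.eye(n_params)                  # inverse-information matrix
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (y - phi @ theta)   # innovation update
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta

rng = np.random.default_rng(2)
true_theta = np.array([1.5, -0.7])
phis = rng.normal(size=(500, 2))                # regressors
ys = phis @ true_theta + rng.normal(0.0, 0.01, 500)
print(rls(phis, ys, 2))   # should approach [1.5, -0.7]
```

A projection step of the kind the paper proposes would follow each `theta` update, mapping the identified model back onto the set of stable (e.g., diagonally dominant tridiagonal) matrices before it is used for control.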
Market projections of cellulose nanomaterial-enabled products-- Part 2: Volume estimates
John Cowie; E.M. (Ted) Bilek; Theodore H. Wegner; Jo Anne Shatkin
2014-01-01
Nanocellulose has enormous potential to provide an important materials platform in numerous product sectors. This study builds on previous work by the same authors in which likely high-volume, low-volume, and novel applications for cellulosic nanomaterials were identified. In particular, this study creates a transparent methodology and estimates the potential annual...
Stroke risk estimation across nine European countries in the MORGAM project
Borglykke, Anders; Andreasen, Anne H; Kuulasmaa, Kari
2010-01-01
Previous tools for stroke risk assessment have either been developed for specific populations or lack data on non-fatal events or uniform data collection. The purpose of this study was to develop a stepwise model for the estimation of 10 year risk of stroke in nine different countries across Europe....
Mishima, J.; McPherson, R.B.; Schwendiman, L.C.; Watson, E.C.; Ayer, J.E.
1979-02-01
Three scenarios representing significant levels of containment loss due to moderate, substantial, and major damage to the 102 Building at the Vallecitos Nuclear Center are postulated, and the potential radiation doses to the general population as a result of the airborne releases of radionuclides are estimated. The damage scenarios are not correlated to any specific level of seismic activity. The three scenarios are: (1) Moderate damage scenario--perforation of the enclosures in and the structure comprising the Plutonium Analytical Laboratory. (2) Substantial damage scenario--complete loss of containment of the Plutonium Analytical Laboratory and loss of the filters sealing the inlet to the Radioactive Materials Laboratory hot cells. (3) Major damage scenario--the damage outlined in (2) plus the perforation of enclosures holding significant inventories of dispersible plutonium in and the structure comprising the Advanced Fuels Laboratory.
Salling, Kim Bang; Leleur, Steen
For decades researchers have claimed that demand forecasts and construction cost estimations are assigned large degrees of uncertainty, commonly referred to as Optimism Bias. A severe consequence is that ex-ante socio-economic evaluation of infrastructure projects becomes inaccurate and can lead to unsatisfactory investment decisions. Thus there is a need for better risk assessment and decision support, which is addressed by the recently developed UNITE-DSS model. It is argued that this simulation-based model can offer decision makers new and better ways to deal with risk assessment.
A number-projected model with generalized pairing interaction in application to rotating nuclei
Satula, W. [Warsaw Univ. (Poland)]|[Joint Institute for Heavy Ion Research, Oak Ridge, TN (United States)]|[Univ. of Tennessee, Knoxville, TN (United States)]|[Royal Institute of Technology, Stockholm (Sweden); Wyss, R. [Royal Institute of Technology, Stockholm (Sweden)
1996-12-31
A cranked mean-field model that takes into account both T=1 and T=0 pairing interactions is presented. The like-particle pairing interaction is described by means of a standard seniority force. The neutron-proton channel includes simultaneously correlations among particles moving in time-reversed orbits (T=1) and identical orbits (T=0). The coupling between different pairing channels and nuclear rotation is taken into account self-consistently. Approximate number projection is included by means of the Lipkin-Nogami method. The transitions between different pairing phases are discussed as a function of neutron/proton excess, T_z, and rotational frequency, ℏω.
Yue, Yang-Yang; Lu, Rong-er; Yang, Bo; Huang, Huang; Hong, Xu-Hao; Zhang, Chao; Qin, Yi-Qiang; Zhu, Yong-Yuan
2016-10-01
We present a theoretical investigation of the reciprocal property of a class of 2D nonlinear photonic quasicrystals proposed by Lifshitz et al. in PRL 95, 133901 (2005). Using the rectangular projection method, the analytical expression for the Fourier spectrum of the quasicrystal structure is obtained explicitly. It is interesting to find that the result has a similar form to the corresponding expression for the well-known 1D Fibonacci lattice. In addition, we predict a further extension of the result to higher dimensions. This work is of practical importance for photonic device design in nonlinear optical conversion processes.
Zhang, Hong; Kong, Vic [Department of Radiation Oncology, Georgia Regents University, Augusta, Georgia 30912 (United States); Ren, Lei; Giles, William; Zhang, You [Department of Radiation Oncology, Duke University, Durham, North Carolina 27710 (United States); Jin, Jian-Yue, E-mail: jjin@gru.edu [Department of Radiation Oncology, Georgia Regents University, Augusta, Georgia 30912 and Department of Radiology, Georgia Regents University, Augusta, Georgia 30912 (United States)
2016-01-15
Purpose: A preobject grid can reduce and correct scatter in cone beam computed tomography (CBCT). However, half of the signal in each projection is blocked by the grid. A synchronized moving grid (SMOG) has been proposed to acquire two complementary projections at each gantry position and merge them into one complete projection. That approach, however, suffers from increased scanning time and the technical difficulty of accurately merging the two projections per gantry angle. Herein, the authors present a new SMOG approach which acquires a single projection per gantry angle, with complementary grid patterns for any two adjacent projections, and uses an interprojection sensor fusion (IPSF) technique to estimate the blocked signal in each projection. The method may have the additional benefit of reduced imaging dose due to the grid blocking half of the incident radiation. Methods: The IPSF considers multiple paired observations from two adjacent gantry angles as approximations of the blocked signal and uses a weighted least square regression of these observations to finally determine the blocked signal. The method was first tested with a simulated SMOG on a head phantom. The signal to noise ratio (SNR), which represents the difference of the recovered CBCT image from the original image without the SMOG, was used to evaluate the ability of the IPSF to recover the missing signal. The IPSF approach was then tested using a Catphan phantom on a prototype SMOG assembly installed in a bench-top CBCT system. Results: In the simulated SMOG experiment, the SNRs were increased from 15.1 and 12.7 dB to 35.6 and 28.9 dB compared with a conventional interpolation method (inpainting method) for a projection and the reconstructed 3D image, respectively, suggesting that IPSF successfully recovered most of the blocked signal. In the prototype SMOG experiment, the authors have successfully reconstructed a CBCT image using the IPSF-SMOG approach. The detailed geometric features in the
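For a single blocked detector element, the fusion step described above reduces to a weighted least squares fit of a constant, which is just an inverse-variance weighted mean of the paired observations. The sketch below shows only that simplified case; the actual IPSF regression operates on many pixels per projection, and the variances here are invented for illustration.

```python
import numpy as np

def ipsf_estimate(observations, variances):
    """Illustrative inter-projection sensor fusion: fuse several noisy
    approximations of a blocked detector signal by weighted least squares.
    For a single unknown, the WLS solution is the inverse-variance
    weighted mean of the observations."""
    obs = np.asarray(observations, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weights = 1 / variance
    return float(np.sum(w * obs) / np.sum(w))

# Two paired observations from adjacent gantry angles, one more reliable
est = ipsf_estimate([10.0, 14.0], [1.0, 4.0])
print(est)  # 10.8: pulled toward the lower-variance observation
```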
Prague’s Sewerage System in the 1930s and the General Sewerage Project (1933–1936)
K. Drnek
2010-01-01
Full Text Available Prague’s sewerage system was built at the end of the era of the monarchy, in the united town that Prague had been transformed into. The system was soon overloaded, and was not able to remove all the sewage produced by the citizens. To deal with this hygienic threat, the city council and the management of the wastewater services undertook several actions to build a new system or improve the existing one. The most ambitious and extensive measure was the general project carried out between 1933 and 1936. The project was intended to resolve the problem once and for all by introducing new ideas and by settling the question of siting a new sewage plant to replace the old one. For the present-day observer it also offers a range of spectacular and interesting ideas on urban wastewater treatment.
Mishra-Kalyani, Pallavi S.; Johnson, Brent A.; Glass, Jonathan D.; Long, Qi
2016-09-01
Clinical disease registries offer a rich collection of valuable patient information but also pose challenges that require special care and attention in statistical analyses. The goal of this paper is to propose a statistical framework that allows for estimating the effect of surgical insertion of a percutaneous endogastrostomy (PEG) tube for patients living with amyotrophic lateral sclerosis (ALS) using data from a clinical registry. Although all ALS patients are informed about PEG, only some patients agree to the procedure, which leads to the potential for selection bias. Assessing the effect of PEG is further complicated by the aggressively fatal disease, such that time to death competes directly with both the opportunity to receive PEG and clinical outcome measurements. Our proposed methodology handles the “censoring by death” phenomenon through principal stratification and selection bias for PEG treatment through generalized propensity scores. We develop a fully Bayesian modeling approach to estimate the survivor average causal effect (SACE) of PEG on BMI, a surrogate outcome measure of nutrition and quality of life. The use of propensity score methods within the principal stratification framework demonstrates a significant and positive effect of PEG treatment, particularly when time of treatment is included in the treatment definition.
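The role of propensity adjustment in a registry setting can be illustrated with a deliberately simplified synthetic example: a single confounder drives both treatment uptake and outcome, so a naive group comparison is biased while inverse-propensity weighting recovers the true effect. This is only a stand-in for the paper's Bayesian SACE estimation within principal strata; the propensity is taken as known here, whereas in practice it would be estimated from registry covariates, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical registry: a confounder x (say, disease severity) drives both
# treatment uptake and the outcome, with a true treatment effect of 2.0.
n = 20000
x = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-0.8 * x))            # true propensity score
treated = rng.random(n) < p_treat
outcome = 2.0 * treated - 1.5 * x + rng.normal(scale=0.5, size=n)

# Naive comparison of group means is confounded by x
naive = outcome[treated].mean() - outcome[~treated].mean()

# Inverse-propensity weighting removes the confounding
w1 = treated / p_treat
w0 = (~treated) / (1.0 - p_treat)
ipw = (w1 * outcome).sum() / w1.sum() - (w0 * outcome).sum() / w0.sum()
print(abs(ipw - 2.0) < abs(naive - 2.0))  # IPW is closer to the true effect
```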
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which are in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
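The difference between plain Monte Carlo and GLUE is the likelihood-measure weighting step: each sampled parameter set is weighted by how well its simulated output matches observations. A toy sketch of that step, with an invented one-parameter "groundwater model" standing in for the real flow model (all values are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_head(log10_K):
    """Stand-in for the groundwater model: maps hydraulic conductivity
    (as log10 m/s) to a predicted head. Purely illustrative, chosen so
    the best fit occurs near log10 K = -6.5."""
    return 100.0 - 4.0 * (log10_K + 6.5) ** 2

observed_head = 100.0

# 1. Monte Carlo sampling of the uncertain parameter from its prior range
samples = rng.uniform(-8.0, -5.0, size=5000)
predicted = simulate_head(samples)

# 2. Likelihood measure: goodness of fit to the observed head
sigma = 2.0
likelihood = np.exp(-0.5 * ((predicted - observed_head) / sigma) ** 2)

# 3. GLUE weights: normalized likelihoods across the sampled models
weights = likelihood / likelihood.sum()
posterior_mean = float(np.sum(weights * samples))
print(round(posterior_mean, 2))  # concentrates near log10 K = -6.5
```

Unweighted Monte Carlo would return the prior mean of the samples; the likelihood weighting is what lets the head data shrink the prediction uncertainty.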
McGurk, Ross J; Smith, Valerie A; Bowsher, James; Lee, John A; Das, Shiva K
2013-06-07
This study aims to quantify how filter choice affects several fluoro-deoxy-glucose (FDG)-positron emission tomography (PET) segmentation methods and present the use of model fitting via generalized estimating equations (GEEs) to appropriately account for the properties of a common segmentation quality metric (Dice similarity coefficient). Spherical and irregularly shaped 'hot' objects filled with 18F-FDG were placed in a medium with background activity and imaged for 1, 2 and 5 min durations at low and high contrasts. Images were filtered with Gaussian and bilateral filters of 5 and 7 mm full-width half maximum (FWHM), with and without 3 mm FWHM Gaussian pre-smoothing. Four segmentation methods were used: 40% thresholding, adaptive thresholding, k-means clustering and seeded region-growing. Segmentation accuracy was quantified by overlap (using Dice similarity coefficient (DSC)) and distance between surfaces (using symmetric-mean-absolute-surface-distance (SMASD)) of the ground truth and segmented volumes. All segmentation methods showed mean DSC values between 0.71-0.87 and mean SMASD values between 0.72-2.10 mm across filters. The bilateral filter with 3 mm FWHM Gaussian pre-smoothing had mean DSC 0.80 ± 0.17 and mean SMASD 1.17 ± 1.51 mm displaying approximately equal performance to a 5 mm Gaussian filter with mean DSC 0.79 ± 0.18 and mean SMASD 1.27 ± 1.52 mm. Results from models fit using GEE with a binomial distribution and exchangeable correlation structure estimated the correlation between DSC values as 0.118 and 0.290 for spheres and irregular objects, respectively. The GEE approach accounts for several factors specific to the DSC metric that simpler statistical approaches do not, providing more accurate estimations of experimental effects commonly associated with nuclear medicine segmentation studies.
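The Dice similarity coefficient used throughout the study is simple to compute, and its boundedness on [0, 1] is one of the properties that motivates GEE modelling with a binomial distribution rather than ordinary regression. A small sketch with made-up binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 no overlap."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Ground truth and a shifted segmentation, each covering 16 voxels
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
seg = np.zeros((8, 8), dtype=bool)
seg[3:7, 3:7] = True
print(dice(truth, seg))  # 0.5625: the masks share 9 of their 32 voxels
```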
Bourgain, Pascaline
2015-04-01
Bridging Science and Society has now become a necessity for scientists to develop new partnerships with local communities and to raise public interest in scientific activities. The French-Greenlandic educational project called "Angalasut" reflects this desire to create a bridge between science, local people and the general public. This program was set up in the 2012-2013 school year, as part of an international scientific program dedicated to studying the interactions between the ocean and glaciers on the western coast of Greenland, in the Uummannaq fjord. Greenlandic and French school children were involved in educational activities, in classrooms and out in the field, associated with the scientific observations conducted in Greenland (glacier flow, ocean chemical composition and circulation, instrumentation...). In Greenland, the children had the opportunity to come on board the scientific sailing boat, and in France, several meetings were organized between the children and the scientists of the expedition. In the small village of Ikerasak, the children interviewed Elders about sea ice evolution in the area. These activities, coupled with the organization of public conferences and the creation of a trilingual website for the project (French, Greenlandic, English), aimed at explaining why scientists come to study the Greenland environment. This was the opportunity for scientists to discuss with villagers who could testify to their changing environment over the past decades. A first step toward a future collaboration between scientists and villagers that would deserve further development... The project Angalasut was also the opportunity for Greenlandic and French school children to exchange about their culture and their environment through Skype communications, the exchange of mails (drawings, shells...), and the creation of a society game about European fauna and flora... A meeting in France between the two groups of children is being considered, possibly in summer 2015.
Chikara Nakahata; Ryo Uemura; Masashi Saito; Kanae Kanetsuki; Kazuhiro Aruga
2014-01-01
We used GIS on a regional scale to estimate and compare supply potentials and costs of small-scale logging systems: a mini-forwarder and a 4-ton truck operated by private logging contractors, and manual logging and a light truck operated by individual forest owners, with the mechanized operational system of the Forest Owners’ Association. Total potential yields of timber and logging residues were estimated as 418,895 m3 and 254,962 m3, respectively. The economic balances were estimated and available amounts were projected as supply potentials from profitable sub-compartments. As a result, available amounts of timber and logging residues were estimated at 376,466 m3 (89.9%) and 203,850 m3 (80.0%), respectively. Because their transport expenses were lower than for other systems, the most profitable sub-compartments were operated by private logging contractors who sold logging residues at a plant. The profitable sub-compartments operated by individual forest owners were few because the extracting distances were usually greater than 20 m. Raising logging residue prices from 3,000 yen⋅m⁻³ to 4,080 yen⋅m⁻³ or 6,800 yen⋅m⁻³, and establishing forest roads, which reduced some extracting distances to less than 20 m, increased the number and area of profitable sub-compartments, and increased available amounts of logging residues.
Jackson, C; Jatulis, D E; Fortmann, S P
1992-01-01
BACKGROUND. Nearly all state health departments collect Behavioral Risk Factor Survey (BRFS) data, and many report using these data in public health planning. Although the BRFS is widely used, little is known about its measurement properties. This study compares the cardiovascular risk behavior estimates of the BRFS with estimates derived from the physiological and interview data of the Stanford Five-City Project Survey (FCPS). METHOD. The BRFS is a random telephone sample of 1588 adults aged 25 to 64; the FCPS is a random household sample of 1512 adults aged 25 to 64. Both samples were drawn from the same four California communities. RESULTS. The surveys produced comparable estimates for measures of current smoking, number of cigarettes smoked per day, rate of ever being told one has high blood pressure, rate of prescription of blood pressure medications, compliance in taking medications, and mean total cholesterol. Significant differences were found for mean body mass index, rates of obesity, and, in particular, rate of controlled hypertension. CONCLUSIONS. These differences indicate that, for some risk variables, the BRFS has limited utility in assessing public health needs and setting public health objectives. A formal validation study is needed to test all the risk behavior estimates measured by this widely used instrument. PMID:1536358
Estimation of 1945 to 1957 food consumption. Hanford Environmental Dose Reconstruction Project
Anderson, D.M.; Bates, D.J.; Marsh, T.L.
1993-07-01
This report details the methods used and the results of the study on the estimated historic levels of food consumption by individuals in the Hanford Environmental Dose Reconstruction (HEDR) study area from 1945--1957. This period includes the time of highest releases from Hanford and is the period for which data are being collected in the Hanford Thyroid Disease Study. These estimates provide the food-consumption inputs for the HEDR database of individual diets. This database will be an input file in the Hanford Environmental Dose Reconstruction Integrated Code (HEDRIC) computer model that will be used to calculate the radiation dose. The report focuses on fresh milk, eggs, lettuce, and spinach. These foods were chosen because they have been found to be significant contributors to radiation dose based on the Technical Steering Panel dose decision level.
SEE Project for Testing Gravity in Space: Current Status and New Estimates
Alexeev, A D; Kolosnitsyn, N I; Konstantinov, M Yu; Melnikov, V N; Sanders, A J
1999-01-01
We describe some new estimates concerning the recently proposed SEE (Satellite Energy Exchange) experiment for measuring the gravitational interaction parameters in space. The experiment entails precision tracking of the relative motion of two test bodies (a heavy "Shepherd" and a light "Particle") on board a drag-free capsule. The new estimates include (i) the sensitivity of Particle trajectories and G measurement to the Shepherd quadrupole moment uncertainties; (ii) the measurement errors of G and the strength of a putative Yukawa-type force whose range parameter $\lambda$ may be either of the order of a few metres or close to the Earth radius; (iii) a possible effect of the Van Allen radiation belts on the SEE experiment due to test body electric charging.
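The Yukawa-type force mentioned in (ii) is conventionally written as a multiplicative correction to the Newtonian potential, V(r) = -G M m / r · (1 + α e^{-r/λ}), where α is the strength and λ the range parameter. A small numerical illustration with arbitrary example values for α and λ (not values from the SEE analysis):

```python
import math

def yukawa_gravity(r, M=1.0, m=1.0, G=6.674e-11, alpha=1e-3, lam=10.0):
    """Newtonian potential with a putative Yukawa-type correction:
    V(r) = -G*M*m/r * (1 + alpha*exp(-r/lam)). alpha and lam are the
    strength and range parameters an experiment would constrain."""
    return -G * M * m / r * (1.0 + alpha * math.exp(-r / lam))

# Fractional deviation from pure Newtonian gravity at r = 1 m:
r = 1.0
frac = yukawa_gravity(r) / (-6.674e-11 / r) - 1.0
print(frac)  # equals alpha * exp(-r/lam), largest at ranges r << lam
```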
Performance Estimation of Networked Business Models: Case Study on a Finnish eHealth Service Project
Marikka Heikkilä; Sam Solaimani; Aki Soudunsaari; Mila Hakanen; Leni Kuivaniemi; Mari Suoranta
2014-01-01
Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in Health & Wellbeing sector aiming to jointly provide a new service for business and private customers. The duration of the research study is 3 years. Findings: We propose that a balanced set of performance indicators...
Estimation and Projection of Lung Cancer Incidence and Mortality in China
Xiaonong ZOU
2010-05-01
Full Text Available Background and objective The aim of this study is to analyze the epidemiological trend of lung cancer and estimate the lung cancer burden in China. Methods Lung cancer age-specific mortality and incidence rate ratios in different areas and sexes were obtained from the national cancer registration database in 2004 and 2005. Crude cancer mortalities were retrieved from the database of the third national death survey, 2004-2005. Age-specific incidence rates of lung cancer were calculated using mortality and M/I ratios. Annual percent change (APC) was estimated by a log regression model using Joinpoint software, analyzing pooled lung cancer incidence data from 10 cancer registries from 1988 to 2005. Results The total estimated new cases and deaths of lung cancer in 2005 were 536,407 and 475,768, both higher in males than in females. Lung cancer incidence increased by 1.63% per year from 1988 to 2005; however, the trend showed a slowdown, to 0.55% annually, after adjustment for age. Conclusion Lung cancer is one of the major health issues in China and its burden is growing. Population ageing is the main cause of the increasing incidence and mortality of lung cancer. Effective cancer prevention and control is imperative; in particular, tobacco control should be carried out nationwide.
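The APC statistic reported above is conventionally obtained from a log-linear regression of rates on calendar year: fit log r = b0 + b1·year, then APC = 100·(e^b1 − 1). A minimal sketch on synthetic rates constructed to grow by the paper's estimated 1.63% per year (the baseline rate of 30 is invented):

```python
import math

def annual_percent_change(years, rates):
    """Estimate annual percent change (APC) by ordinary least squares on
    log-transformed rates: log r = b0 + b1*year, APC = 100*(e^b1 - 1)."""
    n = len(years)
    y = [math.log(r) for r in rates]
    mx = sum(years) / n
    my = sum(y) / n
    b1 = (sum((x - mx) * (v - my) for x, v in zip(years, y))
          / sum((x - mx) ** 2 for x in years))
    return 100.0 * (math.exp(b1) - 1.0)

# Synthetic incidence rates growing exactly 1.63% per year
years = list(range(1988, 2006))
rates = [30.0 * 1.0163 ** (t - 1988) for t in years]
print(round(annual_percent_change(years, rates), 2))  # 1.63
```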
Xiuchun Li
2013-01-01
Full Text Available When the parameters of both drive and response systems are all unknown, an adaptive sliding mode controller, strongly robust to exotic perturbations, is designed for realizing generalized function projective synchronization. Sliding mode surface is given and the controlled system is asymptotically stable on this surface with the passage of time. Based on the adaptation laws and Lyapunov stability theory, an adaptive sliding controller is designed to ensure the occurrence of the sliding motion. Finally, numerical simulations are presented to verify the effectiveness and robustness of the proposed method even when both drive and response systems are perturbed with external disturbances.
None
1976-04-01
The requirements for selecting commercial demonstrations are derived from the overall goal of the Federal program as stated in the ''National Program for Solar Heating and Cooling,'' ERDA 23-A, October 1975. This goal is to stimulate an industrial and commercial capability for producing and distributing solar heating and cooling (SHAC) systems. The development of the demonstration matrix consists of establishing selection criteria and developing a methodology for applying and evaluating these criteria. The output of this procedure is a time-phased matrix of locations, SHAC systems, and building types which comprise the recommended National Solar Demonstration projects for commercial buildings. The Demonstration Matrix Definition comprises three principal elements: demonstration identification; specific demonstration selection criteria; and Architect/Engineer (A/E) selection. (WDM)
Farnsworth, K. L.; House, M.; Hovan, S. A.
2013-12-01
A recent workshop sponsored by SERC-On the Cutting Edge brought together science educators from a range of schools across the country to discuss new approaches in teaching oceanography. In discussing student interest in our classes, we were struck by the fact that students are drawn to emotional or controversial topics such as whale hunting and tsunami hazard and that these kinds of topics are a great vehicle for introducing more complex concepts such as wave propagation, ocean upwelling and marine chemistry. Thus, we have developed an approach to introductory oceanography that presents students with real-world issues in the ocean sciences and requires them to explore the science behind them in order to improve overall ocean science literacy among non-majors and majors at 2 and 4 year colleges. We have designed a project-based curriculum built around topics that include, but are not limited to: tsunami hazard, whale migration, ocean fertilization, ocean territorial claims, rapid climate change, the pacific trash patch, overfishing, and ocean acidification. Each case study or project consists of three weeks of class time and is structured around three elements: 1) a media analysis; 2) the role of ocean science in addressing the issue; 3) human impact/response. Content resources range from textbook readings, popular or current print news, documentary film and television, and data available on the world wide web from a range of sources. We employ a variety of formative assessments for each case study in order to monitor student access and understanding of content and include a significant component of in-class student discussion and brainstorming guided by faculty input to develop the case study. Each study culminates in summative assessments ranging from exams to student posters to presentations, depending on the class size and environment. We envision this approach for a range of classroom environments including large group face-to-face instruction as well as hybrid
Palombo, B.; Ferrari, G.; Bernardi, F.; Hunstad, I.; Perniola, B.
2008-12-01
The 1908 earthquake is one of the most catastrophic events in Italian history, recorded by most of the historical seismic stations existing at that time. Some of the seismograms recorded by these stations have already been used by many authors for the purpose of studying source characteristics, although only copies of the original recordings were available. Thanks to the Euroseismos project (2002-2007) and to the Sismos project, most of the original data (seismogram recordings and instrument parameter calibrations) for these events are now available in digital formats. Sismos technical facilities now allow us to apply the modern methods of digital-data analysis for the earthquakes recorded by mechanical and electromagnetic seismographs. The Sismos database has recently acquired many original seismograms and related instrumental parameters for the 1908 Messina earthquake, recorded by 14 stations distributed worldwide and never before used in previous works. We have estimated the main event parameters (i.e. location, Ms, Mw and focal mechanism) with the new data set. The aim of our work is to provide the scientific community with a reliable size and source estimation for accurate and consistent seismic hazard evaluation in Sicily, a region characterized by high long-term seismicity.
Ayedh Alqahtani
2013-09-01
Full Text Available Industrial application of life-cycle cost analysis (LCCA) is somewhat limited, with techniques deemed overly theoretical, resulting in a reluctance to realise (and pass on to the client) the advantages to be gained from objective LCCA comparison of (sub)component material specifications. To address the need for a user-friendly structured approach to facilitate complex processing, the work described here develops a new, accessible framework for LCCA of construction projects; it employs Artificial Neural Networks (ANNs) to compute the whole cost(s) of construction and uses the concept of cost-significant items (CSI) to identify the main cost factors affecting the accuracy of estimation. ANNs are a powerful means of handling non-linear problems, mapping between complex input/output data and addressing uncertainties. A case study documenting 20 building projects was used to test the framework and estimate total running costs accurately. Two methods were used to develop a neural network model: first, a back-propagation method was adopted (using MATLAB software); and second, spreadsheet optimisation was conducted (using Microsoft Excel Solver). The best network was established as consisting of 19 hidden nodes, with the tangent sigmoid used as the transfer function of the NNs model for both methods. The results find that the accuracy of the developed NNs model is 1% (via Excel Solver) and 2% (via back-propagation), respectively.
Asong, Z. E.; Khaliq, M. N.; Wheater, H. S.
2016-08-01
In this study, a multisite multivariate statistical downscaling approach based on the Generalized Linear Model (GLM) framework is developed to downscale daily observations of precipitation and minimum and maximum temperatures from 120 sites located across the Canadian Prairie Provinces: Alberta, Saskatchewan and Manitoba. First, large scale atmospheric covariates from the National Center for Environmental Prediction (NCEP) Reanalysis-I, teleconnection indices, geographical site attributes, and observed precipitation and temperature records are used to calibrate GLMs for the 1971-2000 period. Then the calibrated models are used to generate daily sequences of precipitation and temperature for the 1962-2005 historical (conditioned on NCEP predictors), and future period (2006-2100) using outputs from five CMIP5 (Coupled Model Intercomparison Project Phase-5) Earth System Models corresponding to Representative Concentration Pathway (RCP): RCP2.6, RCP4.5, and RCP8.5 scenarios. The results indicate that the fitted GLMs are able to capture spatiotemporal characteristics of observed precipitation and temperature fields. According to the downscaled future climate, mean precipitation is projected to increase in summer and decrease in winter while minimum temperature is expected to warm faster than the maximum temperature. Climate extremes are projected to intensify with increased radiative forcing.
The BSM-AI project: SUSY-AI-generalizing LHC limits on supersymmetry with machine learning
Caron, Sascha [Radboud Universiteit, Institute for Mathematics, Astro- and Particle Physics IMAPP, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Kim, Jong Soo [UAM/CSIC, Instituto de Fisica Teorica, Madrid (Spain); Rolbiecki, Krzysztof [UAM/CSIC, Instituto de Fisica Teorica, Madrid (Spain); University of Warsaw, Faculty of Physics, Warsaw (Poland); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Stienen, Bob [Radboud Universiteit, Institute for Mathematics, Astro- and Particle Physics IMAPP, Nijmegen (Netherlands)
2017-04-15
A key research question at the Large Hadron Collider is the test of models of new physics. Testing whether a particular parameter set of such a model is excluded by LHC data is a challenge: it requires time-consuming generation of scattering events, simulation of the detector response, event reconstruction, cross-section calculations and analysis code to test against several hundred signal regions defined by the ATLAS and CMS experiments. In the BSM-AI project we approach this challenge with a new idea. A machine learning tool is devised to predict, within a fraction of a millisecond and directly from the model parameters, whether a model is excluded or not. A first example is SUSY-AI, trained on the phenomenological supersymmetric standard model (pMSSM). About 300,000 pMSSM model sets, each tested against 200 signal regions by ATLAS, have been used to train and validate SUSY-AI. The code is currently able to reproduce the ATLAS exclusion regions in 19 dimensions with an accuracy of at least 93%. It has been validated further within the constrained MSSM and the minimal natural supersymmetric model, again showing high accuracy. SUSY-AI and its future BSM derivatives will help to solve the problem of recasting LHC results for any model of new physics. SUSY-AI can be downloaded from http://susyai.hepforge.org/. An on-line interface to the program for quick testing purposes can be found at http://www.susy-ai.org/.
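The core idea, replacing an expensive exclusion test with a fast classifier trained on pre-computed scan points, can be illustrated with a toy stand-in. The two-parameter disc boundary and the k-nearest-neighbour classifier below are illustrative assumptions only, not SUSY-AI's actual 19-dimensional model or training setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the pMSSM scan: 2 "model parameters" instead of 19,
# with a synthetic exclusion region (a disc) playing the role of the
# expensive ATLAS analysis chain.
X_train = rng.uniform(-1, 1, size=(5000, 2))
y_train = (np.sum(X_train**2, axis=1) < 0.5).astype(int)   # 1 = excluded

def knn_predict(X_train, y_train, X_query, k=15):
    """Predict exclusion by majority vote of the k nearest scanned points."""
    preds = []
    for q in X_query:
        d2 = np.sum((X_train - q) ** 2, axis=1)
        nearest = y_train[np.argsort(d2)[:k]]
        preds.append(int(nearest.mean() > 0.5))
    return np.array(preds)

# Evaluate on fresh parameter points never seen during "training".
X_test = rng.uniform(-1, 1, size=(500, 2))
y_test = (np.sum(X_test**2, axis=1) < 0.5).astype(int)
accuracy = (knn_predict(X_train, y_train, X_test) == y_test).mean()
```

Each prediction here costs a handful of vectorized distance computations instead of event generation plus detector simulation; mistakes concentrate near the exclusion boundary, which mirrors where such emulators are least reliable in practice.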
ZHENG Yong-Ai
2012-01-01
Time-delay Takagi-Sugeno fuzzy drive-response dynamical networks (TD-TSFDRDNs) are defined by extending drive-response dynamical networks. Based on the LaSalle invariance principle, a simple and systematic adaptive control scheme is proposed to synchronize the TD-TSFDRDNs with a desired scaling factor. A sufficient condition for generalized projective synchronization in TD-TSFDRDNs is derived. Moreover, numerical simulations are provided to verify the correctness and effectiveness of the scheme.
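The adaptive projective-synchronization idea can be sketched for a single pair of chaotic systems rather than the paper's fuzzy networks: the response state y is driven toward a scaled copy α·x of the drive, with a feedback gain k that grows adaptively while the error persists. Everything below (Lorenz dynamics, gains, Euler integration) is an illustrative assumption, not the TD-TSFDRDN scheme itself:

```python
import numpy as np

def lorenz(s):
    """Classic Lorenz system, used here as a stand-in chaotic node."""
    x, y, z = s
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])

alpha = 2.0                        # desired projective scaling factor
gamma = 5.0                        # adaptation rate (assumed)
dt, steps = 1e-3, 60_000

x = np.array([1.0, 1.0, 1.0])      # drive state
y = np.array([-3.0, 2.0, 5.0])     # response state
k = 0.0                            # adaptive feedback gain

for _ in range(steps):
    e = y - alpha * x                              # projective sync error
    u = alpha * lorenz(x) - lorenz(y) - k * e      # feedback control input
    x = x + dt * lorenz(x)
    y = y + dt * (lorenz(y) + u)
    k = k + dt * gamma * (e @ e)                   # gain grows while error persists

final_error = np.linalg.norm(y - alpha * x)
```

With this control law the error dynamics reduce to de/dt = -k·e, so once k is large enough the error contracts regardless of the chaotic drive trajectory, and the gain settles at a finite value.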
F. Prandi
2016-06-01
Forests represent an important economic resource for mountainous areas, being the main form of income for several regions and mountain communities. However, wood chain management in these contexts differs from traditional schemes due to the limits imposed by terrain morphology, both for operation planning and for hardware requirements. In fact, forest organizational and technical problems require a wider strategic and more detailed level of planning to reach the productivity of forest operation techniques applied on flatlands. In particular, thorough knowledge of forest inventories improves long-term management sustainability and efficiency, allowing a better understanding of forest ecosystems. However, this knowledge is usually based on historical parcel information, with only a few cases of remote sensing information from satellite imagery. This is not enough to fully exploit the benefit of mountain-area forest stocks, where the economic and ecological value of each single parcel depends on single-tree characteristics. The work presented in this paper, based on the results of the SLOPE (Integrated proceSsing and controL systems fOr sustainable forest Production in mountain arEas) project, investigates the capability to generate, manage and visualize detailed virtual forest models using geospatial information, combining data acquired from traditional in-field laser scanning surveys with new aerial surveys by UAV systems. These models are then combined with interactive 3D virtual globes for continuous assessment of resource characteristics, harvesting planning and real-time monitoring of the whole production.
Prandi, F.; Magliocchetti, D.; Poveda, A.; De Amicis, R.; Andreolli, M.; Devigili, F.
2016-06-01
Cook, T. M; Andrade, J; Bogod, D. G; Hitchman, J. M; Jonker, W. R; Lucas, N; Mackay, J. H; Nimmo, A. F; O'Connor, K; O'Sullivan, E. P; Paul, R. G; Palmer, J. H. MacG; Plaat, F; Radcliffe, J. J; Sury, M. R. J; Torevell, H. E; Wang, M; Hainsworth, J; Pandit, J. J
2014-01-01
The 5th National Audit Project of the Royal College of Anaesthetists and the Association of Anaesthetists of Great Britain and Ireland into accidental awareness during general anaesthesia yielded data...
Amlan Kumar Patra
2014-04-01
This study presents trends and projected estimates of methane and nitrous oxide emissions from livestock of India vis-à-vis the world and developing countries over the period 1961 to 2010, estimated based on IPCC guidelines. World enteric methane emission (EME) increased by 54.3% (61.5 to 94.9×10⁹ kg annually) from 1961 to 2010, and the highest annual growth rate (AGR) was noted for goat (2.0%), followed by buffalo (1.57%) and swine (1.53%). Global EME is projected to increase to 120×10⁹ kg by 2050. The percentage increase in EME by Indian livestock was greater than for world livestock (70.6% vs 54.3%) between 1961 and 2010, and AGR was highest for goat (1.91%), followed by buffalo (1.55%), swine (1.28%), sheep (1.25%) and cattle (0.70%). In India, total EME was projected to grow by 18.8×10⁹ kg in 2050. Global methane emission from manure (MEM) increased from 6.81×10⁹ kg in 1961 to 11.4×10⁹ kg in 2010 (an increase of 67.6%), and is projected to grow to 15×10⁹ kg by 2050. In India, annual MEM increased from 0.52×10⁹ kg to 1.1×10⁹ kg (an AGR of 1.57%) over this period, and could increase to 1.54×10⁹ kg in 2050. Nitrous oxide emission from manure in India could reach 21.4×10⁶ kg in 2050, up from 15.3×10⁶ kg in 2010. The AGR of global GHG emissions changed little (only 0.11%) for developed countries but increased sharply (1.23%) for developing countries between 1961 and 2010. Major contributions to world GHG came from cattle (79.3%), swine (9.57%) and sheep (7.40%), and for developing countries from cattle (68.3%), buffalo (13.7%) and goat (5.4%). The increase in GHG emissions by Indian livestock was smaller than that of the developing countries (74% vs 82% over the period 1961 to 2010). At this rate, world GHG emissions could reach 3,520×10⁹ kg CO2-eq by 2050, driven by animal population growth to meet rising demand for meat and dairy products.
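The headline figures can be checked with the usual percent-increase and compound annual growth rate formulas (assuming AGR means compound growth; the authors' exact definition may differ):

```python
# World enteric CH4 emission rose from 61.5e9 kg (1961) to 94.9e9 kg (2010).
eme_1961, eme_2010 = 61.5e9, 94.9e9
years = 2010 - 1961

# Total percentage increase over the period.
pct_increase = (eme_2010 / eme_1961 - 1) * 100

# Compound annual growth rate, in percent per year.
agr = ((eme_2010 / eme_1961) ** (1 / years) - 1) * 100
```

The percent increase reproduces the reported 54.3%; the implied compound world AGR is about 0.9% per year, which sits plausibly below the species-level AGRs quoted for fast-growing categories such as goat and buffalo.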
Riva, Giuseppe; Gorini, Alessandra; Gaggioli, Andrea
2009-01-01
Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant and unspecific anxiety that interferes with daily-life activities. Together with cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation of being hard to learn. To overcome this limitation we propose the use of biofeedback-enhanced virtual reality (VR) to facilitate the relaxation process. The VR relaxation experience will be strengthened by the use of a mobile phone able to track and visualize the patients' physiological data, including in outpatient settings. To test this concept, we planned a randomized controlled trial (NCT00602212), including three groups of 15 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.
Estimation of food consumption
Callaway, J.M. Jr.
1992-04-01
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Petitta, Marcello; Wagner, Jochen; Costa, Armin; Monsorno, Roberto; Innerebner, Markus; Moser, David; Zebisch, Marc
2014-05-01
In recent years the scientific community has been widely discussing the concept of "climate services". Several definitions have been used, but it remains a rather open concept. We used climate data from analyses and reanalyses to create a daily and hourly model of atmospheric turbidity in order to account for the effect of the atmosphere on incoming solar radiation, with the final aim of estimating electricity production from photovoltaic (PV) modules in the Alps. Renewable energy production in the Alpine region is dominated by hydroelectricity, but the potential for photovoltaic energy production is gaining momentum. Especially the southern part of the Alps and inner Alpine regions offer good conditions for PV energy production: the combination of high irradiance values and cold air temperatures in mountainous regions is well suited to solar cells. To enable more widespread adoption of PV plants, PV has to become an important part of regional planning. To provide regional authorities and private stakeholders with a high-quality PV energy yield climatology for the provinces of Bolzano/Bozen-South Tyrol (Italy) and Tyrol (Austria), the research project Solar Tyrol was launched in 2012. Several methods are used to calculate very high resolution maps of solar radiation; most of these approaches use climatological values. In this project we reconstructed the last 10 years of atmospheric turbidity using reanalysis and operational data in order to better estimate incoming solar radiation in the Alpine region. Our method is divided into three steps: i) clear-sky radiation: to estimate the atmospheric effect on solar radiation we calculated the Linke turbidity factor using aerosol optical depth (AOD), surface albedo, atmospheric pressure, and total water content from ECMWF and MACC analyses; ii) shadows: we calculated the shadows of mountains and buildings using a 2 m resolution digital elevation model of the area and the GIS module r.sun, modified to fit our specific needs; iii
McPherson, Sterling; Barbosa-Leiker, Celestina; McDonell, Michael; Howell, Donelle; Roll, John
2013-01-01
Objective: A review of substance use clinical trials indicates that sub-optimal methods are the most commonly used procedures to deal with longitudinal missing information. Methods: Listwise deletion (i.e., using complete cases only), positive urine analysis (UA) imputation, and multiple imputation (MI) were used to evaluate the effect of baseline substance use and buprenorphine/naloxone tapering schedule (7 or 28 days) on the probability of a positive UA (UA+) across the 4-week treatment period. Results: The listwise deletion generalized estimating equations (GEE) model demonstrated that those in the 28-day taper group were less likely to submit a UA+ for opioids during the treatment period (odds ratio (OR) = 0.57, 95% confidence interval (CI): 0.39-0.83), as did the positive UA imputation model (OR = 0.43, CI: 0.34-0.55). The MI model also demonstrated a similar effect of taper group (OR = 0.57, CI: 0.42-0.77), but the effect size was more similar to that of the listwise deletion model. Conclusions: Future researchers may find utilization of the MI procedure in conjunction with the common method of GEE analysis a helpful analytic approach when the missing-at-random assumption is justifiable. PMID:24014144
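The multiple-imputation idea, impute M times, analyse each completed dataset, and pool with Rubin's rules, can be sketched on hypothetical urine-analysis data. A real analysis would impute from a covariate model and fit a GEE rather than estimate a simple proportion; everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical urine-analysis outcomes: 1 = positive, NaN = missing.
n = 200
ua = rng.binomial(1, 0.4, size=n).astype(float)
ua[rng.random(n) < 0.25] = np.nan        # ~25% missing at random

M = 20                                    # number of imputations
p_obs = np.nanmean(ua)                    # positivity rate among observed values

estimates, variances = [], []
for _ in range(M):
    filled = ua.copy()
    miss = np.isnan(filled)
    # Simple stochastic imputation from the observed rate; a real analysis
    # would draw imputations from a covariate model instead.
    filled[miss] = rng.binomial(1, p_obs, size=miss.sum())
    p_hat = filled.mean()
    estimates.append(p_hat)
    variances.append(p_hat * (1 - p_hat) / n)   # variance of each estimate

# Rubin's rules: pool the M estimates and their variances.
q_bar = np.mean(estimates)                      # pooled estimate
w_bar = np.mean(variances)                      # within-imputation variance
b = np.var(estimates, ddof=1)                   # between-imputation variance
total_var = w_bar + (1 + 1 / M) * b
```

The between-imputation term B is what listwise deletion and single positive-UA imputation both discard: it propagates the extra uncertainty caused by the missing observations into the pooled standard error.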
Smith, Cameron M.
2014-04-01
Designing interstellar starships for human migration to exoplanets requires establishing the starship population, which factors into many variables including closed-ecosystem design, architecture, mass and propulsion. I review the central issues of population genetics (effects of mutation, migration, selection and drift) for human populations on such voyages, specifically referencing a roughly 5-generation (c. 150-year) voyage currently in the realm of thought among Icarus Interstellar's Project Hyperion research group. I present several formulae as well as concrete numbers that can be used to help determine populations that could survive such journeys in good health. I find that previously proposed populations, on the order of a few hundred individuals, are significantly too low based on current understanding of vertebrate (including human) genetics and population dynamics. Population genetics theory, calculations and computer modeling determine that a properly screened and age- and sex-structured total founding population (Nc) of roughly 14,000 to 44,000 people would be sufficient to survive such journeys in good health. A safe and well-considered Nc figure is 40,000: an Interstellar Migrant Population (IMP) composed of an effective population (Ne) of 23,400 reproductive males and females, the rest being pre- or post-reproductive individuals. This number would maintain good health over five generations despite (a) increased inbreeding resulting from a relatively small human population, (b) depressed genetic diversity due to the founder effect, (c) demographic change through time and (d) the expectation of at least one severe population catastrophe over the 5-generation voyage.
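Two of the standard population-genetics formulas behind calculations like these are Wright's effective-size correction for unequal sex ratios, Ne = 4·Nm·Nf/(Nm + Nf), and the per-generation inbreeding increment 1/(2Ne). The even sex ratio below is an illustrative assumption:

```python
def effective_population_size(n_males, n_females):
    """Wright's formula for Ne under an unequal breeding sex ratio."""
    return 4 * n_males * n_females / (n_males + n_females)

def inbreeding_after(generations, ne):
    """Cumulative inbreeding coefficient, assuming an increment of
    1/(2*Ne) per generation of random mating."""
    return 1 - (1 - 1 / (2 * ne)) ** generations

# With the abstract's effective population of 23,400 breeders (taking an
# even sex ratio for illustration), inbreeding over a 5-generation voyage
# stays negligible.
ne = effective_population_size(11_700, 11_700)   # = 23,400
f5 = inbreeding_after(5, ne)
```

For Ne = 23,400 the coefficient after five generations is on the order of 10⁻⁴, far below levels associated with inbreeding depression; the same functions show how quickly a few-hundred-person crew would fare worse.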
Performance Estimation of Networked Business Models: Case Study on a Finnish eHealth Service Project
Marikka Heikkilä
2014-08-01
Purpose: The objective of this paper is to propose and demonstrate a framework for estimating performance in a networked business model. Design/methodology/approach: Our approach is design science, utilising action research in studying a case of four independent firms in the Health & Wellbeing sector aiming to jointly provide a new service for business and private customers. The duration of the research study is three years. Findings: We propose that a balanced set of performance indicators can be defined by paying attention to all main components of the business model, enriched with measures of network collaboration. The results highlight the importance of measuring all main components of the business model, as well as the business network partners' views on trust, contracts and fairness. Research implications: This article contributes to the business model literature by combining business modelling with performance evaluation. The article points out that it is essential to create metrics that can be applied to evaluate and improve business model blueprints, but that it is also important to measure aspects of business collaboration. Practical implications: Companies have already adopted the Business Model Canvas or similar tools to innovate new business models. We suggest that companies continue their business model innovation work by agreeing on a set of performance metrics, building on the business model components enriched with measures of network collaboration. Originality/value: This article contributes to business model literature and praxis by combining business modelling with performance evaluation.
Julie eCarcaud
2016-01-01
The function of parallel neural processing is a fundamental problem in neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance.
Preliminary analysis of construction investment estimation for petrochemical projects
陈瑜芳
2011-01-01
The construction investment estimate of a petrochemical project is one of the important tasks in the early stage of the project. Based on an analysis of the characteristics of petroleum refining and petrochemical engineering projects and of the importance and scope of investment estimation, the preparation of the investment estimate should focus on steps such as data collection, definition of estimation principles, selection of preparation methods, and determination of the bill of quantities. An example illustrates how to select an appropriate estimation method to perform the project investment estimate according to specific project conditions. Issues requiring attention in capital investment estimation are also presented.
Mededovic Thagard, Selma; Stratton, Gunnar R.; Dai, Fei; Bellona, Christopher L.; Holsen, Thomas M.; Bohl, Douglas G.; Paek, Eunsu; Dickenson, Eric R. V.
2017-01-01
contributions from the three general mechanisms, it was determined that surface concentration is the dominant factor determining a compound's treatability. These insights indicate that PWT would be most viable for the treatment of surfactant-like contaminants.
L.L. Hrytsenko
2015-03-01
The aim of the article. The aim of the article is to improve the scientific and methodical approach to the complex estimation of risks of innovative projects implemented through public-private partnership. The results of the analysis. Different models of state-business cooperation implemented through public-private partnership are identified as an effective instrument for a special group of innovative projects: infrastructure projects. In the realization of public-private partnerships, such features of infrastructure innovation projects as the long-term and contractual character of the relationships play one of the most important roles. Implementation becomes possible through special forms of public-private partnership and financing models, the realization of partner relations on a competitive basis, and the diversification of responsibility and risk between the participants of the public-private partnership. The risk of public-private partnership is recommended to be understood as an economic category characterizing the probabilistic assessment of an objective threat that the participants lose material, financial or intellectual resources, fall short of subjectively expected revenue, incur additional expenditures, or otherwise deviate from the predicted efficiency parameters. This can happen as a result of decisions about public- and private-sector cooperation and as a result of changes in investment policy and in the innovation and investment environments of infrastructure projects. The realization of innovative projects through public-private partnership is connected with significant risks that occur at each stage of preparation and realization. The long investment horizon of innovative infrastructure projects, their high value, the large number of participants, and the complex system of financing and organizational structure mean that a very urgent task in the realization of such projects is the necessity of taking into account its investment
Bhatnagar, Tarun; Dutta, Tapati; Stover, John; Godbole, Sheela; Sahu, Damodar; Boopathi, Kangusamy; Bembalkar, Shilpa; Singh, Kh. Jitenkumar; Goyal, Rajat; Pandey, Arvind; Mehendale, Sanjay M.
2016-01-01
Models are designed to provide evidence for strategic program planning by examining the impact of different interventions on projected HIV incidence. We employed the Goals Model to fit the HIV epidemic curves in the Andhra Pradesh, Maharashtra and Tamil Nadu states of India, where the HIV epidemic is considered to have matured and to be in a declining phase. Input data for the Goals Model consisted of demographic, epidemiological, transmission-related and risk-group-specific behavioral parameters. The HIV prevalence curves generated by the Goals Model for each risk group in the three states were compared with the epidemic curves generated by the Estimation and Projection Package (EPP) that the national program routinely uses. In all three states, the HIV prevalence trends for high-risk populations simulated by the Goals Model matched well with those derived using state-level HIV surveillance data in the EPP. However, trends for the low- and medium-risk populations differed between the two models. This highlights the need to generate more representative and robust data for these sub-populations, and to consider structural changes to the modeling equations and parameters in the Goals Model, in order to use it effectively to assess the impact of future HIV control strategies in various sub-populations in India at the sub-national level. PMID:27711212
Vichit-Vadakan, Nuntavarn; Vajanapoom, Nitaya; Ostro, Bart
2008-09-01
Air pollution data in Bangkok, Thailand, indicate that levels of particulate matter with aerodynamic diameter ≤10 μm (PM10) are substantial. We examined the association between daily mortality and air pollution in Bangkok, Thailand. The study period extended from 1999 to 2003, for which the Ministry of Public Health provided the mortality data. Measures of air pollution were derived from air monitoring stations, and information on temperature and relative humidity was obtained from the weather station in central Bangkok. The statistical analysis followed the common protocol of the multicity PAPA (Public Health and Air Pollution in Asia) project in using a natural cubic spline model with smooths of time and weather. The excess risk for non-accidental mortality was 1.3% [95% confidence interval (CI), 0.8-1.7] per 10 μg/m³ of PM10, with higher excess risks for cardiovascular mortality and mortality above age 65 of 1.9% (95% CI, 0.8-3.0) and 1.5% (95% CI, 0.9-2.1), respectively. In addition, the effects of PM10 appear to be consistent in multipollutant models. The results suggest strong associations between several different mortality outcomes and PM10. In many cases, the effect estimates were higher than those typically reported in Western industrialized nations.
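Excess risks of this kind come from log-linear time-series models, where a coefficient beta on PM10 maps to an excess risk of (exp(beta * delta) - 1) * 100 % per delta μg/m³. A quick back-of-envelope check of the reported 1.3% figure:

```python
import math

# Back out the log-linear coefficient consistent with the reported
# excess risk of 1.3% per 10 ug/m3 of PM10.
delta = 10.0                 # increment in ug/m3
excess_risk_pct = 1.3
beta = math.log(1 + excess_risk_pct / 100) / delta

# Round trip: implied excess risk for a 20 ug/m3 increase.
er_20 = (math.exp(beta * 20) - 1) * 100
```

Because the model is multiplicative, the risk for 20 μg/m³ is slightly more than double the 10 μg/m³ figure (about 2.6% rather than 2.6% exactly being 2×1.3%), a detail that matters when scaling effect estimates across pollution ranges.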
Majumdar, A. K.; Hedayat, A.
2015-01-01
This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) to teach the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite-volume-based thermo-fluid system network analysis code, developed at NASA Marshall Space Flight Center, and is extensively used by NASA, the Department of Defense, and the aerospace industry for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher-education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code in thermal systems design and fluid engineering courses and to encourage instructors to utilize the code for class assignments as well as senior design projects.
D. N. Bird
2008-04-01
Some climate scientists are questioning whether converting non-forest land to forest (afforestation or reforestation) is an effective climate change mitigation option. The discussion focuses particularly on areas where the new forest is primarily coniferous and there is a significant amount of snow, since the increased climate forcing due to the change in albedo may counteract the decreased climate forcing due to carbon dioxide removal.
In this paper, we develop a stand-based model that combines changes in surface albedo, solar radiation, latitude, cloud cover and carbon sequestration. We also develop a procedure to convert carbon stock changes to equivalent climatic forcing, and climatic forcing to equivalent carbon stock changes. Using the model, we investigate the sensitivity of the combined effects of surface albedo changes and carbon stock changes to the model parameters. The model is sensitive to the amount of cloud, atmospheric absorption, timing of canopy closure and carbon sequestration rate, among other factors. The sensitivity of the model is investigated at one Canadian site, and the model is then tested at numerous sites across Canada.
In general, we find that the change in albedo reduces the carbon sequestration benefits by approximately 30% over 100 years, but this is not drastic enough to suggest that afforestation or reforestation should be abandoned as a climate change mitigation option. This occurs because the forests grow in places where there is a significant amount of cloud in winter. In addition, variations in sequestration rate seem to be counterbalanced by the amount and timing of canopy closure.
We close by speculating that the effects of albedo may also be significant at lower latitudes, where there are fewer clouds and extended dry seasons. These conditions make grasses light coloured, and when irrigated crops, dark forests or other vegetation such as
Pandit, J J; Andrade, J; Bogod, D G; Hitchman, J M; Jonker, W R; Lucas, N; Mackay, J H; Nimmo, A F; O'Connor, K; O'Sullivan, E P; Paul, R G; Palmer, J H MacG; Plaat, F; Radcliffe, J J; Sury, M R J; Torevell, H E; Wang, M; Hainsworth, J; Cook, T M
2014-10-01
We present the main findings of the 5th National Audit Project on accidental awareness during general anaesthesia. Incidences were estimated using reports of accidental awareness as the numerator, and a parallel national anaesthetic activity survey to provide denominator data. The incidence of certain/probable and possible accidental awareness cases was ~1:19 600 anaesthetics (95% CI 1:16 700-23 450). However, there was considerable variation across subtypes of techniques and subspecialties. The incidence with neuromuscular blockade was ~1:8200 (1:7030-9700), and without it was ~1:135 900 (1:78 600-299 000). The cases of accidental awareness during general anaesthesia reported to the 5th National Audit Project were overwhelmingly cases of unintended awareness during neuromuscular blockade. The incidence of accidental awareness during caesarean section was ~1:670 (1:380-1300). Two thirds (82; 66%) of accidental awareness experiences arose in the dynamic phases of anaesthesia, namely induction of and emergence from anaesthesia. During induction of anaesthesia, contributory factors included: use of thiopental; rapid sequence induction; obesity; difficult airway management; neuromuscular blockade; and interruptions of anaesthetic delivery during movement from anaesthetic room to theatre. During emergence from anaesthesia, residual paralysis was perceived by patients as accidental awareness, and was commonly related to a failure to ensure full return of motor capacity. One third (43; 33%) of accidental awareness events arose during the maintenance phase of anaesthesia, most due to problems at induction or towards the end of anaesthesia. Factors increasing the risk of accidental awareness included: female sex; age (younger adults, but not children); obesity; anaesthetist seniority (junior trainees); previous awareness; out-of-hours operating; emergencies; type of surgery (obstetric, cardiac, thoracic); and use of neuromuscular blockade. The following factors were
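The numerator/denominator approach described above can be sketched numerically. The minimal Python example below uses illustrative counts (the exact NAP5 case count and activity-survey denominator are assumptions here, chosen only to land near the reported ~1:19 600) and a normal approximation to the Poisson count for the confidence interval:

```python
import math

def incidence_with_ci(events, denominator, z=1.96):
    """Estimate an incidence rate, expressed as '1 in N', with an
    approximate 95% CI from a normal approximation to the Poisson count.
    Illustrative only; NAP5's denominator came from an activity survey."""
    rate = events / denominator
    se = math.sqrt(events) / denominator       # Poisson standard error of the rate
    lo_rate, hi_rate = rate - z * se, rate + z * se
    # a higher rate means a smaller 'N' in '1 in N'
    return 1 / rate, 1 / hi_rate, 1 / lo_rate

# Assumed counts, roughly matching the reported ~1:19 600 overall incidence
one_in, one_in_lo, one_in_hi = incidence_with_ci(events=141, denominator=2_766_600)
```

Exact Poisson intervals would be slightly asymmetric, but the normal approximation is adequate at this event count.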
Verhulst, Kristal R.; Karion, Anna; Kim, Jooil; Salameh, Peter K.; Keeling, Ralph F.; Newman, Sally; Miller, John; Sloop, Christopher; Pongetti, Thomas; Rao, Preeti; Wong, Clare; Hopkins, Francesca M.; Yadav, Vineet; Weiss, Ray F.; Duren, Riley M.; Miller, Charles E.
2017-07-01
We report continuous surface observations of carbon dioxide (CO2) and methane (CH4) from the Los Angeles (LA) Megacity Carbon Project during 2015. We devised a calibration strategy, methods for selection of background air masses, calculation of urban enhancements, and a detailed algorithm for estimating uncertainties in urban-scale CO2 and CH4 measurements. These methods are essential for understanding carbon fluxes from the LA megacity and other complex urban environments globally. We estimate background mole fractions entering LA using observations from four extra-urban sites including two marine sites located south of LA in La Jolla (LJO) and offshore on San Clemente Island (SCI), one continental site located in Victorville (VIC), in the high desert northeast of LA, and one continental/mid-troposphere site located on Mount Wilson (MWO) in the San Gabriel Mountains. We find that a local marine background can be established to within ~1 ppm CO2 and ~10 ppb CH4 using these local measurement sites. Overall, atmospheric carbon dioxide and methane levels are highly variable across Los Angeles. Urban and suburban sites show moderate to large CO2 and CH4 enhancements relative to a marine background estimate. The USC (University of Southern California) site near downtown LA exhibits median hourly enhancements of ~20 ppm CO2 and ~150 ppb CH4 during 2015 as well as ~15 ppm CO2 and ~80 ppb CH4 during mid-afternoon hours (12:00-16:00 LT, local time), which is the typical period of focus for flux inversions. The estimated measurement uncertainty is typically better than 0.1 ppm CO2 and 1 ppb CH4 based on the repeated standard gas measurements from the LA sites during the last 2 years, similar to Andrews et al. (2014). The largest component of the measurement uncertainty is due to the single-point calibration method; however, the uncertainty in the background mole fraction is much larger than the measurement uncertainty. The background uncertainty for the marine
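The enhancement calculation described above (observation minus background, summarized over the mid-afternoon window) can be sketched as follows; the arrays are synthetic stand-ins, not the actual LA Megacity records:

```python
import numpy as np

def median_enhancement(obs, background, hours, window=(12, 16)):
    """Median urban enhancement (observation minus background) over the
    mid-afternoon window typically used for flux inversions."""
    mask = (hours >= window[0]) & (hours <= window[1])
    return np.median(obs[mask] - background[mask])

# Synthetic hourly records standing in for a site like USC (30 days of data)
hours = np.tile(np.arange(24), 30)
rng = np.random.default_rng(0)
background = 400.0 + rng.normal(0.0, 1.0, hours.size)   # marine background CO2, ppm
afternoon = (hours >= 12) & (hours <= 16)
obs = background + np.where(afternoon, 15.0, 25.0)      # larger enhancement at night
afternoon_dco2 = median_enhancement(obs, background, hours)
```

In practice the background series would itself carry the ~1 ppm uncertainty noted above, which dominates the error budget.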
Yesilova, Abdullah; Yilmaz, Ayhan
In this study, Poisson regression, negative binomial regression and generalized estimating equations (GEE) were applied to repeated measurements of count data obtained from the sexual behaviors of ram lambs. Negative binomial regression was more effective at handling the overdispersion that biases parameter estimates in Poisson regression. The generalized estimating equations were used for analyzing the repeated categorical data, with GEE estimates obtained using an exchangeable working correlation structure. The GEE analyses indicated that flehmen (lip curl) response, tail raising, mount duration, vocalization and weight of the ram lamb were statistically significant (p<0.05) for mount frequency, whereas anogenital sniffing was not significant.
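As a rough illustration of the Poisson fit and of the dispersion check that motivates the negative binomial alternative, here is a self-contained sketch on synthetic data (not the ram-lamb dataset); in practice a library such as statsmodels would handle the GEE step as well:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson log-linear model by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                  # mean under the log link
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Synthetic counts (not the ram-lamb data): y ~ Poisson(exp(0.5 + 0.8 x))
rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))
beta = poisson_irls(X, y)

# Pearson dispersion: values well above 1 signal overdispersion, i.e. the
# situation where a negative binomial model is preferable to Poisson
mu_hat = np.exp(X @ beta)
dispersion = np.sum((y - mu_hat) ** 2 / mu_hat) / (len(y) - X.shape[1])
```

Here the data are genuinely Poisson, so the dispersion statistic sits near 1; on overdispersed behavioral counts it would exceed 1 substantially.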
Dominik Fischer
2011-11-01
BACKGROUND: In the Old World, sandfly species of the genus Phlebotomus are known vectors of Leishmania, Bartonella and several viruses. Recent sandfly catches and autochthonous cases of leishmaniasis hint at spreading tendencies of the vectors towards Central Europe. However, studies addressing the potential future distribution of sandflies in the light of a changing European climate are missing. METHODOLOGY: Here, we modelled bioclimatic envelopes using MaxEnt for five species with proven or assumed vector competence for Leishmania infantum, which are predominantly located either in (south-)western Europe (Phlebotomus ariasi, P. mascittii and P. perniciosus) or in south-eastern Europe (P. neglectus and P. perfiliewi). The determined bioclimatic envelopes were transferred to two climate change scenarios (A1B and B1) for Central Europe (Austria, Germany and Switzerland) using data from the regional climate model COSMO-CLM. We detected the most likely path of natural dispersal ("least-cost path") for each species and hence determined the accessibility of potential future climatically suitable habitats by integrating landscape features, projected changes in climatic suitability, and wind speed. RESULTS AND RELEVANCE: The results indicate that the Central European climate will become increasingly suitable, especially for those vector species with a current south-western focus of distribution. In general, the highest suitability of Central Europe is projected for all species in the second half of the 21st century, except for P. perfiliewi. Nevertheless, we show that sandflies will hardly be able to occupy their climatically suitable habitats entirely, due to their limited natural dispersal ability. A northward spread of species with a south-eastern focus of distribution may be constrained, but not completely prevented, by the Alps. Our results can be used to target monitoring systems to the projected risk zones of potential sandfly establishment. This is urgently needed for
Quilcaille, Yann; Gasser, Thomas; Ciais, Philippe; Lecocq, Franck; Janssens-Maenhout, Greet; Mohr, Steve; Andres, Robert J.; Bopp, Laurent
2016-04-01
There are different methodologies for estimating CO2 emissions from fossil fuel combustion. The term "methodology" refers to the way subtypes of fossil fuels are aggregated and to their implied emission factors. This study investigates how the choice of methodology affects historical and future CO2 emissions, and the ensuing climate change projections. First, we use fossil fuel extraction data from the Geologic Resources Supply-Demand model of Mohr et al. (2015). We compare four different methodologies for transforming amounts of fossil fuel extracted into CO2 emissions, based on the methodologies used by Mohr et al. (2015), CDIAC, EDGARv4.3, and IPCC 1996. We thus obtain 4 emissions pathways for the historical period 1750-2012, which we compare to the emissions time series from EDGARv4.3 (1970-2012) and CDIACv2015 (1751-2011). Using the 3 scenarios of Mohr et al. (2015) for projections until 2300, under the assumption of an Early (Low emission), Best Guess or Late (High emission) extraction peak, we obtain 12 different pathways of CO2 emissions over 1750-2300. Second, we extend these CO2-only pathways to all co-emitted and climatically active species. Co-emission ratios for CH4, CO, BC, OC, SO2, VOC, N2O, NH3 and NOx are calculated on the basis of the EDGAR v4.3 dataset, and are then used to produce complementary pathways of non-CO2 emissions from fossil fuel combustion only. Finally, the 12 emissions scenarios are integrated using the compact Earth system model OSCAR v2.2, in order to quantify the impact of the selected driver on climate change projections. We find historical cumulative fossil fuel CO2 emissions from 1750 to 2012 ranging from 365 GtC to 392 GtC, depending on the methodology used to convert fossil fuel into CO2 emissions. The impact of the methodology increases markedly in the projections: for the High emission scenario with a Late fuel extraction peak, cumulative CO2 emissions from 1700 to 2100 range from 1505 GtC to 1685 GtC; this corresponds
Gabriel A. Pinilla Agudelo
2013-09-01
A methodological proposal for estimating environmental flows in large projects licensed by the Agencia Nacional de Licencias Ambientales (ANLA) for Colombian rivers was developed. The project is the result of an agreement between the Ministerio de Ambiente y Desarrollo Sostenible (MADS) and the Universidad Nacional de Colombia, Bogotá (UNC). The proposed method begins with an evaluation of hydrological criteria, continues with a hydraulic and water-quality validation, and then determines habitat integrity. This is an iterative process that compares conditions before and after project construction and yields the magnitude of a monthly flow that, besides preserving the ecological functions of the river, guarantees the water uses downstream. Regarding the biotic component, the proposal includes the establishment and monitoring of biotic integrity indices for four aquatic communities (periphyton, macroinvertebrates, riparian vegetation, and fish), through which the medium- and long-term effects of flow reduction can be assessed. We present the results of applying the methodology to several projects licensed by the MADS.
Low, Tchern Kuang Lambert; Tay, Kai Hong; Fang, Tina; Fung, Daniel Shuen Sheng
2017-03-01
Patients admitted to a psychiatric hospital commonly suffer from comorbid medical problems which sometimes require urgent medical attention. Twenty-two percent of emergency medical transfers from the Institute of Mental Health (IMH) to the emergency rooms of general hospitals were preventable and could be managed at IMH itself. We undertook a quality improvement project to understand the reasons behind such preventable referrals and implemented changes to address this. Using the model for improvement, we deconstructed our processes and analysed root causes for such preventable referrals. Thereafter changes were implemented with Plan-Do-Study-Act (PDSA) cycles to analyse their outcomes. During the 6-month study period, we achieved a 100% reduction in preventable referrals through strategies aimed at reducing pressure on our on-call physicians in the making of medical decisions, maximising usage of our medical resources, constant education and raising awareness of this issue. Reducing preventable transfer of inpatients from a psychiatric hospital to the emergency departments of general hospitals is a worthwhile endeavour. Such initiatives optimise use of healthcare resources, improve patient care and increase satisfaction.
Discussion on Estimation of Responsibility Cost of Highway Projects
李海峰
2011-01-01
The responsibility cost of a project is the core of fine-grained project management, and its estimation is particularly important. Because highway projects involve long construction periods, changing conditions, many types of materials and equipment, and large differences in geological conditions, cost estimation must be done carefully. This article explores how to carry out responsibility cost estimation for highway projects.
Paynter, S.; Nachabe, M.
2008-12-01
One of the most important tools in water management is the accurate forecast of both long-term and short-term extreme values for both flood and drought conditions. Traditional methods of trend detection, such as ordinary least squares (OLS) or the Mann-Kendall test, are not well suited to hydrologic systems, while traditional methods of predicting extreme flood and drought frequencies, such as distribution fitting without parameter covariates, may be highly inaccurate in lake-type systems, especially in the short term. In the case of lakes, traditional frequency return estimates assume extremes are independent of trend or starting lake stage. However, due to the significant autocorrelation of lake levels, the initial stage can have a significant influence on the severity of a given event. The aim of this research was to accurately identify the direction and magnitude of trends in flood and drought stages and to provide more accurate predictions of both long-term and short-term flood and drought stage return frequencies, utilizing the generalized extreme value (GEV) distribution with time and starting stage as covariates. All of the lakes studied evidenced either no trend or very small trends unlikely to significantly alter predictions of future flood or drought return levels. However, for all of the lakes, significant improvement in the prediction of extremes was obtained by including starting lake stage as a covariate. Traditional methods significantly overpredict flood or drought stages when starting lake stages are low and underpredict them when starting stages are high. The difference between these predictions can be nearly two meters, a significant amount in urbanized watersheds in areas of the world with flat topography. Differences of near two meters can mean significant alterations in evacuation or other water management decisions. In addition to improving the prediction of extreme events, utilizing GEV with time or starting stage
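The return-level calculation with a starting-stage covariate can be sketched directly from the GEV quantile function; all coefficients below are hypothetical, chosen only to show how a higher starting stage shifts the predicted flood stage:

```python
import math

def gev_return_level(p, mu, sigma, xi):
    """Level exceeded with annual probability p under GEV(mu, sigma, xi)."""
    y = -math.log(1.0 - p)                     # -log(non-exceedance probability)
    if abs(xi) < 1e-9:                         # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Hypothetical covariate model: the GEV location mu shifts linearly with
# the starting lake stage (mu = b0 + b1 * start_stage); nothing is fitted here.
def flood_return_level(p, start_stage, b0=10.0, b1=0.6, sigma=0.5, xi=0.1):
    return gev_return_level(p, mu=b0 + b1 * start_stage, sigma=sigma, xi=xi)

low = flood_return_level(0.01, start_stage=1.0)   # 100-year flood stage, low start
high = flood_return_level(0.01, start_stage=4.0)  # same event from a high start
```

With these illustrative coefficients, a three-unit difference in starting stage moves the 100-year flood level by 1.8 m, on the order of the "nearly two meters" discussed above.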
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To overcome the limited knowledge of the master-slave system and the adverse effects of the external disturbances on the synchronization, radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Based on sliding mode techniques and neural network theory, a model-free adaptive sliding mode controller is then designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is a control strategy for generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to unknown external disturbances, which only requires that the master system have the same fractional orders as the slave system. Moreover, the proposed method achieves all kinds of generalized projective chaos synchronization by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
Duncan, D D; Muñoz, B; Bandeen-Roche, K; West, S K
1997-04-01
To estimate the numerical value of the ocular-ambient exposure ratio (OAER) (the ratio of facial exposure to that on a horizontal plane) as a function of wavelength band, season, and job category, and to establish the effect of various modifiers, such as geography and the use of hats, for use in general population studies. Two hundred sixty-four persons within several job categories representing the jobs in our Salisbury, Maryland, population were instrumented with ultraviolet-B (UVB) and visible-band sensors for one complete day. Studies were done over all four seasons, both with and without hats. OAERs in the UVB wavelength band are generally higher than in the visible (13% versus 6%), display no significant variation with job category, show a seasonal effect (highest in winter-spring [18%], lowest in summer [10%], and intermediate in fall [14%]), and are reduced 34% by the use of hats. In the visible wavelength band, OAERs are affected weakly by job function, although this variation is not significant; they display a seasonal effect over three seasons, as in the UVB, and are not affected significantly by the use of hats. In neither the UVB nor the visible portion of the spectrum did the authors find an effect on the OAER due to photophobia or eye color. With this exposure model, the authors have at their disposal a valuable tool for exploring the relation between UVB, UVA, and visible radiation and a number of age-related eye diseases.
Bazhenov Viktor Ivanovich
2015-09-01
The starting stage of tender procedures in Russia with the participation of foreign suppliers makes it worthwhile to develop economic methods for comparing technical solutions in the construction field. The article describes an example of practical Life Cycle Cost (LCC) evaluations with respect to Present Value (PV) determination. These allow an investor to assess long-term projects (here, 25 years) as commercially profitable, taking into account the inflation rate, interest rate and real discount rate (here, 5%). For the economic analysis, the air-blower station of a WWTP was selected as a significant energy consumer. The technical variants compared are three blower types: 1 - multistage without control, 2 - multistage with VFD control, 3 - single-stage with double-vane control. The LCC estimation shows the last variant to be the most attractive and cost-effective for investment, with savings of 17.2% (versus variant 1) and 21.0% (versus variant 2) under the adopted duty conditions and evaluations of capital costs (Cic + Cin) together with the related annual expenditures (Ce + Co + Cm). The adopted duty conditions include daily and seasonal fluctuations of air flow, which is the reason for the adopted energy consumptions of, in kW·h: 2158 (variant 1), 1743-2201 (variant 2), and 1058-1951 (variant 3). The article refers to Europump guide tables in order to simplify the search for sophisticated factors (Cp/Cn, df), which can be useful for economic analyses in Russia. An example of evaluations connected with energy-efficient solutions is given, but the approach also applies to other resource savings, such as all types of fuel. The article concludes by endorsing the use of the LCC indicator jointly with the method of discounted cash flows, which satisfies the investor's need for a source of return when making technical and economic comparisons.
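The PV-based LCC comparison works as sketched below; the capital and annual costs are invented round numbers, not the paper's Cic/Cin/Ce/Co/Cm figures, but the 25-year horizon and 5% real discount rate follow the abstract:

```python
def life_cycle_cost(capital, annual_cost, years=25, discount_rate=0.05):
    """Present value of capital cost plus discounted annual expenditures."""
    pv_annual = sum(annual_cost / (1 + discount_rate) ** t
                    for t in range(1, years + 1))
    return capital + pv_annual

# Two hypothetical blower variants: cheaper to buy vs cheaper to run
lcc_fixed = life_cycle_cost(capital=100_000, annual_cost=40_000)  # no control
lcc_vfd   = life_cycle_cost(capital=140_000, annual_cost=32_000)  # VFD control
```

Over a 25-year horizon the lower running cost outweighs the higher capital cost, which is the kind of trade-off the LCC comparison in the article quantifies.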
Project Cost Management under the General Contracting Mode
李晓平
2014-01-01
With the continuous development of China's construction industry, the general contracting mode for engineering projects has also developed. In modern construction project management, general contracting is a new type of management mode: it makes up for the deficiencies of the traditional contracting mode and promotes the development of project management work.
Madhura Vijay Rane
2017-01-01
Conclusion: Salivary calcium levels can be used as a biomarker to assess periodontal disease progression. Early diagnosis of periodontal disease through estimation of salivary calcium levels can help prevent gingivitis or periodontitis by enabling timely therapeutic measures.
谈元鹏; 许刚; 赵妙颖
2015-01-01
In order to achieve efficient and accurate cost estimation for power engineering projects, a cost estimation method based on a Random Weighted Deep Neural Learning (RWDNL) algorithm is proposed. By building a multi-layer neural network with random outer weights and a small central layer, effective features are extracted from massive engineering data and power engineering project cost estimation is realized through neural network deep learning. Numerical experiments demonstrate that the method greatly improves the accuracy and speed of project cost estimation and achieves satisfactory generalization ability.
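The idea of random outer weights with a learned layer resembles extreme-learning-machine-style random-feature networks; the sketch below is that simpler, single-hidden-layer analogue on synthetic data, not the paper's actual RWDNL architecture:

```python
import numpy as np

def fit_random_weight_net(X, y, hidden=64, scale=2.0, seed=0):
    """Random-feature network: hidden weights are drawn once and frozen;
    only the output layer is solved, by ordinary least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=scale, size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                      # fixed random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic 'cost' surface depending nonlinearly on two project features
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = fit_random_weight_net(X, y)
mse = np.mean((predict(X, W, b, beta) - y) ** 2)
```

Because only the output layer is trained, fitting reduces to a single least-squares solve, which is what makes such random-weight schemes fast on large datasets.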