WorldWideScience

Sample records for global output-error optimization

  1. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages, such as accuracy, ease of portability (e.g. hand-held), and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
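    The core of the method described above — fitting a pressure-error model to the difference between GPS ground speed and pressure-derived airspeed — can be sketched in a few lines. This is an illustrative simplification, not NASA's implementation: it assumes zero wind, an invented quadratic error model, and ordinary least squares in place of the full output-error formulation with confidence intervals.

```python
import numpy as np

def fit_airspeed_error_model(v_indicated, v_gps, order=2):
    """Least-squares fit of a polynomial pressure-error model:
    v_gps - v_indicated ~ a0 + a1*v + a2*v**2 (zero-wind simplification)."""
    A = np.vander(v_indicated, order + 1, increasing=True)  # columns [1, v, v^2]
    coeffs, *_ = np.linalg.lstsq(A, v_gps - v_indicated, rcond=None)
    return coeffs

# Synthetic demo with a known (invented) error model
rng = np.random.default_rng(0)
v_ind = rng.uniform(60.0, 160.0, 200)                  # indicated airspeed, kts
a_true = np.array([2.0, -0.01, 1e-4])                  # hypothetical coefficients
v_gps = v_ind + a_true @ np.vander(v_ind, 3, increasing=True).T \
        + rng.normal(0.0, 0.1, v_ind.size)             # GPS ground speed + noise
a_hat = fit_airspeed_error_model(v_ind, v_gps)
```

Given enough data points spanning the airspeed range, the recovered coefficients reproduce the injected error model closely; a real implementation would also propagate the fit covariance to obtain the confidence bounds mentioned in the abstract.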

  2. Minimum Symbol Error Rate Detection in Single-Input Multiple-Output Channels with Markov Noise

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.

    2005-01-01

    Minimum symbol error rate detection in Single-Input Multiple-Output (SIMO) channels with Markov noise is presented. The special case of zero-mean Gauss-Markov noise is examined more closely, as it only requires knowledge of the second-order moments. In this special case, it is shown that optimal detection...

  3. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and optimization of structural systems.

  4. Design of optimal input–output scaling factors based fuzzy PSS using bat algorithm

    Directory of Open Access Journals (Sweden)

    D.K. Sambariya

    2016-06-01

    In this article, a fuzzy logic based power system stabilizer (FPSS) is designed by tuning its input–output scaling factors. Two input signals to the FPSS are considered, change of speed and change in power, and the output signal is a correcting voltage signal. Determining the normalizing factors of these signals is posed as an optimization problem that minimizes the integral of square error in single-machine and multi-machine power systems. These factors are optimally determined with the bat algorithm (BA) and used as the scaling factors of the FPSS. The performance of the power system with such a BA-based FPSS (BA-FPSS) is compared to the responses with a conventional FPSS, a Harmony Search Algorithm based FPSS (HSA-FPSS) and a Particle Swarm Optimization based FPSS (PSO-FPSS). The systems considered are a single machine connected to an infinite bus, a two-area 4-machine 10-bus system, and the IEEE New England 10-machine 39-bus power system. The comparison is carried out in terms of the integral of time-weighted absolute error (ITAE), integral of absolute error (IAE) and integral of square error (ISE) of the speed response for systems with FPSS, HSA-FPSS and BA-FPSS. The superior performance of systems with BA-FPSS is established over eight plant conditions of each system, representing a wide range of operating conditions.
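    The three cost criteria used in this comparison (ISE, IAE, ITAE) are simple integrals of the error signal. A minimal sketch, with a hypothetical exponentially decaying error signal standing in for a speed deviation:

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal integration (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def ise(t, e):  return _trapz(e**2, t)            # integral of square error
def iae(t, e):  return _trapz(np.abs(e), t)       # integral of absolute error
def itae(t, e): return _trapz(t * np.abs(e), t)   # time-weighted absolute error

# Example: decaying error e(t) = exp(-t) on [0, 10]
t = np.linspace(0.0, 10.0, 2001)
e = np.exp(-t)
# Analytic values on [0, inf): ISE = 0.5, IAE = 1.0, ITAE = 1.0
```

ITAE penalizes errors that persist late in the response, which is why it is often preferred for tuning stabilizers that must damp oscillations quickly.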

  5. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of these errors, their relative importance in comparison to other errors, and their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
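    The local minima error described above is easy to reproduce on a toy nonconvex objective: a deterministic gradient method converges to whichever local minimum's basin it starts in, so different starting points yield different objective scores. A minimal illustration (an invented one-dimensional function, not the paper's treatment-planning objective):

```python
# f has two local minima; plain gradient descent finds whichever basin it starts in.
def f(x):  return x**4 - 3*x**2 + x
def df(x): return 4*x**3 - 6*x + 1     # derivative of f

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_right = gradient_descent(1.0)    # lands in the local minimum near x ≈ 1.13
x_left  = gradient_descent(-1.0)   # lands in the global minimum near x ≈ -1.30
```

A stochastic method such as simulated annealing (or simple multi-start) can escape the shallower basin, which is the trade-off the paper quantifies clinically.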

  6. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    This article investigates system identification of a tiltrotor that is unstable in hover, using flight test data. The aircraft dynamics were described by a linear model defined in a body-fixed coordinate system. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. Both time and frequency domain formulations were applied for estimating model parameters. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. Finally, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.

  7. Error sensitivity to refinement: a criterion for optimal grid adaptation

    Science.gov (United States)

    Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo

    2017-12-01

    Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper presents a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented both for a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm, based on the new estimator and on the FreeFem++ software, that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.
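    The general idea of refining where an error indicator is largest can be illustrated on a one-dimensional toy problem. The sketch below greedily bisects the interval with the largest local interpolation-error indicator; note this is the plain local indicator the paper calls suboptimal, not the adjoint-weighted sensitivity it proposes, and the test function and node counts are arbitrary choices:

```python
import numpy as np

f = lambda x: np.tanh(20.0 * (x - 0.5))    # toy solution with a sharp internal layer

def midpoint_errors(x):
    """Per-interval indicator: deviation of linear interpolation from f at midpoints."""
    mid = 0.5 * (x[:-1] + x[1:])
    return np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))

def adapt(n_nodes):
    """Greedy adaptation: repeatedly bisect the interval with the largest indicator."""
    x = np.linspace(0.0, 1.0, 5)
    while x.size < n_nodes:
        i = int(np.argmax(midpoint_errors(x)))
        x = np.sort(np.append(x, 0.5 * (x[i] + x[i + 1])))
    return x

x_ad = adapt(41)                    # adaptive grid, 41 nodes
x_un = np.linspace(0.0, 1.0, 41)    # uniform grid, same node count
err_ad = midpoint_errors(x_ad).max()
err_un = midpoint_errors(x_un).max()
```

With the same node budget, the adaptive grid clusters nodes in the layer and achieves a much smaller maximum indicator than the uniform grid; an adjoint-weighted indicator would additionally target the nodes at whichever output functional defines the "global error".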

  8. Specification and Aggregation Errors in Environmentally Extended Input-Output Models

    NARCIS (Netherlands)

    Bouwmeester, Maaike C.; Oosterhaven, Jan

    This article considers the specification and aggregation errors that arise from estimating embodied emissions and embodied water use with environmentally extended national input-output (IO) models, instead of with an environmentally extended international IO model. Model specification errors result

  9. Effects of Measurement Error on the Output Gap in Japan

    OpenAIRE

    Koichiro Kamada; Kazuto Masuda

    2000-01-01

    Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual (i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...

  10. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R₂ measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, ¹⁵N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R₂ caused by noise. Good estimates of total R₂ uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ² minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in ¹⁵N R₂ values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ² minimization protocol, in which the Carver-Richards equation is used to fit the observed R₂ dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from ¹H R₂ measurements in which systematic errors are negligible. Although ¹H and ¹⁵N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τ_ex, and the fractional population, p_a) were constrained to globally fit all R₂ profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τ_ex and p_a as global parameters was not improved when these parameters were free to fit the R
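    The χ² fitting of dispersion profiles can be sketched with the fast-exchange (Luz-Meiboom) limit of the Carver-Richards equation, which keeps the example short. All parameter values below are invented for illustration; the fit exploits the fact that R₂₀ and Φ enter the model linearly once k_ex is fixed, so a grid search over k_ex plus linear least squares suffices:

```python
import numpy as np

def luz_meiboom(nu_cpmg, r20, phi, kex):
    """Fast-exchange (Luz-Meiboom) limit of the Carver-Richards equation:
    R2eff = R20 + (phi/kex) * (1 - (4*nu/kex) * tanh(kex/(4*nu)))."""
    x = kex / (4.0 * nu_cpmg)
    return r20 + (phi / kex) * (1.0 - np.tanh(x) / x)

# Synthetic, noise-free dispersion profile with invented parameters
nu = np.linspace(50.0, 1000.0, 20)     # CPMG field strengths, Hz
r2_obs = luz_meiboom(nu, r20=10.0, phi=3.0e4, kex=1500.0)

# chi^2 minimization: grid-search kex; r20 and phi are linear given kex
best_chi2, best_fit = np.inf, None
for kex in np.arange(500.0, 3001.0, 10.0):
    x = kex / (4.0 * nu)
    g = (1.0 - np.tanh(x) / x) / kex          # basis function multiplying phi
    A = np.column_stack([np.ones_like(nu), g])
    coef, *_ = np.linalg.lstsq(A, r2_obs, rcond=None)
    chi2 = float(np.sum((A @ coef - r2_obs) ** 2))
    if chi2 < best_chi2:
        best_chi2, best_fit = chi2, (coef[0], coef[1], kex)
r20_fit, phi_fit, kex_fit = best_fit
```

In a real analysis the residuals would be weighted by the total (systematic + random) R₂ uncertainties the paper estimates, and confidence limits would come from the χ² surface around the optimum.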

  11. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    To address the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of treating the clearance of each kinematic pair as an equivalent virtual link is applied. A structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of joint clearance on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  12. Downscaling Global Weather Forecast Outputs Using ANN for Flood Prediction

    Directory of Open Access Journals (Sweden)

    Nam Do Hoai

    2011-01-01

    Downscaling global weather prediction model outputs to individual locations or local scales is a common practice in operational weather forecasting, used to correct the model outputs at subgrid scales. This paper presents an empirical-statistical downscaling method for precipitation prediction which uses a feed-forward multilayer perceptron (MLP) neural network. The MLP architecture was optimized by considering the physical bases that determine the circulation of atmospheric variables. Downscaled precipitation was then used as input to the super tank model (a runoff model) for flood prediction. The case study was conducted for the Thu Bon River Basin, located in Central Vietnam. Study results showed that the precipitation predicted by the MLP outperformed that obtained directly from model outputs or downscaled using multiple linear regression. Consequently, flood forecasts based on the downscaled precipitation were very encouraging. The combination of the downscaling model and the super tank model has been demonstrated to be a robust, simple-to-implement, and reliable technology for flood prediction.
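    The downscaling step amounts to regressing local precipitation on large-scale model predictors with a small feed-forward MLP. A minimal numpy sketch with an invented toy predictor-to-precipitation relation (not the paper's data, predictors, or architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented toy data: two large-scale predictors -> local precipitation anomaly
X = rng.uniform(-1.0, 1.0, (200, 2))
y = (X[:, 0]**2 + np.sin(3.0 * X[:, 1]))[:, None]   # hypothetical subgrid relation

# One-hidden-layer tanh MLP trained by full-batch gradient descent on MSE
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    return H, H @ W2 + b2           # network output

_, out = forward(X)
loss_start = float(np.mean((out - y)**2))
for _ in range(3000):
    H, out = forward(X)
    d_out = 2.0 * (out - y) / len(X)           # dLoss/d(output)
    gW2, gb2 = H.T @ d_out, d_out.sum(0)
    dH = (d_out @ W2.T) * (1.0 - H**2)         # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, out = forward(X)
loss_end = float(np.mean((out - y)**2))
```

The nonlinear hidden layer is what lets the MLP beat multiple linear regression on relations like this one, mirroring the comparison reported in the abstract.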

  13. Input-output interactions and optimal monetary policy

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    2011-01-01

    This paper deals with the implications of factor demand linkages for monetary policy design in a two-sector dynamic general equilibrium model. Part of the output of each sector serves as a production input in both sectors, in accordance with a realistic input–output structure. Strategic complementarities induced by factor demand linkages significantly alter the transmission of shocks and amplify the loss of social welfare under optimal monetary policy, compared to what is observed in standard two-sector models. The distinction between value added and gross output that naturally arises in this context is of key importance to explore the welfare properties of the model economy. A flexible inflation targeting regime is close to optimal only if the central bank balances inflation and value added variability. Otherwise, targeting gross output variability entails a substantial increase in the loss...

  14. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  15. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  16. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    Science.gov (United States)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-σ error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included, as well as recommendations for piloting technique.

  17. Total output operation chart optimization of cascade reservoirs and its application

    International Nuclear Information System (INIS)

    Jiang, Zhiqiang; Ji, Changming; Sun, Ping; Wang, Liping; Zhang, Yanke

    2014-01-01

    Highlights: • We propose a new double nested model for cascade reservoirs operation optimization. • We use two methods to extract the output distribution ratio. • The adopted two methods perform better than the widely used methods at present. • Stepwise regression method performs better than mean value method on the whole. - Abstract: With the rapid development of cascade hydropower stations in recent decades, the cascade system composed of multiple reservoirs needs unified operation and management. However, the output distribution problem has not yet been solved reasonably once the total output of the cascade system is obtained, which makes full utilization of the hydropower resources of cascade reservoirs very difficult. The discriminant criterion method is a traditional and common way to solve the output distribution problem at present, but some of its shortcomings cannot be ignored in practical application. In response to the above concern, this paper proposes a new total output operation chart optimization model and a new optimal output distribution model; the two models constitute a double nested model with the goal of maximizing power generation. This paper takes the cascade reservoirs of the Li Xianjiang River in China as an instance to obtain the optimal total output operation chart using the proposed double nested model and 43 years of historical runoff data; the progressive searching method and progressive optimality algorithm are used in solving the model. In order to put the obtained total output operation chart into practical operation, the mean value method and stepwise regression method are adopted to extract the output distribution ratios on the basis of the optimal simulation intermediate data. By comparison with the discriminant criterion method and the conventional method, the combined utilization of the total output operation chart and output distribution ratios presents better performance in terms of power generation and assurance rate, which proves it is an effective

  18. IMPACT OF TRADE OPENNESS ON OUTPUT GROWTH: COINTEGRATION AND ERROR CORRECTION MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    Asma Arif

    2012-01-01

    This study analyzed the long-run relationship between trade openness and output growth for Pakistan using annual time series data for 1972-2010. It follows the Engle and Granger cointegration analysis and error correction approach to analyze the long-run relationship between the two variables. The error correction term (ECT) for output growth and trade openness is significant at the 5% level of significance and indicates a positive long-run relation between the variables. This study has also analyzed the causality between trade openness and output growth by using the Granger causality test. The results show that there is a significant bi-directional relationship between trade openness and economic growth.
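    The Engle-Granger two-step procedure used here — estimate the long-run regression, then regress the differenced series on the lagged residual (the error correction term) — can be sketched on simulated data. Variable names and parameter values are invented for illustration; a real analysis would first test the residuals for stationarity:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 400
# Simulated I(1) openness series and an output series cointegrated with it
openness = np.cumsum(rng.normal(0.0, 1.0, T))                # random walk
output = 2.0 + 0.8 * openness + rng.normal(0.0, 0.5, T)      # invented long-run relation

# Step 1: long-run (cointegrating) regression; residuals = error correction term
X = np.column_stack([np.ones(T), openness])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
ect = output - X @ beta

# Step 2: ECM regression of d(output) on lagged ECT and d(openness)
dy, dx = np.diff(output), np.diff(openness)
Z = np.column_stack([np.ones(T - 1), ect[:-1], dx])
gamma, *_ = np.linalg.lstsq(Z, dy, rcond=None)
# gamma[1] < 0: deviations from the long-run relation are corrected over time
```

A significantly negative ECT coefficient is what the abstract reports as evidence of adjustment back to the long-run openness-output relation.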

  19. Relaxed error control in shape optimization that utilizes remeshing

    CSIR Research Space (South Africa)

    Wilke, DN

    2013-02-01

    Full Text Available Shape optimization strategies based on error indicators usually require strict error control for every computed design during the optimization run. The strict error control serves two purposes. Firstly, it allows for the accurate computation...

  20. Global Optimization of Minority Game by Smart Agents

    OpenAIRE

    Yan-Bo Xie; Bing-Hong Wang; Chin-Kun Hu; Tao Zhou

    2004-01-01

    We propose a new model of the minority game with so-called smart agents, such that the standard deviation and the total loss in this model reach the theoretical minimum values in the limit of long time. The smart agents use a trial-and-error method to make a choice but bring global optimization to the system, which suggests that economic systems may have the ability to self-organize into a highly optimized state by agents who are forced to make decisions based on inductive thinking for their lim...

  1. Robust output LQ optimal control via integral sliding modes

    CERN Document Server

    Fridman, Leonid; Bejarano, Francisco Javier

    2014-01-01

    Featuring original research from well-known experts in the field of sliding mode control, this monograph presents new design schemes for implementing LQ control solutions in situations where the output of the system is the only information provided about the state of the plant. This new design works under the restrictions of matched disturbances without losing its desirable features. On the cutting edge of optimal control research, Robust Output LQ Optimal Control via Integral Sliding Modes is an excellent resource for both graduate students and professionals involved in linear systems, optimal control, observation of systems with unknown inputs, and automation. In the theory of optimal control, the linear quadratic (LQ) optimal problem plays an important role due to its physical meaning, and its solution is easily given by an algebraic Riccati equation. This solution turns out to be restrictive, however, because of two assumptions: the system must be free from disturbances and the entire state vector must be kn...

  2. A Constraint programming-based genetic algorithm for capacity output optimization

    Directory of Open Access Journals (Sweden)

    Kate Ean Nee Goh

    2014-10-01

    Purpose: The manuscript presents an investigation into a constraint programming-based genetic algorithm for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both the machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study. The constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple mutually influencing constraints in resource availability, process flow and product demand. The findings prove that CPGA is a practical and efficient alternative to optimize the capacity output and to allow the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.

  3. Global optimization based on noisy evaluations: An empirical study of two statistical approaches

    International Nuclear Information System (INIS)

    Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric

    2008-01-01

    The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
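    For reference, the Expected Improvement acquisition that EI maximizes has a simple closed form under a Gaussian predictive distribution, such as a kriging posterior. This is the standard noise-free formula; handling noisy evaluations, the paper's subject, requires modifications precisely because "the best value observed so far" is then itself uncertain:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization, given a Gaussian prediction N(mu, sigma^2)
    at a candidate point and the best objective value observed so far, f_best."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)            # degenerate (deterministic) case
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_best - mu) * cdf + sigma * pdf
```

A kriging model supplies mu and sigma at each candidate point; the next (expensive) evaluation is placed where EI is largest, trading off predicted improvement against predictive uncertainty.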

  4. Optimizing microwave photodetection: input-output theory

    Science.gov (United States)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High-fidelity microwave photon counting is an important tool for various areas, from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we derive a general matching condition, depending on the different system rates, under which the measurement process is optimal.

  5. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy, forward-in-time neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic, which results in variable inter-event times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  6. Programmed evolution for optimization of orthogonal metabolic output in bacteria.

    Directory of Open Access Journals (Sweden)

    Todd T Eckdahl

    Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields - evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included the strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in

  7. Programmed Evolution for Optimization of Orthogonal Metabolic Output in Bacteria

    Science.gov (United States)

    Eckdahl, Todd T.; Campbell, A. Malcolm; Heyer, Laurie J.; Poet, Jeffrey L.; Blauch, David N.; Snyder, Nicole L.; Atchley, Dustin T.; Baker, Erich J.; Brown, Micah; Brunner, Elizabeth C.; Callen, Sean A.; Campbell, Jesse S.; Carr, Caleb J.; Carr, David R.; Chadinha, Spencer A.; Chester, Grace I.; Chester, Josh; Clarkson, Ben R.; Cochran, Kelly E.; Doherty, Shannon E.; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M.; Evans, Rebecca A.; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L.; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L.; Keffeler, Erica C.; Lantz, Andrew J.; Lim, Jonathan N.; McGuire, Erin P.; Moore, Alexander K.; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A.; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E.; Polpityaarachchige, Sachith; Quaney, Michael J.; Slattery, Abagael; Smith, Kathryn E.; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J.; Whitesides, E. Tucker

    2015-01-01

    Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields – evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy

  8. Identifying strategies for mitigating the global warming impact of the EU-25 economy using a multi-objective input–output approach

    International Nuclear Information System (INIS)

    Cortés-Borda, D.; Ruiz-Hernández, A.; Guillén-Gosálbez, G.; Llop, M.; Guimerà, R.; Sales-Pardo, M.

    2015-01-01

    Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy related problems has focused on reducing greenhouse gases emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here how to mitigate global warming by performing changes in an economy. To this end, we make use of a systematic tool that combines three methods: linear programming, environmentally extended input output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to optimize simultaneously the total economic output and the total life cycle CO 2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policy makers in the design of more effective public policies for achieving the environmental and economic targets sought. - Highlights: • We minimize climate change by performing small changes in the consumption habits. • We propose a tool that combines multiobjective optimization and macroeconomic models. • Identifying key sectors allows improving the environmental performance significantly with little impact to the economy. • Significant reductions in global warming potential are attained by regulating sectors. • Our tool aids policy makers in the design of effective sustainability policies

  9. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr−1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr−1 in North America to 7 Tg yr−1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly
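
    The core of the experiment — identical observations and identical priors, but different transport operators, yielding different flux estimates — can be sketched with a toy linear-Gaussian inversion. All numbers, dimensions, and the analytic solver below are illustrative assumptions, not the PYVAR-LMDZ-SACS system itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_flux, n_obs = 4, 20
    x_true = np.array([10.0, 25.0, 5.0, 15.0])   # hypothetical regional fluxes
    x_prior = x_true * 1.3                        # deliberately biased prior
    B = np.eye(n_flux) * 25.0                     # prior error covariance
    R = np.eye(n_obs) * 1.0                       # observation error covariance

    def invert(H, y):
        """Analytic minimizer of the variational cost function
        J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y)."""
        Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
        A = Binv + H.T @ Rinv @ H
        b = Binv @ x_prior + H.T @ Rinv @ y
        return np.linalg.solve(A, b)

    H_ref = rng.uniform(0.1, 1.0, size=(n_obs, n_flux))  # "true" transport
    y = H_ref @ x_true                                   # synthetic observations

    # Each perturbed H mimics a different transport model; the spread of the
    # resulting flux estimates is the transport-model contribution to error.
    estimates = []
    for _ in range(10):
        H_model = H_ref * (1 + 0.05 * rng.standard_normal(H_ref.shape))
        estimates.append(invert(H_model, y))
    spread = np.ptp([e.sum() for e in estimates])
    print(f"spread in total flux across transport models: {spread:.2f}")
    ```

    With the exact transport operator the inversion recovers the true fluxes; only the transport perturbations create the spread, mirroring the paper's attribution of flux differences to transport alone.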

  10. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
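
    The interaction between a fixed palette and error diffusion can be illustrated with a minimal Floyd-Steinberg sketch. This uses 1-D gray levels for brevity rather than the paper's SSQ-designed 3-D palette in an opponent color space; the palette and test image are arbitrary choices:

    ```python
    import numpy as np

    def error_diffuse(img, palette):
        """Floyd-Steinberg error diffusion against a fixed palette."""
        img = img.astype(float).copy()
        out = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = palette[np.argmin(np.abs(palette - old))]  # nearest entry
                out[y, x] = new
                err = old - new
                # distribute the quantization error to unprocessed neighbors
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return out

    gray = np.tile(np.linspace(0, 255, 64), (16, 1))   # horizontal ramp
    levels = np.linspace(0, 255, 4)                    # tiny fixed "palette"
    half = error_diffuse(gray, levels)
    print(np.unique(half))   # only palette levels appear in the output
    ```

    Because the palette is fixed in advance, many images can be halftoned against it simultaneously, which is exactly the property the universal palette exploits.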

  11. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

    Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed

  12. Global optimization of minority game by intelligent agents

    Science.gov (United States)

    Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao

    2005-10-01

    We propose a new model of minority game with intelligent agents who use a trial and error method to make a choice such that the standard deviation σ² and the total loss in this model reach the theoretical minimum values in the long time limit and the global optimization of the system is reached. This suggests that economic systems can self-organize into a highly optimized state by agents who make decisions based on inductive thinking, limited knowledge, and capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and conventional agents in original minority games proposed by Challet and Zhang.
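
    For readers unfamiliar with the baseline, the conventional Challet-Zhang minority game that the intelligent agents are compared against can be simulated in a few lines. The parameter values and the history-update convention below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, S, M = 201, 2, 3        # odd number of agents, strategies each, memory
    P = 2 ** M                 # number of possible history strings

    # each strategy maps a history index to a choice in {-1, +1}
    strategies = rng.choice([-1, 1], size=(N, S, P))
    scores = np.zeros((N, S))
    history = 0
    attendance = []

    for _ in range(2000):
        best = scores.argmax(axis=1)              # each agent plays its best strategy
        choices = strategies[np.arange(N), best, history]
        A = choices.sum()
        attendance.append(A)
        minority = -np.sign(A)                    # the minority side wins
        # reward every strategy that would have chosen the minority side
        scores += (strategies[:, :, history] == minority)
        history = int((history << 1) | (minority > 0)) % P

    sigma2 = np.var(attendance)
    print(f"sigma^2 / N = {sigma2 / N:.2f}")  # ~1 for random coin-flip agents
    ```

    The quantity σ²/N measures the wasted global resource; the paper's intelligent agents drive it toward the theoretical minimum, which conventional agents do not reach.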

  13. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  14. Global Sensitivity Analysis for multivariate output using Polynomial Chaos Expansion

    International Nuclear Information System (INIS)

    Garcia-Cabrejo, Oscar; Valocchi, Albert

    2014-01-01

    Many mathematical and computational models used in engineering produce multivariate output that shows some degree of correlation. However, conventional approaches to Global Sensitivity Analysis (GSA) assume that the output variable is scalar. These approaches are applied to each output variable, leading to a large number of sensitivity indices that show a high degree of redundancy, making the interpretation of the results difficult. Two approaches have been proposed for GSA in the case of multivariate output: the output decomposition approach [9] and the covariance decomposition approach [14], but they are computationally intensive for most practical problems. In this paper, Polynomial Chaos Expansion (PCE) is used for an efficient GSA with multivariate output. The results indicate that PCE allows efficient estimation of the covariance matrix and GSA on the coefficients in the approach defined by Campbell et al. [9], and the development of analytical expressions for the multivariate sensitivity indices defined by Gamboa et al. [14]. - Highlights: • PCE increases computational efficiency in 2 approaches of GSA of multivariate output. • Efficient estimation of covariance matrix of output from coefficients of PCE. • Efficient GSA on coefficients of orthogonal decomposition of the output using PCE. • Analytical expressions of multivariate sensitivity indices from coefficients of PCE
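
    The covariance-decomposition index of Gamboa et al. that the paper derives analytically from PCE coefficients can be illustrated with a brute-force Monte Carlo pick-freeze estimator on a toy two-output model (the model, sample size, and estimator below are assumptions for illustration, not the paper's PCE route):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def model(x):
        """Toy additive model with a correlated 2-D output."""
        return np.column_stack([x[:, 0] + 0.5 * x[:, 1],
                                2 * x[:, 0] - x[:, 1]])

    n, d = 100_000, 2
    X = rng.uniform(-1, 1, size=(n, d))
    Y = model(X)

    def generalized_sobol(i):
        """Pick-freeze Monte Carlo estimate of the covariance-decomposition
        index S_i = tr(Cov(E[Y|X_i])) / tr(Cov(Y)) for multivariate output."""
        Xf = rng.uniform(-1, 1, size=(n, d))
        Xf[:, i] = X[:, i]                    # freeze input i, resample the rest
        Yf = model(Xf)
        cov = (Y * Yf).mean(axis=0) - Y.mean(axis=0) * Yf.mean(axis=0)
        return cov.sum() / Y.var(axis=0).sum()

    s0, s1 = generalized_sobol(0), generalized_sobol(1)
    print(f"S_1 = {s0:.3f}, S_2 = {s1:.3f}")  # analytic values: 0.8 and 0.2
    ```

    This single scalar per input summarizes sensitivity over all (correlated) outputs at once, which is the redundancy-removing property the paper obtains far more cheaply from the PCE coefficients.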

  15. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
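
    The persistence estimate used above — the dry-to-dry transition probability with "dry" defined as a negative precipitation anomaly — is simple to compute. The synthetic series below (white noise versus an AR(1) process with an assumed lag-1 correlation of 0.7) only illustrate the estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def dry_to_dry_probability(anom):
        """Drought persistence estimated as P(dry at t+1 | dry at t),
        with 'dry' defined as a negative precipitation anomaly."""
        dry = anom < 0
        from_dry = dry[:-1]
        return (dry[1:] & from_dry).sum() / from_dry.sum()

    # independent anomalies: persistence ~ unconditional dry probability (0.5)
    p_white = dry_to_dry_probability(rng.standard_normal(1200))

    # AR(1)-correlated anomalies: persistence rises well above 0.5
    ar = np.zeros(1200)
    for t in range(1, 1200):
        ar[t] = 0.7 * ar[t - 1] + rng.standard_normal()
    p_ar = dry_to_dry_probability(ar)
    print(p_white, p_ar)
    ```

    A GCM that underestimates the temporal correlation of precipitation anomalies will, by this measure, systematically underestimate drought persistence — the bias the study reports.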

  16. Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches

    Directory of Open Access Journals (Sweden)

    Jui-Yu Wu

    2013-01-01

    Full Text Available Stochastic global optimization (SGO algorithms such as the particle swarm optimization (PSO approach have become popular for solving unconstrained global optimization (UGO problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as cognitive parameter, social parameter, and constriction coefficient. These parameters are tuned by using trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO method and an artificial immune algorithm-based PSO (AIA-PSO method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.
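
    A plain PSO of the kind whose parameters the RGA-PSO and AIA-PSO hybrids tune can be sketched as follows. Here w, c1, and c2 are fixed by hand (the very trial-and-error step the hybrids eliminate), and the sphere function stands in for the benchmark UGO problems:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def sphere(x):
        return np.sum(x**2, axis=-1)

    def pso(f, dim=5, n_particles=30, iters=200,
            w=0.72, c1=1.49, c2=1.49, bound=5.0):
        """Plain constriction-type PSO. In the hybrids, w, c1, c2 would be
        optimized by the outer RGA/AIA instead of fixed by trial and error."""
        x = rng.uniform(-bound, bound, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), f(x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, -bound, bound)
            fx = f(x)
            improved = fx < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], fx[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    best_x, best_f = pso(sphere)
    print(best_f)   # should approach the global minimum 0
    ```

    The hybrid schemes wrap an outer evolutionary loop around this inner solver, evaluating each candidate (w, c1, c2) by the quality of the inner PSO's result.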

  17. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  18. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  19. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  20. Expected Improvement in Efficient Global Optimization Through Bootstrapped Kriging - Replaced by CentER DP 2011-015

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; van Beers, W.C.M.; van Nieuwenhuyse, I.

    2010-01-01

    This paper uses a sequentialized experimental design to select simulation input combinations for global optimization, based on Kriging (also called Gaussian process or spatial correlation modeling); this Kriging is used to analyze the input/output data of the simulation model (computer code). This
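
    The expected-improvement criterion at the heart of efficient global optimization can be sketched with a tiny hand-rolled Kriging model. The objective, design points, and fixed kernel lengthscale below are illustrative assumptions (a real Kriging code estimates its hyperparameters, and the paper's contribution concerns bootstrapping the predictor variance, which is not reproduced here):

    ```python
    import math
    import numpy as np

    def gp_posterior(xs, X, y, ls=0.3, noise=1e-8):
        """Zero-mean GP (Kriging) posterior with a unit-variance RBF kernel."""
        k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)
        K = k(X, X) + noise * np.eye(len(X))
        Ks = k(xs, X)
        mu = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, sigma, f_best):
        """EI(x) = E[max(f_best - Y(x), 0)] for minimization."""
        z = (f_best - mu) / sigma
        Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
        phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
        return (f_best - mu) * Phi(z) + sigma * phi

    f = lambda x: np.sin(3.0 * x) + x**2        # toy objective on [-1, 1]
    X = np.array([-1.0, -0.3, 0.4, 1.0])        # initial design points
    y = f(X)
    grid = np.linspace(-1.0, 1.0, 401)
    mu, sigma = gp_posterior(grid, X, y)
    ei = expected_improvement(mu, sigma, y.min())
    print("next infill point at x =", grid[ei.argmax()])
    ```

    The sequential design evaluates the simulator at the EI maximizer, refits the Kriging model, and repeats — trading off the predicted mean against the predictor's own uncertainty.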

  1. The relationship between global oil price shocks and China's output: A time-varying analysis

    International Nuclear Information System (INIS)

    Cross, Jamie; Nguyen, Bao H.

    2017-01-01

    We employ a class of time-varying Bayesian vector autoregressive (VAR) models on a new standard dataset of China's GDP constructed by to examine the relationship between China's economic growth and global oil market fluctuations between 1992Q1 and 2015Q3. We find that: (1) the time varying parameter VAR with stochastic volatility provides a better fit as compared to its constant counterparts; (2) the impacts of intertemporal global oil price shocks on China's output are often small and temporary in nature; (3) oil supply and specific oil demand shocks generally produce negative movements in China's GDP growth whilst oil demand shocks tend to have positive effects; (4) domestic output shocks have no significant impact on price or quantity movements within the global oil market. The results are generally robust to three commonly employed indicators of global economic activity: Kilian's global real economic activity index, the metal price index and the global industrial production index, and two alternative oil price metrics: the US refiners' acquisition cost for imported crude oil and the West Texas Intermediate price of crude oil. - Highlights: • A class of time-varying BVARs is used to examine the relationship between China's economic growth and global oil market fluctuations. • The impacts of intertemporal global oil price shocks on China's output are often small and temporary in nature. • Oil supply and specific oil demand shocks generally produce negative movements in China's GDP growth while oil demand shocks tend to have positive effects. • Domestic output shocks have no significant impact on price or quantity movements within the global oil market.

  2. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 in order to assess this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry, which changes the sampling of the diurnal cycle in temperature, and by the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend.
In one path the

  3. Global sensitivity analysis for models with spatially dependent outputs

    International Nuclear Information System (INIS)

    Iooss, B.; Marrel, A.; Jullien, M.; Laurent, B.

    2011-01-01

    The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Meta-model-based techniques have been developed in order to replace the CPU time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common meta-model-based sensitivity analysis methods are well suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the meta-modeling of the wavelet coefficients by the Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained. (authors)

  4. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviation have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
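
    The four-sphere intersection that Newton-Raphson solves can be sketched on synthetic data. The satellite coordinates, receiver position, and clock bias below are hypothetical numbers chosen only to make the system solvable; a real receiver solves the same pseudorange equations:

    ```python
    import numpy as np

    c = 299_792_458.0                        # speed of light (m/s)

    # hypothetical GPS satellite positions (m) and true receiver state
    sats = np.array([[15_600e3,  7_540e3, 20_140e3],
                     [18_760e3,  2_750e3, 18_610e3],
                     [17_610e3, 14_630e3, 13_480e3],
                     [19_170e3,    610e3, 18_390e3]])
    x_true = np.array([1_000e3, 2_000e3, 6_000e3])
    bias_true = 1e-4 * c                     # clock bias expressed in metres

    rho = np.linalg.norm(sats - x_true, axis=1) + bias_true  # pseudoranges

    def solve_gps(sats, rho, iters=15):
        """Newton-Raphson on the four sphere (pseudorange) equations
        ||s_i - x|| + b = rho_i for the unknowns (x, y, z, b)."""
        state = np.array([0.0, 0.0, 6_370e3, 0.0])  # guess: Earth's surface
        for _ in range(iters):
            d = np.linalg.norm(sats - state[:3], axis=1)
            residual = d + state[3] - rho
            # Jacobian rows: unit vector from satellite to receiver, plus 1
            J = np.hstack([(state[:3] - sats) / d[:, None], np.ones((4, 1))])
            state = state - np.linalg.solve(J, residual)
        return state

    est = solve_gps(sats, rho)
    print("position error (m):", np.abs(est[:3] - x_true).max())
    ```

    Perturbing the satellite geometry or the pseudoranges and re-solving is exactly how the positioning error's dependence on satellite and airplane coordinates can be studied statistically.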

  5. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  6. Output regulation control for switched stochastic delay systems with dissipative property under error-dependent switching

    Science.gov (United States)

    Li, L. L.; Jin, C. L.; Ge, X.

    2018-01-01

    In this paper, the output regulation problem with dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no subsystem is solvable for the problem, a sufficient condition is derived by structuring multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates, via designing error feedback regulators. The condition is also established when the dissipative property reduces to the passive property. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.

  7. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks: the optimization loop involves three phases and relies on empirically tuned parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select the points located in a feasible region with high model uncertainty as well as the points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant among the infill sampling criterion and boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions like super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, are validated through nonlinear numerical examples with disconnected feasible regions.

  8. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on an error-similarity compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sampling point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
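
    Error-similarity compensation interpolates the error at an arbitrary pose from errors measured at nearby grid points. The sketch below uses inverse-distance weighting as a stand-in kernel; the function names and the kernel choice are illustrative assumptions, not the paper's exact method.

    ```python
    import math

    def compensate(target, grid_points, grid_errors, power=2):
        """Estimate the positioning error at `target` by inverse-distance
        weighting of errors measured at grid sample points (a stand-in
        for the paper's error-similarity interpolation)."""
        num = [0.0] * len(grid_errors[0])
        den = 0.0
        for p, e in zip(grid_points, grid_errors):
            d = math.dist(target, p)
            if d < 1e-12:                # exact grid point: return its error
                return list(e)
            w = 1.0 / d ** power
            den += w
            num = [s + w * ei for s, ei in zip(num, e)]
        return [s / den for s in num]
    ```

    The compensated command is then the nominal target minus this estimated error; the grid step controls how well the interpolation captures spatially correlated errors.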

  9. Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers

    OpenAIRE

    Richard Billich; Jakub Štvrtňa; Karel Jelen

    2015-01-01

    In today’s world of strength training there are many myths surrounding effective exercising with the least possible negative effect on one’s health. In this experiment we focus on finding a relationship between maximum power output, the load used, and the velocity with which the exercise is performed. The main objective is to find the optimal speed of the exercise motion which would allow us to reach the ma...

  10. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
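
    The symplectic (semi-implicit) Euler scheme for a separable Hamiltonian H(q, p) advances the momentum using the current position and then the position using the new momentum. A minimal sketch of the integrator itself (toy version, not the paper's adaptive algorithm):

    ```python
    def symplectic_euler(dH_dq, dH_dp, q0, p0, dt, steps):
        """Symplectic Euler for a separable Hamiltonian: advance p with
        the current q, then advance q with the updated p."""
        q, p = q0, p0
        for _ in range(steps):
            p = p - dt * dH_dq(q)
            q = q + dt * dH_dp(p)
        return q, p
    ```

    For the harmonic oscillator H = (q^2 + p^2)/2 the energy error stays bounded rather than drifting, which is the hallmark of symplectic integration.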

  11. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Science.gov (United States)

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
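
    Error-correcting output codes reduce a multi-class problem to several binary classifiers and decode by nearest code word, so a single misfiring classifier can still yield the correct class. A minimal sketch of the decoding step; the codebook below is illustrative, not the one used in the i2b2 system:

    ```python
    def ecoc_decode(codebook, bit_outputs):
        """Return the class whose code word is closest in Hamming
        distance to the vector of binary classifier outputs (+1/-1)."""
        def hamming(a, b):
            return sum(ai != bi for ai, bi in zip(a, b))
        return min(codebook, key=lambda c: hamming(codebook[c], bit_outputs))

    # Illustrative 3-class codebook with minimum Hamming distance 3,
    # so any single binary classifier error is corrected.
    CODES = {
        "current smoker": (1, 1, 1, 1, 1),
        "past smoker":   (-1, -1, -1, 1, 1),
        "non-smoker":    (1, -1, 1, -1, -1),
    }
    ```

    In the five-way task each code bit would be produced by one trained binary classifier over the hot-spot features.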

  12. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

    Full Text Available The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to minimise the error function for a given set of patterns by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every pattern presented to it, and the weights are iteratively modified to minimise the current local error. The determination of an optimal weight vector is possible only when the total minimum error (the mean of the minimum local errors for all patterns from the training set) can be minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of the hidden layers via the second-order derivative of the error with respect to the weights of the hidden and output layers, both independently and in combination. The new total minimum error point is evaluated from the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total or mean error. We present numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.
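
    The first-order part of the procedure (forward pass, local error, gradient descent weight update) can be sketched as follows. The second-derivative refinement proposed in the paper is not reproduced here, and the network size, learning rate, and training task are arbitrary choices for the sketch:

    ```python
    import math
    import random

    def train(patterns, n_hidden=3, lr=0.1, epochs=3000, seed=1):
        """One-hidden-layer backpropagation with a sum-of-squares error
        and plain first-order gradient descent."""
        rng = random.Random(seed)
        n_in = len(patterns[0][0])
        # last entry of each weight row is the bias
        w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)]
               for _ in range(n_hidden)]
        w_o = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
        sig = lambda z: 1.0 / (1.0 + math.exp(-z))

        def forward(x):
            h = [sig(sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) for w in w_h]
            y = sum(wi * hi for wi, hi in zip(w_o, h)) + w_o[-1]
            return h, y

        for _ in range(epochs):
            for x, t in patterns:
                h, y = forward(x)
                d_y = y - t                       # dE/dy for E = 0.5 (y - t)^2
                for j in range(n_hidden):
                    d_h = d_y * w_o[j] * h[j] * (1 - h[j])  # backprop delta
                    for i in range(n_in):
                        w_h[j][i] -= lr * d_h * x[i]
                    w_h[j][-1] -= lr * d_h
                for j in range(n_hidden):
                    w_o[j] -= lr * d_y * h[j]
                w_o[-1] -= lr * d_y

        return lambda x: forward(x)[1]
    ```

    The conjugate-descent formulation of the paper adds a second weight update driven by the total (mean) error on top of this per-pattern local update.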

  13. Hepatic glucose output in humans measured with labeled glucose to reduce negative errors

    International Nuclear Information System (INIS)

    Levy, J.C.; Brown, G.; Matthews, D.R.; Turner, R.C.

    1989-01-01

    Steele and others have suggested that minimizing changes in glucose specific activity when estimating hepatic glucose output (HGO) during glucose infusions could reduce non-steady-state errors. This approach was assessed in nondiabetic and type II diabetic subjects during constant low-dose [27 μmol·kg ideal body wt (IBW)⁻¹·min⁻¹] glucose infusion followed by a 12 mmol/l hyperglycemic clamp. Eight subjects had paired tests with and without labeled infusions. Labeled infusion was used to compare HGO in 11 nondiabetic and 15 diabetic subjects. Whereas unlabeled infusions produced negative values for endogenous glucose output, labeled infusions largely eliminated this error and reduced the dependence of the Steele model on the pool fraction in the paired tests. With labeled infusions, 11 nondiabetic subjects suppressed HGO from 10.2 ± 0.6 (SE) μmol·kg IBW⁻¹·min⁻¹ fasting to 0.8 ± 0.9 μmol·kg IBW⁻¹·min⁻¹ after 90 min of glucose infusion and to -1.9 ± 0.5 μmol·kg IBW⁻¹·min⁻¹ after 90 min of a 12 mmol/l glucose clamp, but 15 diabetic subjects suppressed only partially, from 13.0 ± 0.9 fasting to 5.7 ± 1.2 at the end of the glucose infusion and 5.6 ± 1.0 μmol·kg IBW⁻¹·min⁻¹ in the clamp (P = 0.02, 0.002, and <0.001, respectively)

  14. Optimal time-domain combination of the two calibrated output quadratures of GEO 600

    International Nuclear Information System (INIS)

    Hewitson, M; Grote, H; Hild, S; Lueck, H; Ajith, P; Smith, J R; Strain, K A; Willke, B; Woan, G

    2005-01-01

    GEO 600 is an interferometric gravitational wave detector with 600 m arm length that uses a dual-recycled optical configuration to give enhanced sensitivity over certain frequencies in the detection band. Because of the dual recycling, GEO 600 has two main output signals, both of which potentially contain gravitational wave signals. These two outputs are calibrated to strain using a time-domain method. To simplify the analysis of the GEO 600 data set, it is desirable to combine these two calibrated outputs into a single strain signal with optimal signal-to-noise ratio across the detection band. This paper describes a time-domain method for performing this combination. The method presented is similar to one developed for optimally combining the outputs of two colocated gravitational wave detectors. In the scheme presented here, some simplifications are made to allow its implementation using time-domain methods

  15. Convex analysis and global optimization

    CERN Document Server

    Tuy, Hoang

    2016-01-01

    This book presents state-of-the-art results and methodologies in modern global optimization, and has been a staple reference for researchers, engineers, advanced students (also in applied mathematics), and practitioners in various fields of engineering. The second edition has been brought up to date and continues to develop a coherent and rigorous theory of deterministic global optimization, highlighting the essential role of convex analysis. The text has been revised and expanded to meet the needs of research, education, and applications for many years to come. Updates for this new edition include: · Discussion of modern approaches to minimax, fixed point, and equilibrium theorems, and to nonconvex optimization; · Increased focus on dealing more efficiently with ill-posed problems of global optimization, particularly those with hard constraints;

  16. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
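
    The filtering view can be illustrated with a basic particle filter over a noisy 1-D objective: weight particles by a Boltzmann-type factor of the noisy function value, resample with replacement, then jitter. All tuning constants below are assumptions for the sketch, not the paper's algorithm:

    ```python
    import math
    import random

    def pf_optimize(noisy_f, lo, hi, n=200, iters=30, sigma=0.3, temp=5.0):
        """Particle-filter-style stochastic global optimization sketch:
        weight realizations by exp(-temp * noisy objective), resample,
        and perturb with a shrinking Gaussian jitter."""
        pts = [random.uniform(lo, hi) for _ in range(n)]
        for _ in range(iters):
            w = [math.exp(-temp * noisy_f(x)) for x in pts]
            pts = random.choices(pts, weights=w, k=n)      # resampling step
            pts = [min(hi, max(lo, x + random.gauss(0.0, sigma)))
                   for x in pts]
            sigma *= 0.9                                   # anneal the jitter
        return min(pts, key=noisy_f)
    ```

    Resampling concentrates computational effort on the more promising realizations, which is exactly the advantage the abstract claims over naive global optimization.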

  17. Conjugate descent formulation of backpropagation error in ...

    African Journals Online (AJOL)

    The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to ...

  18. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  19. On the efficiency of chaos optimization algorithms for global optimization

    International Nuclear Information System (INIS)

    Yang Dixiong; Li Gang; Cheng Gengdong

    2007-01-01

    Chaos optimization algorithms, a novel class of global optimization methods, have attracted much attention; all have been based on the Logistic map. However, we have noticed that the probability density function of the chaotic sequences derived from the Logistic map is of Chebyshev type, which may considerably affect the global searching capacity and computational efficiency of chaos optimization algorithms. Considering the statistical properties of the chaotic sequences of the Logistic map and the Kent map, an improved hybrid chaos-BFGS optimization algorithm and a Kent map based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear functions with multimodal characteristics are tested to compare the performance of five hybrid optimization algorithms: the conventional Logistic map based chaos-BFGS algorithm, the improved Logistic map based chaos-BFGS algorithm, the Kent map based chaos-BFGS algorithm, a Monte Carlo-BFGS algorithm, and a mesh-BFGS algorithm. The computational performance of the five algorithms is compared, and the numerical results lead us to question the high efficiency of chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated by the chaotic/stochastic algorithms and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to claim high efficiency for global optimization algorithms based only on a few numerical examples of low-dimensional functions
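
    The two maps differ in their invariant densities: the fully chaotic Logistic map concentrates iterates near 0 and 1 (the Chebyshev-type density mentioned above), while the Kent map is much closer to uniform. A minimal sketch for generating both sequences:

    ```python
    def logistic_map(x):
        """Fully chaotic Logistic map x -> 4 x (1 - x)."""
        return 4.0 * x * (1.0 - x)

    def kent_map(x, a=0.7):
        """Kent (skew tent) map with parameter a in (0, 1)."""
        return x / a if x < a else (1.0 - x) / (1.0 - a)

    def chaotic_sequence(map_fn, x0, n):
        seq, x = [], x0
        for _ in range(n):
            x = map_fn(x)
            seq.append(x)
        return seq
    ```

    Counting the fraction of iterates falling in a central interval such as [0.4, 0.6] makes the density difference visible: the Logistic sequence visits the middle of the interval noticeably less often than the near-uniform Kent sequence, which is the statistical property the paper argues degrades search efficiency.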

  20. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm, or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that compares the work of the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading order term. © de Gruyter 2011.
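
    A tau-leap step freezes the propensities over an interval of length tau and fires a Poisson-distributed number of reactions. A toy sketch for a single decay channel X -> X - 1 with propensity a(X) = c·X (an assumed example model, not the paper's estimator):

    ```python
    import math
    import random

    def poisson(lam):
        """Knuth's algorithm for sampling a Poisson random variate."""
        limit = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    def tau_leap_decay(x0, c, tau, t_end):
        """Tau-leap simulation of a single decay channel X -> X - 1
        with propensity a(X) = c * X."""
        x, t = x0, 0.0
        while t < t_end and x > 0:
            k = poisson(c * x * tau)   # number of firings during the leap
            x = max(0, x - k)
            t += tau
        return x
    ```

    An adaptive algorithm of the kind the abstract motivates would choose tau per step from a computable error estimate instead of keeping it fixed.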

  1. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near-optimal sensor placements for structural health monitoring (SHM) and modal testing. The near-optimal set of measurement locations is obtained by Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant correlation function and an exponential correlation function depending on the distance between sensors are first assumed; then a correlation function depending on both distance and modal vectors is proposed. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the presence of model errors are tested on 2D and 3D benchmark case studies. The quality of the obtained sensor configuration is measured using independent assessment criteria. Finally, the results obtained by applying the proposed procedure to a real five-span steel footbridge are described. The proposed method also allows better estimation of higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show smaller variation in the sensor positions when uncertainties occur.
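
    A common near-optimal placement strategy in this setting is a greedy search that repeatedly adds the candidate location maximizing the determinant of the Fisher information matrix built from the mode-shape rows. The sketch below is a generic illustration of that idea, not the Information Entropy procedure of the paper:

    ```python
    def det(M):
        """Determinant by cofactor expansion (fine for small matrices)."""
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j]
                   * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))

    def greedy_placement(phi, k):
        """Greedily add the candidate DOF (row of the mode-shape matrix
        phi) that maximizes det(Phi^T Phi) of the selected rows."""
        m = len(phi[0])
        chosen, remaining = [], list(range(len(phi)))

        def fim_det(sel):
            F = [[sum(phi[i][a] * phi[i][b] for i in sel)
                  for b in range(m)] for a in range(m)]
            return det(F)

        for _ in range(k):
            best = max(remaining, key=lambda i: fim_det(chosen + [i]))
            chosen.append(best)
            remaining.remove(best)
        return sorted(chosen)
    ```

    Model errors of the kind the paper studies would perturb the rows of `phi`, which is why the robustness of the chosen configuration matters.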

  2. FEM for time-fractional diffusion equations, novel optimal error analyses

    OpenAIRE

    Mustapha, Kassem

    2016-01-01

    A semidiscrete Galerkin finite element method applied to time-fractional diffusion equations with time-space dependent diffusivity on bounded convex spatial domains will be studied. The main focus is on achieving optimal error results with respect to both the convergence order of the approximate solution and the regularity of the initial data. By using novel energy arguments, for each fixed time $t$, optimal error bounds in the spatial $L^2$- and $H^1$-norms are derived for both cases: smooth...

  3. Dynamic Output Feedback Robust MPC with Input Saturation Based on Zonotopic Set-Membership Estimation

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2016-01-01

    Full Text Available For quasi-linear parameter varying (quasi-LPV systems with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC with the consideration of input saturation is investigated. The saturated dynamic output feedback controller is represented by a convex hull involving the actual dynamic output controller and an introduced auxiliary controller. By taking both the actual output feedback controller and the auxiliary controller with a parameter-dependent form, the main optimization problem can be formulated as convex optimization. The consideration of input saturation in the main optimization problem reduces the conservatism of dynamic output feedback controller design. The estimation error set and bounded disturbance are represented by zonotopes and refreshed by zonotopic set-membership estimation. Compared with the previous results, the proposed algorithm can not only guarantee the recursive feasibility of the optimization problem, but also improve the control performance at the cost of higher computational burden. A nonlinear continuous stirred tank reactor (CSTR example is given to illustrate the effectiveness of the approach.

  4. Essays and surveys in global optimization

    CERN Document Server

    Audet, Charles; Savard, Giles

    2005-01-01

    Global optimization aims at solving the most general problems of deterministic mathematical programming. In addition, once the solutions are found, this methodology is also expected to prove their optimality. With these difficulties in mind, global optimization is becoming an increasingly powerful and important methodology. This book is the most recent examination of its mathematical capability, power, and wide ranging solutions to many fields in the applied sciences.

  5. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  6. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than ±0.001 eV. The global-chaos technique has also proved successful in the case of the nuclear systems 3H and 3He

  7. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three-axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and an output variable is defined as the magnitude of the position volumetric error. Next, a global sensitivity analysis is executed using the analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors

  8. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    Science.gov (United States)

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.

  9. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    Science.gov (United States)

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. Approximation errors are explicitly considered in the GPI algorithm for the first time. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  10. GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.

    Energy Technology Data Exchange (ETDEWEB)

    D'Helon, C. D.

    2004-08-18

    The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.

  11. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    Science.gov (United States)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  12. Area/latency optimized early output asynchronous full adders and relative-timed ripple carry adders.

    Science.gov (United States)

    Balasubramanian, P; Yamashita, S

    2016-01-01

    This article presents two area/latency optimized gate level asynchronous full adder designs which correspond to early output logic. The proposed full adders are constructed using the delay-insensitive dual-rail code and adhere to the four-phase return-to-zero handshaking. For an asynchronous ripple carry adder (RCA) constructed using the proposed early output full adders, the relative-timing assumption becomes necessary and the inherent advantages of the relative-timed RCA are: (1) computation with valid inputs, i.e., forward latency is data-dependent, and (2) computation with spacer inputs involves a bare minimum constant reverse latency of just one full adder delay, thus resulting in the optimal cycle time. With respect to different 32-bit RCA implementations, and in comparison with the optimized strong-indication, weak-indication, and early output full adder designs, one of the proposed early output full adders achieves respective reductions in latency by 67.8, 12.3 and 6.1 %, while the other proposed early output full adder achieves corresponding reductions in area by 32.6, 24.6 and 6.9 %, with practically no power penalty. Further, the proposed early output full adders based asynchronous RCAs enable minimum reductions in cycle time by 83.4, 15, and 8.8 % when considering carry-propagation over the entire RCA width of 32-bits, and maximum reductions in cycle time by 97.5, 27.4, and 22.4 % for the consideration of a typical carry chain length of 4 full adder stages, when compared to the least of the cycle time estimates of various strong-indication, weak-indication, and early output asynchronous RCAs of similar size. All the asynchronous full adders and RCAs were realized using standard cells in a semi-custom design fashion based on a 32/28 nm CMOS process technology.
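
    The early output behavior can be illustrated at a high level: with dual-rail encoding, the carry output of a full adder is decided as soon as the two addend bits agree (carry-generate or carry-kill), without waiting for the carry input. A toy Python model of that logic (the encoding and function are illustrative, not the gate-level designs of the article):

    ```python
    SPACER = (0, 0)                     # both rails low: "no data yet"

    def dual_rail(bit):
        """Dual-rail encoding of one bit as (true_rail, false_rail)."""
        return (1, 0) if bit else (0, 1)

    def early_output_carry(a, b, cin):
        """Early-output carry: once a and b agree, the carry is decided
        without waiting for cin; otherwise cin is propagated."""
        if a == SPACER or b == SPACER:
            return SPACER               # inputs not yet valid
        if a == b:
            return a                    # carry-generate (1+1) or kill (0+0)
        if cin == SPACER:
            return SPACER               # must wait for the carry input
        return cin                      # carry-propagate
    ```

    In a relative-timed ripple carry adder this is what makes the forward latency data-dependent: carry chains are cut wherever adjacent addend bits agree.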

  13. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm.
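
    The effect of a systematic phase error can be reproduced with a two-amplitude simulation of Grover's algorithm (the comparison baseline above): replacing the exact oracle phase pi by pi + delta degrades the success probability. The code below is a sketch of that baseline comparison, not of the optimized quantum random-walk algorithm itself:

    ```python
    import cmath
    import math

    def grover_success(n_items, iters, phase_err=0.0):
        """Success probability of Grover search with one marked item
        when the oracle applies phase pi + phase_err instead of an
        exact inversion; the diffusion step is kept exact."""
        a = b = 1.0 / math.sqrt(n_items)   # marked / each unmarked amplitude
        for _ in range(iters):
            a *= -cmath.exp(1j * phase_err)          # faulty phase inversion
            mean = (a + (n_items - 1) * b) / n_items
            a, b = 2 * mean - a, 2 * mean - b        # inversion about mean
        return abs(a) ** 2
    ```

    With n_items = 64 and the usual ~(pi/4)·sqrt(N) = 6 iterations, the error-free run reaches a near-unit success probability, while a systematic per-iteration phase error lowers it; the paper's point is that the optimized quantum random-walk search degrades less under such errors.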

  14. Robust optimization of the output voltage of nanogenerators by statistical design of experiments

    KAUST Repository

    Song, Jinhui

    2010-09-01

    Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
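
    In a two-level factorial design of the kind used here, the main effect of each factor is the average response at its +1 level minus the average at its -1 level. A minimal sketch for two factors in coded units (illustrative only, not the paper's three-factor model):

    ```python
    def main_effects(design, responses):
        """Main effects of two factors from a two-level full factorial
        design: contrast of mean response at the +1 and -1 levels."""
        n = len(design)
        e1 = sum(f1 * y for (f1, f2), y in zip(design, responses)) / (n / 2)
        e2 = sum(f2 * y for (f1, f2), y in zip(design, responses)) / (n / 2)
        return e1, e2
    ```

    Fitting a regression model to such effects (plus interactions) is what allows the optimal settings, e.g. for normal force and scanning speed, to be predicted rather than found by exhaustive search.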

  15. Robust optimization of the output voltage of nanogenerators by statistical design of experiments

    KAUST Repository

    Song, Jinhui; Xie, Huizhi; Wu, Wenzhuo; Roshan Joseph, V.; Jeff Wu, C. F.; Wang, Zhong Lin

    2010-01-01

    Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
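The design-of-experiments workflow described in the two records above can be sketched numerically: fit a second-order response-surface model to output-voltage data over tip force and scanning speed, then solve for the stationary point. The response function, noise level, and design grid below are invented for illustration (only the optimum location, 137 nN and 40 μm/s, is taken from the abstract):

```python
import numpy as np

# Hypothetical sketch: fit a second-order response-surface model
#   output ~ b0 + b1*F + b2*v + b3*F^2 + b4*v^2 + b5*F*v
# to synthetic nanogenerator data, then locate the stationary point.
rng = np.random.default_rng(0)

def true_output(F, v):
    # Synthetic response with a maximum near F = 137 nN, v = 40 um/s
    return 10 - 0.002 * (F - 137.0) ** 2 - 0.01 * (v - 40.0) ** 2

# Full-factorial design over tip force (nN) and scan speed (um/s)
F, V = np.meshgrid(np.linspace(100, 170, 8), np.linspace(20, 60, 8))
F, V = F.ravel(), V.ravel()
y = true_output(F, V) + rng.normal(0, 0.05, F.size)

X = np.column_stack([np.ones_like(F), F, V, F**2, V**2, F * V])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve grad = 0 for the fitted quadratic surface
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -np.array([b[1], b[2]]))
print(opt)  # ≈ [137, 40]
```

With a real experiment the same regression would be fitted to measured voltages; the statistical model in the paper additionally handles tip abrasion, which is omitted here.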

  16. Global output feedback stabilisation of stochastic high-order feedforward nonlinear systems with time-delay

    Science.gov (United States)

    Zhang, Kemei; Zhao, Cong-Ran; Xie, Xue-Jun

    2015-12-01

    This paper considers the problem of output feedback stabilisation for stochastic high-order feedforward nonlinear systems with time-varying delay. By using homogeneous domination theory and resolving several troublesome obstacles in the design and analysis, an output feedback controller is constructed that renders the closed-loop system globally asymptotically stable in probability.

  17. Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2014-01-01

    Full Text Available An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the commonly considered velocity, acceleration, and jerk constraints, a dynamic performance constraint for each servo drive is also included in the optimization problem to improve tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the constraints along the optimal trajectory is proved, and a novel constraint-handling method is then proposed to enable a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple elliptical feedrate planning test demonstrates the effectiveness of the approach, and the practicability and robustness of the trajectories generated by the proposed approach are demonstrated with a butterfly-contour machining example.
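In the simplest velocity- and acceleration-limited case, minimum-time feedrate planning along a path reduces to a forward-backward pass over the squared-velocity profile. The sketch below illustrates that core idea only; the tracking-error and jerk constraints of the paper are omitted, and all limits are assumed example values:

```python
import numpy as np

# Minimal sketch of the classic forward-backward pass for a time-optimal
# velocity profile along a path, under velocity and acceleration limits.
# (Tracking-error and jerk constraints from the paper are omitted here.)
def min_time_profile(length=1.0, n=1000, vmax=0.5, amax=2.0):
    ds = length / n
    v = np.full(n + 1, vmax)
    v[0] = v[-1] = 0.0                # start and stop at rest
    for i in range(n):                # forward pass: acceleration limit
        v[i + 1] = min(v[i + 1], np.sqrt(v[i] ** 2 + 2 * amax * ds))
    for i in range(n - 1, -1, -1):    # backward pass: deceleration limit
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2 * amax * ds))
    t = np.sum(2 * ds / (v[:-1] + v[1:]))  # trapezoidal traversal time
    return v, t

v, t = min_time_profile()
print(round(t, 3))  # analytic optimum for these limits is 2.25 s
```

The resulting profile is the familiar trapezoid: accelerate at the limit, cruise at vmax, decelerate at the limit, which is the bang-bang structure the paper proves for the constrained problem.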

  18. Analysing global value chains using input-output economics : proceed with care

    NARCIS (Netherlands)

    Nomaler, Z.O.; Verspagen, B.

    2014-01-01

    Input-output economics has become a popular tool to analyse the international fragmentation of value chains, especially now that several multi-regional tables that cover large parts of the global economy have become available. It has been argued that these tables, when analysed with the help of the

  19. Analysing global value chains using input-output economics: Proceed with care

    NARCIS (Netherlands)

    Nomaler, Ö.; Verspagen, B.

    2014-01-01

    Input-output economics has become a popular tool to analyse the international fragmentation of value chains, especially now that several multi-regional tables that cover large parts of the global economy have become available. It has been argued that these tables, when analysed with the help of the

  20. Optimization of the Energy Output of Osmotic Power Plants

    Directory of Open Access Journals (Sweden)

    Florian Dinger

    2013-01-01

    Full Text Available On the way to a completely renewable energy supply, additional alternatives to hydroelectric, wind, and solar power have to be investigated. Osmotic power is such an alternative, with a theoretical global potential of up to 14,400 TWh per year (70% of global electricity consumption in 2008). It exploits the phenomenon that when fresh water mixes with oceanic salt water (e.g., at a river mouth), around 2.88 MJ of energy per m³ of fresh water is released. Here, we describe a new approach to deriving operational parameter settings for osmotic power plants using a pressure exchanger for optimal performance, either with respect to maximum generated power or maximum extracted energy. Up to now, only power optimization has been discussed in the literature, but when the fresh-water supply is considered a limiting factor, energy optimization emerges as the challenging task.
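The two headline figures in the abstract can be checked against each other with simple arithmetic; the 1000 m³/s river discharge below is an assumed example value, not from the paper:

```python
# Back-of-envelope sketch of the osmotic power figures from the abstract:
# mixing 1 m^3 of fresh water with seawater releases about 2.88 MJ.
E_PER_M3 = 2.88e6          # J per m^3 of fresh water

# Continuous power for a hypothetical river discharge of 1000 m^3/s
Q = 1000.0                 # m^3/s (assumed example value)
power_W = E_PER_M3 * Q
print(power_W / 1e9)       # -> 2.88 GW

# Fresh-water flow implied by the quoted 14400 TWh/yr global potential
E_year_J = 14400e12 * 3600                       # TWh -> J
flow_m3_per_s = E_year_J / E_PER_M3 / (365.25 * 24 * 3600)
print(round(flow_m3_per_s))                      # ~5.7e5 m^3/s
```

The implied fresh-water flow of roughly 570,000 m³/s is a sizeable fraction of global river discharge, consistent with this being a theoretical upper bound.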

  1. An Optimal Augmented Monotonic Tracking Controller for Aircraft Engines with Output Constraints

    Directory of Open Access Journals (Sweden)

    Jiakun Qin

    2017-01-01

    Full Text Available This paper proposes a novel min-max control scheme for aircraft engines, with the aim of transferring a set of regulated outputs between two set-points while ensuring that a set of auxiliary outputs remains within prescribed constraints. To this end, an optimal augmented monotonic tracking controller (OAMTC) is proposed, based on a linear plant with input integration, to enhance the ability of the control system to reject uncertainty in system parameters and ensure that limits are not crossed. The key idea is to use eigenvalue and eigenvector placement together with genetic algorithms to shape the output responses. The approach is validated by numerical simulation. The results show that the designed OAMTC controller can achieve satisfactory dynamic and steady-state performance and keep the auxiliary outputs within constraints in the transient regime.

  2. Observations of geographically correlated orbit errors for TOPEX/Poseidon using the global positioning system

    Science.gov (United States)

    Christensen, E. J.; Haines, B. J.; Mccoll, K. C.; Nerem, R. S.

    1994-01-01

    We have compared Global Positioning System (GPS)-based dynamic and reduced-dynamic TOPEX/Poseidon orbits over three 10-day repeat cycles of the ground track. The results suggest that the prelaunch joint gravity model (JGM-1) introduces geographically correlated errors (GCEs) with a strong meridional dependence. The global distribution and magnitude of these GCEs are consistent with a prelaunch covariance analysis, with estimated and predicted global rms errors of 2.3 and 2.4 cm, respectively. Repeating the analysis with the post-launch joint gravity model (JGM-2) suggests that a portion of the meridional dependence observed in JGM-1 still remains, with a global rms error of 1.2 cm.

  3. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
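A symplectic Euler step for a separable Hamiltonian updates the momentum with the old position and the position with the new momentum; its hallmark, bounded energy error, is what makes the time-discretization error representation above well behaved. A minimal sketch on the harmonic oscillator H = (p² + q²)/2 (not the control-problem Hamiltonian of the paper):

```python
# Sketch: symplectic Euler on the harmonic oscillator H = (p^2 + q^2)/2,
# the kind of Hamiltonian-system discretization the error representation
# in the abstract targets.
def symplectic_euler(q0, p0, h, n):
    q, p = q0, p0
    for _ in range(n):
        p = p - h * q          # momentum update uses the OLD position
        q = q + h * p          # position update uses the NEW momentum
    return q, p

q, p = symplectic_euler(1.0, 0.0, 1e-3, 10_000)
energy = 0.5 * (q * q + p * p)
print(abs(energy - 0.5))       # energy error stays O(h), not growing
```

An explicit Euler scheme run on the same problem would show the energy drifting monotonically upward, which is exactly what the symplectic structure prevents.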

  4. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters, which could provide significant guidance for the selection of modeling parameters in other multivariate calibration models.

  5. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters, which could provide significant guidance for the selection of modeling parameters in other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  6. Global optimization methods for engineering design

    Science.gov (United States)

    Arora, Jasbir S.

    1990-01-01

    The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality; a global solution, however, can be assured only under the assumption of convexity. If the constraint set S is compact and the cost function is continuous on it, the existence of a global minimum is guaranteed. Since no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy Inequality. The exhaustive search can be organized so that the entire design space need not be examined, which reduces the computational burden somewhat. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods, though more testing is needed and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; because the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
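The organized-search idea above is often approximated in practice by multistart local minimization: run a local minimizer from many starting points and keep the best local optimum. This is a sketch of that generic strategy, not of the zooming algorithm itself, and the multimodal test function is an assumption:

```python
import numpy as np

# Sketch of a multistart strategy: run a simple local minimizer from a
# grid of starting points and keep the best local optimum found.
def f(x):
    return x * x + 10 * np.sin(x)      # multimodal test function

def grad(x):
    return 2 * x + 10 * np.cos(x)

def local_descent(x, step=0.01, iters=2000):
    for _ in range(iters):             # fixed-step gradient descent
        x -= step * grad(x)
    return x

starts = np.linspace(-5, 5, 21)
candidates = [local_descent(x0) for x0 in starts]
best = min(candidates, key=f)
print(round(f(best), 3))               # global minimum ≈ -7.946
```

A single descent from x = 5 would stall in a local minimum with a much higher cost; the grid of starts is what recovers the global one, at the price of many local solves.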

  7. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    OpenAIRE

    Cohen, Aaron M.

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...

  8. Robust topology optimization accounting for spatially varying manufacturing errors

    DEFF Research Database (Denmark)

    Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts...... optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...

  9. Optimization of output power and transmission efficiency of magnetically coupled resonance wireless power transfer system

    Science.gov (United States)

    Yan, Rongge; Guo, Xiaoting; Cao, Shaoqing; Zhang, Changgeng

    2018-05-01

    Magnetically coupled resonance (MCR) wireless power transfer (WPT) is a promising technology for electric energy transmission, but if its system parameters are chosen poorly, output power and transmission efficiency will be low; optimal parameter design therefore has significant research value. In an MCR WPT system with a designated coil structure, the main parameters affecting output power and transmission efficiency are the distance between the coils, the resonance frequency, and the load resistance. Based on the established mathematical model and the differential evolution algorithm, the variation of output power and transmission efficiency with these parameters can be simulated. The simulation results show that the output power and transmission efficiency of both the two-coil and four-coil MCR WPT systems with a designated coil structure are improved, confirming the validity of the optimization method.
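The optimization can be sketched with the standard loop-analysis formulas for a two-coil series-series resonant link at resonance, driven by a tiny differential-evolution loop over the load resistance. All component values are illustrative assumptions, and this is not the paper's model or code:

```python
import numpy as np

# Two-coil series-series resonant link at resonance: loop analysis gives
# load power P_L and efficiency (standard circuit results; the component
# values below are illustrative assumptions).
def link(RL, Vs=10.0, R1=1.0, R2=1.0, w=2 * np.pi * 100e3, M=5e-6):
    wM2 = (w * M) ** 2
    I1 = Vs * (R2 + RL) / (R1 * (R2 + RL) + wM2)      # primary current
    P_L = (w * M * I1) ** 2 / (R2 + RL) ** 2 * RL      # load power
    eta = wM2 * RL / ((R2 + RL) * (R1 * (R2 + RL) + wM2))
    return P_L, eta

# Tiny differential-evolution loop (rand/1 with greedy selection) that
# searches the load resistance for maximum output power.
rng = np.random.default_rng(1)
pop = rng.uniform(0.1, 50.0, 20)
for _ in range(100):
    for i in range(pop.size):
        a, b, c = pop[rng.choice(pop.size, 3, replace=False)]
        trial = np.clip(a + 0.8 * (b - c), 0.1, 50.0)
        if link(trial)[0] > link(pop[i])[0]:
            pop[i] = trial
RL_best = max(pop, key=lambda r: link(r)[0])
print(round(RL_best, 2))
```

For this circuit the analytic power-optimal load is RL = R2 + (ωM)²/R1 ≈ 10.87 Ω, so the evolved value can be checked against theory; note that the efficiency-optimal load is different, which is why the paper treats both objectives.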

  10. Step-by-step optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    Exact variational calculations are treated for few-particle systems in the exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of ppμ, ddμ, ttμ homonuclear mesomolecules within an error of ≤ ±0.001 eV. The global chaos method turned out to be well applicable to the nuclear ³H and ³He systems.

  11. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high-fidelity transport solvers, optimizing the use of available computational resources and having a tool for assessing solution quality are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators, which are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator closely follows the spatial distribution pattern of the cell errors. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve a prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy than uniform refinement when resolving sharp flux gradients, for the same number of unknowns
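The indicator-driven AMR loop can be illustrated in one dimension: repeatedly split whichever cell has the largest local error indicator until every indicator falls below a tolerance. This toy uses the midpoint interpolation error of a sharp-gradient profile as the indicator, not the AHOT-N residual indicators of the paper:

```python
import numpy as np

# Toy indicator-driven AMR in one dimension: refine a piecewise-linear
# interpolant of a sharp-gradient profile by splitting the cell with the
# largest local error indicator (midpoint interpolation error).
f = lambda x: np.tanh(50 * (x - 0.5))

def refine(tol=1e-3, max_iter=2000):
    xs = list(np.linspace(0.0, 1.0, 5))
    for _ in range(max_iter):
        x = np.array(xs)
        mid = 0.5 * (x[:-1] + x[1:])
        # indicator: deviation of f at the cell midpoint from the chord
        eta = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))
        worst = int(np.argmax(eta))
        if eta[worst] < tol:
            break                         # all indicators below tolerance
        xs.insert(worst + 1, mid[worst])  # split worst cell (stays sorted)
    return np.array(xs)

mesh = refine()
h = np.diff(mesh)
print(len(mesh), h.min() < h.max() / 10)  # strongly graded mesh
```

The refined mesh concentrates tiny cells around the gradient at x = 0.5 while leaving the flat regions coarse, which is the resource saving over uniform refinement that the abstract reports.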

  12. Introduction to Nonlinear and Global Optimization

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Tóth, B.

    2010-01-01

    This self-contained text provides a solid introduction to global and nonlinear optimization, providing students of mathematics and interdisciplinary sciences with a strong foundation in applied optimization techniques. The book offers a unique hands-on and critical approach to applied optimization

  13. Conference on Convex Analysis and Global Optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems, from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms, due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference honored the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by th...

  14. Negotiation and Optimality in an Economic Model of Global Climate Change

    International Nuclear Information System (INIS)

    Gottinger, H.

    2000-03-01

    The paper addresses the problem of governmental intervention in a multi-country regime of controlling global climate change. Using a simplified case of a two-country, two-sector general equilibrium model the paper shows that the global optimal time path of economic outputs and temperature will converge to a unique steady state provided that consumers care enough about the future. To answer a set of questions relating to 'what will happen if governments decide to correct the problem of global warming?' we study the equilibrium outcome in a bargaining game where two countries negotiate an agreement on future consumption and production plans for the purpose of correcting the problem of climate change. It is shown that the agreement arising from such a negotiation process achieves the best outcome and that it can be implemented in decentralised economies by a system of taxes, subsidies and transfers. By employing the recent advances in non-cooperative bargaining theory, the agreement between two countries is derived endogenously through a well-specified bargaining procedure.

  15. Negotiation and Optimality in an Economic Model of Global Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Gottinger, H. [International Institute for Environmental Economics and Management IIEEM, University of Maastricht, Maastricht (Netherlands)

    2000-03-01

    The paper addresses the problem of governmental intervention in a multi-country regime of controlling global climate change. Using a simplified case of a two-country, two-sector general equilibrium model the paper shows that the global optimal time path of economic outputs and temperature will converge to a unique steady state provided that consumers care enough about the future. To answer a set of questions relating to 'what will happen if governments decide to correct the problem of global warming?' we study the equilibrium outcome in a bargaining game where two countries negotiate an agreement on future consumption and production plans for the purpose of correcting the problem of climate change. It is shown that the agreement arising from such a negotiation process achieves the best outcome and that it can be implemented in decentralised economies by a system of taxes, subsidies and transfers. By employing the recent advances in non-cooperative bargaining theory, the agreement between two countries is derived endogenously through a well-specified bargaining procedure.

  16. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications produce vast amounts of data, compressing them before storage/transmission is critical. Results from existing compressors reveal two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that adaptively partitions the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems and compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always guarantees compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
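The XOR-leading-zero quantity the compressor maximizes can be computed directly from the IEEE-754 bit patterns of consecutive values; a minimal sketch (not the authors' implementation):

```python
import struct

# Sketch of the XOR-leading-zero idea: how many leading bits two
# consecutive IEEE-754 doubles share, which the compressor maximizes
# by shifting the data before encoding the residual bits.
def xor_leading_zeros(a: float, b: float) -> int:
    ia = struct.unpack('>Q', struct.pack('>d', a))[0]
    ib = struct.unpack('>Q', struct.pack('>d', b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

print(xor_leading_zeros(1.0, 1.0))          # 64: identical values
print(xor_leading_zeros(1.0, 1.0000001))    # many shared leading bits
print(xor_leading_zeros(1.0, -1.0))         # 0: sign bit already differs
```

The more leading zeros the XOR of consecutive values has, the fewer residual bits need storing, which is why shifting the data to lengthen that shared prefix improves the compression factor.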

  17. Static inverter with synchronous output waveform synthesized by time-optimal-response feedback

    Science.gov (United States)

    Kernick, A.; Stechschulte, D. L.; Shireman, D. W.

    1976-01-01

    A time-optimal-response 'bang-bang' or 'bang-hang' technique, using four feedback control loops, synthesizes the static-inverter sinusoidal output waveform by self-oscillatory yet synchronous pulse-frequency modulation (SPFM). A single modular power stage per phase of AC output entails minimal circuit complexity while providing, by feedback synthesis, individual phase-voltage regulation, phase-position control, and inherent compensation for both line and load disturbances. Clipped-sinewave performance under off-limit load or input-voltage conditions is described, as are approaches to high power levels, three-phase arraying, and parallel modular connection.

  18. Stochastic and global optimization

    National Research Council Canada - National Science Library

    Dzemyda, Gintautas; Šaltenis, Vydūnas; Zhilinskas, A; Mockus, Jonas

    2002-01-01

    ... and Effectiveness of Controlled Random Search E. M. T. Hendrix, P. M. Ortigosa and I. García 129 9. Discrete Backtracking Adaptive Search for Global Optimization B. P. Kristinsdottir, Z. B. Zabinsky and...

  19. Step-by-step optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    Energy Technology Data Exchange (ETDEWEB)

    Frolov, A M

    1986-09-01

    Exact variational calculations are treated for few-particle systems in the exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of ppμ, ddμ, ttμ homonuclear mesomolecules within an error of ≤ ±0.001 eV. The global chaos method turned out to be well applicable to the nuclear ³H and ³He systems.

  20. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
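The screening-and-spread procedure described above can be sketched directly. The input rates below are made-up illustrative numbers (mm/day), and using the sample standard deviation (ddof=1) is an assumption about the paper's convention:

```python
import numpy as np

# Sketch of the screening-plus-spread procedure: keep products within
# +/-50% of the GPCP base estimate, then take the standard deviation of
# the survivors as the estimated systematic (bias) error.
def bias_error(base, products):
    products = np.asarray(products, dtype=float)
    kept = products[np.abs(products - base) <= 0.5 * base]
    s = kept.std(ddof=1)            # estimated bias error (assumed ddof=1)
    return s, s / kept.mean()       # absolute error and relative s/m

# Illustrative zonal-mean precipitation rates in mm/day (made-up numbers)
s, rel = bias_error(3.0, [2.8, 3.3, 3.1, 6.5, 2.9])
print(round(s, 3), round(rel, 3))   # the 6.5 outlier is screened out
```

In the paper this calculation is done per grid box and zonal mean, and the gridded s values are then area-averaged to produce the regional and global error estimates quoted above.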

  1. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
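One way to see why temporally correlated random error matters for emission estimates: for AR(1) errors with autocorrelation ρ, the variance of an n-year sum is σ²[n + 2Σₖ(n−k)ρᵏ] rather than the independent-error value nσ². A sketch with assumed example numbers (these are not the paper's values):

```python
import numpy as np

# Variance of a sum of n AR(1)-correlated annual errors with lag-1
# autocorrelation rho: sigma^2 * (n + 2 * sum_k (n - k) * rho^k).
def var_of_sum(n, sigma, rho):
    k = np.arange(1, n)
    return sigma**2 * (n + 2 * np.sum((n - k) * rho**k))

n, sigma = 10, 0.3                       # a decade of errors, Pg C/yr
print(round(var_of_sum(n, sigma, 0.0), 3))   # -> 0.9, independent case
print(round(var_of_sum(n, sigma, 0.95), 3))  # much larger when persistent
```

Strong persistence (ρ near 1) inflates the decadal variance by nearly an order of magnitude here, which is why treating national reporting errors as independent would badly understate the fossil-fuel emission uncertainty.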

  2. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures

  3. Global optimization and simulated annealing

    NARCIS (Netherlands)

    Dekkers, A.; Aarts, E.H.L.

    1988-01-01

    In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of R^n in which some real-valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing
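
The record above is truncated, but the method it names is standard. Below is a minimal simulated-annealing sketch for minimizing a multimodal function of one variable; the test function, initial temperature and cooling rate are illustrative choices, not taken from the paper:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=5.0, cooling=0.995, iters=5000, seed=0):
    """Minimize f by a basic simulated-annealing loop (1-D for simplicity)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbour of current point
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx           # track the best point ever visited
        t *= cooling                          # geometric cooling schedule
    return best, fbest

# Multimodal test function with global minimum at x = 0 and local minima at integers
f = lambda x: x * x + 2.0 * (1.0 - math.cos(2.0 * math.pi * x))
x, fx = simulated_annealing(f, x0=4.0)
```

The uphill-acceptance rule is what distinguishes this from plain local descent: at high temperature the walk can cross the cosine barriers between local minima, and as the temperature decays the search settles into a basin.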

  4. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

    Science.gov (United States)

    Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

    1997-01-01

    The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation. Results from the use of the basic system to evaluate

  5. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    Full Text Available We present a general theory of estimation of analysis error covariances based on cross-validation as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated, without relying on the optimality assumption. This approach is used to obtain near optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates according to the method of Hollingsworth-Lönnberg, Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, to conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.
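
The passive-residual estimate described above can be illustrated in a few lines: when an analysis never assimilated an observation, the analysis error and that observation's error are uncorrelated, so the variance of the observation-minus-analysis residuals at passive sites is the sum of the analysis-error and observation-error variances. A toy sketch with hypothetical numbers (in practice the observation-error variance would come from instrument characterization):

```python
import statistics

def analysis_error_variance(omA_passive, obs_error_var):
    """Estimate analysis-error variance from passive (withheld) observations:
    var(O - A) = sigma_a^2 + sigma_o^2 when the analysis did not use those
    observations, so their errors are uncorrelated with the analysis error."""
    return statistics.pvariance(omA_passive) - obs_error_var

# Hypothetical O - A residuals at passive sites, and an assumed obs-error variance
residuals = [1.1, -0.8, 0.3, 1.7, -1.2, 0.5]
sigma_a2 = analysis_error_variance(residuals, obs_error_var=0.5)
```

Note that no optimality of the analysis is assumed; the decomposition only requires independence of the passive observation errors from the analysis error.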

  6. Validation and Error Characterization for the Global Precipitation Measurement

    Science.gov (United States)

    Bidwell, Steven W.; Adams, W. J.; Everett, D. F.; Smith, E. A.; Yuter, S. E.

    2003-01-01

    The Global Precipitation Measurement (GPM) is an international effort to increase scientific knowledge on the global water cycle with specific goals of improving the understanding and the predictions of climate, weather, and hydrology. These goals will be achieved through several satellites specifically dedicated to GPM along with the integration of numerous meteorological satellite data streams from international and domestic partners. The GPM effort is led by the National Aeronautics and Space Administration (NASA) of the United States and the National Space Development Agency (NASDA) of Japan. In addition to the spaceborne assets, international and domestic partners will provide ground-based resources for validating the satellite observations and retrievals. This paper describes the validation effort of Global Precipitation Measurement to provide quantitative estimates on the errors of the GPM satellite retrievals. The GPM validation approach will build upon the research experience of the Tropical Rainfall Measuring Mission (TRMM) retrieval comparisons and its validation program. The GPM ground validation program will employ instrumentation, physical infrastructure, and research capabilities at Supersites located in important meteorological regimes of the globe. NASA will provide two Supersites, one in a tropical oceanic and the other in a mid-latitude continental regime. GPM international partners will provide Supersites for other important regimes. Those objectives or regimes not addressed by Supersites will be covered through focused field experiments. This paper describes the specific errors that GPM ground validation will address, quantify, and relate to the GPM satellite physical retrievals. GPM will attempt to identify the source of errors within retrievals including those of instrument calibration, retrieval physical assumptions, and algorithm applicability. With the identification of error sources, improvements will be made to the respective calibration

  7. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Science.gov (United States)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future performance of watershed behavior under varying climate conditions. This study investigated calibration performances according to the length of the calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided close calibration performances under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performances according to different calibration periods from one year to seven years were hard to generalize, because the four hydrologic models have different levels of complexity and different years have different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  8. System-Level Optimization of a DAC for Hearing-Aid Audio Class D Output Stage

    DEFF Research Database (Denmark)

    Pracný, Peter; Jørgensen, Ivan Harald Holger; Bruun, Erik

    2013-01-01

    This paper deals with system-level optimization of a digital-to-analog converter (DAC) for a hearing-aid audio Class D output stage. We discuss the ΣΔ modulator system-level design parameters: the order, the oversampling ratio (OSR) and the number of bits in the quantizer. We show that combining a reduction of the OSR with an increase of the order results in considerable power savings while the audio quality is kept. For further savings in the ΣΔ modulator, overdesign and subsequent coarse coefficient quantization are used. A figure of merit (FOM) is introduced to confirm this optimization approach by comparing two ΣΔ modulator designs. The proposed optimization has impact on the whole hearing-aid audio back-end system, including less hardware in the interpolation filter and half the switching rate in the digital-pulse-width-modulation (DPWM) block and Class D output stage.

  9. Relationship between cardiac output and effective renal plasma flow in patients with cardiac disease

    Energy Technology Data Exchange (ETDEWEB)

    McGriffin, D; Tauxe, W N; Lewis, C; Karp, R; Mantle, J

    1984-12-01

    The relationship between effective renal plasma flow (ERPF) and cardiac output was examined in 46 patients (22 with congestive heart failure and 24 following cardiac surgical procedures) by simultaneously measuring the global ERPF by the single-injection method and cardiac output by the thermodilution method. Of the patients in the heart-failure group, 21 also had pulmonary artery end diastolic pressure (PAEDP) recorded at the same time. ERPF and cardiac output were found to be related by the regression equation: cardiac output = 2.08 + 0.0065 ERPF (r = 0.80), with a SE of estimate of 0.81 l/min. ERPF and PAEDP were related by the regression equation: PAEDP = 42.02 - 0.0675 ERPF (r = 0.86), with a SE of estimate of 5.5 mm Hg. ERPF may be a useful noninvasive method of estimating cardiac output if it is known that no intrinsic kidney disease is present, and if the error of 0.81 l/min (1 SE of estimate) is within the range of clinical usefulness. The error is principally attributable to the determination of cardiac output by the thermodilution method.
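
As an arithmetic check on the reported regressions, the two equations can be evaluated directly. ERPF units are assumed here to be ml/min (the abstract does not state them), and the example flow value is hypothetical:

```python
def cardiac_output_from_erpf(erpf_ml_min):
    """Cardiac output (l/min) from effective renal plasma flow, per the
    reported regression (r = 0.80, SE of estimate 0.81 l/min)."""
    return 2.08 + 0.0065 * erpf_ml_min

def paedp_from_erpf(erpf_ml_min):
    """Pulmonary artery end-diastolic pressure (mm Hg) from ERPF,
    per the reported regression (r = 0.86, SE of estimate 5.5 mm Hg)."""
    return 42.02 - 0.0675 * erpf_ml_min

# Hypothetical example: an ERPF of 400 ml/min
co = cardiac_output_from_erpf(400)   # 2.08 + 2.6  = 4.68 l/min
paedp = paedp_from_erpf(400)         # 42.02 - 27.0 = 15.02 mm Hg
```

Any estimate read off these lines carries the quoted standard errors of estimate, so the ±0.81 l/min band on cardiac output is the figure the abstract weighs against clinical usefulness.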

  10. Quantification of scientific output in cardiovascular medicine: A perspective based on global data

    NARCIS (Netherlands)

    G.A. Rodriguez-Granillo (Gaston); A. Rodriguez (Alfredo Chapin); N. Bruining (Nico); J. Milei (José); J. Aoki (Jiro); K. Tsuchida (Keiichi); R. del Valle-Fernández (Raquel); C.A. Arampatzis (Chourmouzios); A.T.L. Ong (Andrew); P.A. Lemos Neto (Pedro); R. Ayala (Rosa); H.M. Garcia-Garcia (Hector); F. Saia (Francesco); M. Valgimigli (Marco); E.S. Regar (Eveline); E. McFadden (Eugene); G.G. Biondi-Zoccai (Giuseppe); E. Barbenza (Ezequiel); P. Schoenhagen (Paul); P.W.J.C. Serruys (Patrick)

    2013-01-01

    Aims: We sought to explore whether global and regional scientific output in cardiovascular medicine is associated with economic variables and follows the same trend as medicine and as science overall. Methods and results: We registered the number of documents, number of citations,

  11. Optimal full motion video registration with rigorous error propagation

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  12. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced from radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error relates to the surface quality of a DEM, and hence to geomorphological and hydrologic applications of DEM data. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets: Shuttle Radar Terrain Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS World 3D-30m (AW3D) data. After processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. The horizontal distance between every point pair was computed, so the relative error could be obtained as slope values based on the vertical error difference and the horizontal distance of the point pairs. Finally, a false slope ratio (FSR) index was computed by analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were compared by category for the four DEM datasets under different slope classes. Results show that, overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error, followed by the SRTM1 data, whose values are a little higher than those of AW3D; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D performs best in all slope classes, a little better than SRTM1; with slope increasing
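
The four vertical-error statistics compared above (mean error, mean absolute error, RMSE and standard deviation) can be sketched as follows; the sample elevations are hypothetical, not from the study:

```python
import math

def error_stats(dem_values, gcp_values):
    """Vertical-error statistics of DEM heights against GCP (e.g. ICESat/GLA14)
    elevations: mean error, mean absolute error, RMSE and standard deviation."""
    errs = [d - g for d, g in zip(dem_values, gcp_values)]
    n = len(errs)
    me = sum(errs) / n                                    # mean (signed) error
    mae = sum(abs(e) for e in errs) / n                   # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errs) / n)        # root mean square error
    sd = math.sqrt(sum((e - me) ** 2 for e in errs) / n)  # population std deviation
    return me, mae, rmse, sd

# Hypothetical sample: DEM vs GCP elevations in metres
me, mae, rmse, sd = error_stats([101.2, 99.5, 100.8, 98.9],
                                [100.0, 100.0, 100.0, 100.0])  # me ≈ 0.1, mae ≈ 0.9
```

The four measures are related: RMSE² = SD² + ME², so a DEM with a systematic bias shows RMSE above its standard deviation even when the scatter is small.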

  13. Cut-off Grade Optimization for Maximizing the Output Rate

    Directory of Open Access Journals (Sweden)

    A. Khodayari

    2012-12-01

    Full Text Available In open-pit mining, one of the first decisions that must be made in the production planning stage, after completing the design of the final pit limits, is determining the processing plant cut-off grade. Since this grade has an essential effect on operations, choosing the optimum cut-off grade is of considerable importance. Different goals may be used for determining the optimum cut-off grade. One of these goals may be maximizing the output rate (amount of product per year), which is very important, especially from marketing and market share points of view. The objective of this research is determining the optimum cut-off grade of the processing plant in order to maximize the output rate. For performing this optimization, an Operations Research (OR) model has been developed. The objective function of this model is the output rate, which must be maximized. This model has two operational constraints, namely mining and processing restrictions. For solving the model a heuristic method has been developed. Results of the research show that the optimum cut-off grade for satisfying the pre-stated goal is the balancing grade of the mining and processing operations, and the maximum production rate is a function of the maximum capacity of the processing plant and the average grade of ore that, according to the above optimum cut-off grade, must be sent to the plant.
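
This is not the paper's OR model, but the balancing-grade idea it arrives at can be sketched: the optimum cut-off makes the fraction of mined material above the cut-off exactly fill the plant. A toy example with a hypothetical grade distribution and capacities (assumes plant capacity below mine capacity):

```python
def balancing_cutoff(grades, mine_cap, plant_cap):
    """Cut-off grade at which mining and processing are balanced: the
    fraction of mined material above the cut-off just fills the plant.
    `grades` is a sample of block grades; capacities share one unit (t/yr)."""
    target = plant_cap / mine_cap              # required ore fraction (< 1 assumed)
    g_sorted = sorted(grades, reverse=True)    # highest-grade material first
    k = max(1, round(target * len(g_sorted)))  # number of blocks that fit the plant
    return g_sorted[min(k, len(g_sorted)) - 1]

# Hypothetical: uniform grades 0.000..0.999 (%), mine 10 Mt/yr, plant 4 Mt/yr
grades = [i / 1000 for i in range(1000)]
cutoff = balancing_cutoff(grades, mine_cap=10.0, plant_cap=4.0)   # 0.6 %
```

With 40% of mined material accepted, only the top 40% of grades pass, so the cut-off lands where the grade distribution's upper tail holds exactly the plant's share.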

  14. Research of Compound Control for DC Motor System Based on Global Sliding Mode Disturbance Observer

    Directory of Open Access Journals (Sweden)

    He Zhang

    2014-01-01

    Full Text Available Aiming at the problems of modeling errors, parameter variations, and load moment disturbances in a DC motor control system, a global sliding mode disturbance observer (GSMDO) is proposed based on global sliding mode (GSM) control theory. The output of the GSMDO is used as the disturbance compensation in the control system, which can improve the robust performance of the DC motor control system. Based on the designed GSMDO in the inner loop, a compound controller, composed of a feedback controller and a feedforward controller, is proposed in order to realize the position tracking of the DC motor system. The gains of the feedback controller are obtained by means of linear quadratic regulator (LQR) optimal control theory. Simulation results show that the proposed control scheme possesses better tracking properties and stronger robustness against modeling errors, parameter variations, and friction moment disturbances. Moreover, its structure is simple; therefore it is easy to implement in engineering.

  15. The Optimal Steering Control System using Imperialist Competitive Algorithm on Vehicles with Steer-by-Wire System

    Directory of Open Access Journals (Sweden)

    F. Hunaini

    2015-03-01

    Full Text Available Steer-by-wire is an electrical steering system for vehicles whose dynamic performance is expected to improve with the development of an optimal control system. This paper aims to optimize two control systems, namely Fuzzy Logic Control (FLC) and Proportional, Integral and Derivative (PID) control, on the vehicle steering system using the Imperialist Competitive Algorithm (ICA). The control systems are built in a cascade: FLC suppresses errors in the lateral motion, and the PID control minimizes the error in the yaw motion of the vehicle. The FLC has two inputs (error and delta error) and a single output. Each input and output consists of three membership functions (MFs): a triangular one for the linguistic term "zero" and two trapezoidal ones for the terms "negative" and "positive". In order to work optimally, each MF is optimized using ICA to find the most appropriate position and width. Likewise, in the PID control, the constants of the Proportional, Integral and Derivative terms are also optimized using ICA, so six parameters of the control system are simultaneously optimized by ICA. Simulations were performed on a vehicle model with 10 Degrees Of Freedom (DOF); the plant input is the steering variable expressed as the desired trajectory, and the plant outputs are the lateral and yaw motions. The simulation results showed that the FLC-PID control system optimized using ICA can keep the vehicle on the desired trajectory with lower error and at higher speed limits than one optimized with Particle Swarm Optimization (PSO).

  16. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve global optimization problems shows that applying a single such algorithm is not always effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite different algorithms, or identical algorithms with different values of their free parameters; thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, a study of its efficiency on a number of known benchmark problems, and a problem of dimensional optimization of a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO HS, present results of computing experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.

  17. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.

  18. Optimizing learning of a locomotor task: amplifying errors as needed.

    Science.gov (United States)

    Marchal-Crespo, Laura; López-Olóriz, Jorge; Jaeger, Lukas; Riener, Robert

    2014-01-01

    Research on motor learning has emphasized that errors drive motor adaptation. Accordingly, several researchers have proposed robotic training strategies that amplify movement errors rather than decrease them. In this study, the effect of different robotic training strategies that amplify errors on learning a complex locomotor task was investigated. The experiment was conducted with a one-degree-of-freedom robotic stepper (MARCOS). Subjects were requested to actively coordinate their legs in a desired gait-like pattern in order to track a Lissajous figure presented on a visual display. Learning with three different training strategies was evaluated: (i) No perturbation: the robot follows the subjects' movement without applying any perturbation, (ii) Error amplification: existing errors were amplified with repulsive forces proportional to errors, (iii) Noise disturbance: errors were evoked with a randomly-varying force disturbance. Results showed that training without perturbations was especially suitable for a subset of initially less-skilled subjects, while error amplification seemed to benefit more skilled subjects. Training with error amplification, however, limited transfer of learning. Random disturbing forces benefited learning and promoted transfer in all subjects, probably because it increased attention. These results suggest that learning a locomotor task can be optimized when errors are randomly evoked or amplified based on subjects' initial skill level.

  19. 4th International Conference on Frontiers in Global Optimization

    CERN Document Server

    Pardalos, Panos

    2004-01-01

    Global Optimization has emerged as one of the most exciting new areas of mathematical programming. Global optimization has received wide attention from many fields in the past few years, due to the success of new algorithms for addressing previously intractable problems from diverse areas such as computational chemistry and biology, biomedicine, structural optimization, computer sciences, operations research, economics, and engineering design and control. This book contains refereed invited papers submitted at the 4th international conference on Frontiers in Global Optimization held at Santorini, Greece during June 8-12, 2003. Santorini is one of the few sites of Greece with wild beauty created by the explosion of a volcano which is in the middle of the gulf of the island. The mystic landscape, with its numerous multi-extrema, was an inspiring location particularly for researchers working on global optimization. The three previous conferences on "Recent Advances in Global Optimization", "State-of-the-...

  20. A Direct Search Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Enrique Baeyens

    2016-06-01

    Full Text Available A direct search algorithm is proposed for minimizing an arbitrary real-valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm’s performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.

  1. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  2. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    Abstract. This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  3. Estimation of Valve Stiction Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    S. Sivagamasundari

    2011-06-01

    Full Text Available This paper presents a procedure for quantifying valve stiction in control loops based on particle swarm optimization. Measurements of the Process Variable (PV) and Controller Output (OP) are used to estimate the parameters of a Hammerstein system, consisting of the connection of a nonlinear control valve stiction model and a linear process model. The parameters of the Hammerstein model are estimated using particle swarm optimization from the input-output data, by minimizing the error between the true model output and the identified model output. Using particle swarm optimization, Hammerstein models with known nonlinear structure and unknown parameters can be identified. A cost-effective optimization technique is adopted to find the best valve stiction models representing a more realistic valve behavior in the oscillating loop. Simulation and practical laboratory control system results are included, which demonstrate the effectiveness and robustness of the identification scheme.
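
A minimal global-best particle swarm sketch in the same spirit: here it fits a simple hypothetical linear model rather than the valve-stiction Hammerstein model, by minimizing the squared error between measured and model output. All parameter values (inertia, acceleration coefficients, bounds) are conventional illustrative choices:

```python
import random

def pso(cost, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=1):
    """Basic global-best particle swarm minimizing `cost` over [lo, hi]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]       # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + attraction to personal best + attraction to global best
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost

# Identify parameters (a, b) of y = a*u + b from noise-free input-output data
true_a, true_b = 2.0, -1.0
data = [(u, true_a * u + true_b) for u in range(10)]
sse = lambda p: sum((y - (p[0] * u + p[1])) ** 2 for u, y in data)
params, err = pso(sse, dim=2)
```

In the paper's setting, `cost` would instead simulate the stiction-plus-linear-process model for candidate parameters and compare its output against the recorded PV data.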

  4. On benchmarking Stochastic Global Optimization Algorithms

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Lancinskas, A.

    2015-01-01

    A multitude of heuristic stochastic optimization algorithms have been described in literature to obtain good solutions of the box-constrained global optimization problem often with a limit on the number of used function evaluations. In the larger question of which algorithms behave well on which

  5. Global Optimization of Nonlinear Blend-Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Pedro A. Castillo Castillo

    2017-04-01

    Full Text Available The scheduling of gasoline-blending operations is an important problem in the oil refining industry. This problem not only exhibits the combinatorial nature that is intrinsic to scheduling problems, but also non-convex nonlinear behavior, due to the blending of various materials with different quality properties. In this work, a global optimization algorithm is proposed to solve a previously published continuous-time mixed-integer nonlinear scheduling model for gasoline blending. The model includes blend recipe optimization, the distribution problem, and several important operational features and constraints. The algorithm employs piecewise McCormick relaxation (PMCR and normalized multiparametric disaggregation technique (NMDT to compute estimates of the global optimum. These techniques partition the domain of one of the variables in a bilinear term and generate convex relaxations for each partition. By increasing the number of partitions and reducing the domain of the variables, the algorithm is able to refine the estimates of the global solution. The algorithm is compared to two commercial global solvers and two heuristic methods by solving four examples from the literature. Results show that the proposed global optimization algorithm performs on par with commercial solvers but is not as fast as heuristic approaches.
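
Piecewise McCormick relaxation is built from the standard McCormick envelope of a bilinear term w = x·y; below is a minimal sketch of the envelope bounds for a single partition (the PMCR refinement step, which splits [xl, xu] into smaller pieces to tighten the bounds, is omitted):

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower/upper bounds on w = x*y from the four McCormick inequalities,
    valid whenever xl <= x <= xu and yl <= y <= yu."""
    w_lo = max(xl * y + x * yl - xl * yl,    # two linear underestimators
               xu * y + x * yu - xu * yu)
    w_hi = min(xu * y + x * yl - xu * yl,    # two linear overestimators
               xl * y + x * yu - xl * yu)
    return w_lo, w_hi

# At x = 2, y = 3 with x in [0, 4], y in [1, 5], the true product is 6
lo, hi = mccormick_bounds(2.0, 3.0, 0.0, 4.0, 1.0, 5.0)   # bounds [2.0, 10.0]
```

Because the four inequalities are linear in x and y, a solver can use them as a convex relaxation of the bilinear constraint; shrinking the variable domains per partition is what lets the algorithm above refine its estimate of the global optimum.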

  6. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints.

  7. First photoelectron timing error evaluation of a new scintillation detector model

    International Nuclear Information System (INIS)

    Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III

    1991-01-01

    In this paper, a general timing system model developed for a scintillation detector is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulate a BGO scintillator with a Burle 8575 PMT using first photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. The authors find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to be dependent upon the energy of the scintillation. In the low energy part of the spectrum, the authors find a low threshold is optimal, while for higher energy pulses the optimal threshold increases

  8. First photoelectron timing error evaluation of a new scintillation detector model

    International Nuclear Information System (INIS)

    Petrick, N.; Clinthorne, N.H.; Rogers, W.L.; Hero, A.O. III

    1990-01-01

    In this paper, a general timing system model for a scintillation detector that was developed, is experimentally evaluated. The detector consists of a scintillator and a photodetector such as a photomultiplier tube or an avalanche photodiode. The model uses a Poisson point process to characterize the light output from the scintillator. This timing model was used to simulated a BGO scintillator with a Burle 8575 PMT using first photoelectron timing detection. Evaluation of the model consisted of comparing the RMS error from the simulations with the error from the actual detector system. We find that the general model compares well with the actual error results for the BGO/8575 PMT detector. In addition, the optimal threshold is found to be dependent upon the energy of the scintillation. In the low energy part of the spectrum, we find a low threshold is optimal while for higher energy pulses the optimal threshold increases

  9. Microwave tomography global optimization, parallelization and performance evaluation

    CERN Document Server

    Noghanian, Sima; Desell, Travis; Ashtari, Ali

    2014-01-01

    This book provides a detailed overview of the use of global optimization and parallel computing in microwave tomography techniques. The book focuses on techniques that are based on global optimization and electromagnetic numerical methods. The authors present parallelization techniques for homogeneous and heterogeneous computing architectures on high-performance and general-purpose futuristic computers. The book also discusses the multi-level optimization technique, the hybrid genetic algorithm, and their application in breast cancer imaging.

  10. Evolutionary global optimization, manifolds and applications

    CERN Document Server

    Aguiar e Oliveira Junior, Hime

    2016-01-01

    This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory....

  11. In-Flight Pitot-Static Calibration

    Science.gov (United States)

    Foster, John V. (Inventor); Cunningham, Kevin (Inventor)

    2016-01-01

    A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.

  12. Investigation, development, and application of optimal output feedback theory. Volume 3: The relationship between dynamic compensators and observers and Kalman filters

    Science.gov (United States)

    Broussard, John R.

    1987-01-01

    Relationships between observers, Kalman filters, and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of the discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and on the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.

  13. Model-data fusion across ecosystems: from multisite optimizations to global simulations

    Science.gov (United States)

    Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.

    2014-11-01

    This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performances in tropical evergreen broadleaf forests point to deficiencies regarding the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctively improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index

  14. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
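    The sample entropy statistic mentioned above is easy to state: SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates agreeing within tolerance r and A counts the same for length m+1. A brute-force O(n²) sketch (not the DASim implementation):

```python
import math, random

def sample_entropy(series, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m within
    Chebyshev tolerance r; A does the same for length m + 1."""
    n = len(series)
    def matches(mm):
        c = 0
        for i in range(n - mm + 1):
            for j in range(i + 1, n - mm + 1):
                if all(abs(series[i + k] - series[j + k]) <= r for k in range(mm)):
                    c += 1
        return c
    B, A = matches(m), matches(m + 1)
    return -math.log(A / B) if A and B else float("inf")

random.seed(2)
regular = [i % 2 for i in range(120)]              # perfectly periodic "schedule"
irregular = [random.random() for _ in range(120)]  # no repeating structure
# lower SampEn = more regular activity pattern
assert sample_entropy(regular) < sample_entropy(irregular)
```

This is the quantity the harmony search loop tunes: adjusting activity parameters until the schedule's SampEn hits a target regularity.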

  15. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  16. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
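    The core effect admits a compact numerical illustration: with a symmetric (Gaussian) posterior over a parameter but an error cost that penalizes overestimates ten times more than underestimates (arbitrary illustrative numbers, not the paper's model), the cost-minimizing estimate shifts systematically away from the maximum-likelihood value:

```python
import random

def expected_cost(est, samples, c_under=1.0, c_over=10.0):
    """Mean cost of the estimation error est - theta over posterior samples;
    overestimates cost c_over per unit, underestimates c_under per unit."""
    total = 0.0
    for theta in samples:
        e = est - theta
        total += c_over * e if e > 0 else -c_under * e
    return total / len(samples)

random.seed(0)
posterior = [random.gauss(0.0, 1.0) for _ in range(4000)]  # symmetric uncertainty
grid = [i / 50 for i in range(-150, 151)]
best = min(grid, key=lambda g: expected_cost(g, posterior))
# asymmetric cost drags the optimal estimate below the ML value of 0
assert best < -0.5
```

For this linear asymmetric cost the optimum is a quantile of the posterior rather than its mode, which is exactly the deviation from the maximum-likelihood estimate that the model predicts should appear in perception.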

  17. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    Full Text Available With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) data with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.

  18. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    Science.gov (United States)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.

  19. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework for dealing with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
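    A sketch of the kind of decoding at issue, using a made-up 3-class one-vs-one coding matrix. Normalizing by the number of non-zero positions is one simple way to counter the bias the zero symbol introduces; the paper's proposed measures differ in detail:

```python
def ecoc_decode(pred, M):
    """Nearest-codeword decoding for a ternary ECOC matrix M (rows = classes,
    entries in {-1, 0, +1}), skipping 'do not care' (0) entries and
    normalizing by the bits each class actually uses, so sparse rows
    are not unfairly favored."""
    best, best_d = None, float("inf")
    for cls, row in enumerate(M):
        active = [(c, p) for c, p in zip(row, pred) if c != 0]
        d = sum(0.5 * (1 - c * p) for c, p in active) / len(active)
        if d < best_d:
            best, best_d = cls, d
    return best

# hypothetical one-vs-one design: each column is one binary problem
M = [[+1, +1,  0],
     [-1,  0, +1],
     [ 0, -1, -1]]
assert ecoc_decode([+1, +1, -1], M) == 0
assert ecoc_decode([-1,  0, +1], M) == 1
```

Without the normalization, a class whose row contains many zeros accumulates distance over fewer bits and is systematically preferred, which is one of the two biases the paper analyzes.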

  20. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly than type II error; (2) 22 ratios distinguish between bankrupt and non-bankrupt groups; (3) 2 financial ratios proved to be effective in predicting bankruptcy; (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score; (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved able to minimize the cost incurred from classification errors.
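    The idea of a cost-weighted cut-off can be sketched as follows, with hypothetical scores and the paper's 59.83:1 cost ratio. Here lower scores indicate distress, and a firm is classified as bankrupt when its score falls below the cut-off:

```python
def optimal_cutoff(bankrupt, healthy, cost_I=59.83, cost_II=1.0):
    """Pick the cut-off minimizing total misclassification cost.
    Type I: a bankrupt firm scored at or above the cut-off (missed failure);
    type II: a healthy firm scored below it (false alarm)."""
    def total_cost(c):
        type_I = sum(s >= c for s in bankrupt)
        type_II = sum(s < c for s in healthy)
        return cost_I * type_I + cost_II * type_II
    return min(sorted(set(bankrupt + healthy)), key=total_cost)

bankrupt = [0.5, 1.0, 1.5, 2.0]      # hypothetical scores of failed firms
healthy = [1.8, 2.5, 3.0, 3.5, 4.0]  # hypothetical scores of survivors
cut = optimal_cutoff(bankrupt, healthy)
# the heavy type I penalty pushes the cut-off high enough to flag every failure
assert all(s < cut for s in bankrupt)
```

With a symmetric cost the cut-off would sit between the two groups; the 59.83:1 asymmetry deliberately trades extra false alarms for fewer missed bankruptcies, which is the paper's point.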

  1. Global industrial impact coefficient based on random walk process and inter-country input-output table

    Science.gov (United States)

    Xing, Lizhi; Dong, Xianlei; Guan, Jun

    2017-04-01

    The input-output table describes the national economic system in comprehensive detail, containing supply and demand information among industrial sectors. Complex network theory, a framework for measuring the structure of complex systems, can describe the structural characteristics of the internal structure of the research object by measuring structural indicators of the social and economic system, revealing the complex relationship between the inner hierarchy and the external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes the competitive advantage of a nation is equal to the overall performance of its domestic sectors' impact on the GVC. From the perspective of econophysics, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of open-economy macroeconomics in the process of globalization; (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact could reveal its international competitive advantage.
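    The random-walk reading of an input-output table can be sketched with a toy three-sector flow matrix (invented numbers): a walker leaves sector i for sector j with probability proportional to the value of goods i supplies to j, and the stationary distribution ranks sectors by how much value flow they attract. The code below is plain PageRank, a simple stand-in for the paper's random-walk-based GIIC:

```python
def pagerank(flows, d=0.85, iters=100):
    """Damped random-walk centrality on a monetary flow matrix:
    flows[i][j] is the value sector i supplies to sector j."""
    n = len(flows)
    out = [sum(row) for row in flows]
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [(1 - d) / n
             + d * sum(r[i] * flows[i][j] / out[i] for i in range(n) if out[i])
             for j in range(n)]
    return r

# toy 3-sector economy: sector 2 absorbs most intermediate flows
flows = [[0, 5, 10],
         [2, 0, 8],
         [1, 1, 0]]
r = pagerank(flows)
assert max(range(3), key=lambda j: r[j]) == 2
```

Sectors that sit downstream of large value flows accumulate stationary probability, which is the intuition behind insight (1) above.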

  2. Mechanistic site-based emulation of a global ocean biogeochemical model (MEDUSA 1.0 for parametric analysis and calibration: an application of the Marine Model Optimization Testbed (MarMOT 1.1

    Directory of Open Access Journals (Sweden)

    J. C. P. Hemmings

    2015-03-01

    Full Text Available Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of the target model output. In general, chlorophyll

  3. Optimal piston motion for maximum net output work of Daniel cam engines with low heat rejection

    International Nuclear Information System (INIS)

    Badescu, Viorel

    2015-01-01

    Highlights: • The piston motion of low heat rejection compression ignition engines is optimized. • A realistic model taking into account the cooling system is developed. • The optimized cam is smaller for cylinders without thermal insulation. • The optimized cam size depends on ignition moment and cooling process intensity. - Abstract: Compression ignition engines based on classical tappet-crank systems cannot provide optimal piston motion. Cam engines are more appropriate for this purpose. In this paper the piston motion of a Daniel cam engine is optimized. Piston acceleration is taken as a control. The objective is to maximize the net output work during the compression and power strokes. A major research effort has been allocated in the last two decades to the development of low heat rejection engines. A thermally insulated cylinder is considered and a realistic model taking into account the cooling system is developed. The sinusoidal approximation of piston motion in the classical tappet-crank system overestimates the engine efficiency. The exact description of the piston motion in the tappet-crank system is used here as a reference. The radiation process has negligible effects during the optimization. The approach with no constraint on piston acceleration is a reasonable approximation. The net output work is much larger (by 12–13%) for the optimized system than for the classical tappet-crank system, for similar thickness of cylinder walls and thermal insulation. Low heat rejection measures are not of significant importance for optimized cam engines. The optimized cam is smaller for a cylinder without thermal insulation than for an insulated cylinder (by up to 8%, depending on the local polar radius). The auto-ignition moment is not a parameter of significant importance for optimized cam engines. However, for given cylinder wall and insulation materials there is an optimum auto-ignition moment which maximizes the net output work. The optimum auto

  4. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  5. Generalized perturbation theory error control within PWR core-loading pattern optimization

    International Nuclear Information System (INIS)

    Imbriani, J.S.; Turinsky, P.J.; Kropaczek, D.J.

    1995-01-01

    The fuel management optimization code FORMOSA-P has been developed to determine the family of near-optimum loading patterns for PWR reactors. The code couples the optimization technique of simulated annealing (SA) with a generalized perturbation theory (GPT) model for evaluating core physics characteristics. To ensure the accuracy of the GPT predictions, as well as to maximize the efficiency of the SA search, a GPT error control method has been developed

  6. Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers

    Science.gov (United States)

    Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz

    2017-10-01

    The paper examines the influence of installation errors of piezoelectric vibration transducers on their output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation, version 5.4.0, at frequencies in the range of 5 Hz–2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed with one setting changed relative to the original calibration. The alterations represented negligence and failures with respect to the above-mentioned standard and operating guidelines - e.g. the sensor was not tightened, or the appropriate coupling substance was not applied. The method of connection required by the standard was also modified: different kinds of wax, light oil, grease and other assembly methods were used. The aim of the study was to verify the significance of the standard's requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relation between the various appropriate methods of connection was demonstrated.

  7. 3rd World Congress on Global Optimization in Engineering & Science

    CERN Document Server

    Ruan, Ning; Xing, Wenxun; WCGO-III; Advances in Global Optimization

    2015-01-01

    This proceedings volume addresses advances in global optimization—a multidisciplinary research field that deals with the analysis, characterization, and computation of global minima and/or maxima of nonlinear, non-convex, and nonsmooth functions in continuous or discrete forms. The volume contains selected papers from the third biennial World Congress on Global Optimization in Engineering & Science (WCGO), held in the Yellow Mountains, Anhui, China on July 8-12, 2013. The papers fall into eight topical sections: mathematical programming; combinatorial optimization; duality theory; topology optimization; variational inequalities and complementarity problems; numerical optimization; stochastic models and simulation; and complex simulation and supply chain analysis.

  8. A Novel Particle Swarm Optimization Algorithm for Global Optimization.

    Science.gov (United States)

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method, which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is applied to the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
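    For reference, a bare global-best PSO, the baseline such variants extend (the neighbor-information, abandonment, and chaotic-search refinements described above are omitted); the coefficients are conventional textbook values, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=30, iters=300, lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO: inertia plus cognitive and social pulls."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pval = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                v[i][k] = (w * v[i][k]
                           + c1 * rng.random() * (pbest[i][k] - x[i][k])
                           + c2 * rng.random() * (gbest[k] - x[i][k]))
                x[i][k] = min(hi, max(lo, x[i][k] + v[i][k]))
            val = f(x[i])
            if val < pval[i]:
                pbest[i], pval[i] = x[i][:], val
                if val < gval:
                    gbest, gval = x[i][:], val
    return gbest, gval

sphere = lambda p: sum(t * t for t in p)  # standard unimodal test function
best, val = pso(sphere, dim=5)
assert val < 1e-2
```

The known weakness of this baseline, attraction of the whole swarm to a possibly premature global best, is precisely what neighbor information and abandonment mechanisms are meant to counter.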

  9. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.

  10. Vertical bifacial solar farms: Physics, design, and global optimization

    KAUST Repository

    Khan, M. Ryyan

    2017-09-04

    There has been sustained interest in bifacial solar cell technology since the 1980s, with prospects of a 30–50% increase in the output power from a stand-alone panel. Moreover, a vertical bifacial panel reduces dust accumulation and provides two output peaks during the day, with the second peak aligned to the peak electricity demand. Recent commercialization and anticipated growth of the bifacial panel market have encouraged a closer scrutiny of the integrated power output and economic viability of bifacial solar farms, where mutual shading will erode some of the anticipated energy gain associated with an isolated, single panel. Towards that goal, in this paper we focus on geography-specific optimization of ground-mounted vertical bifacial solar farms for the entire world. For local irradiance, we combine the measured meteorological data with the clear-sky model. In addition, we consider the effects of direct, diffuse, and albedo light. We assume the panel is configured into sub-strings with bypass-diodes. Based on calculated light collection and panel output, we analyze the optimum farm design for maximum yearly output at any given location in the world. Our results predict that, regardless of the geographical location, a vertical bifacial farm will yield 10–20% more energy than a traditional monofacial farm for a practical row-spacing of 2 m (corresponding to 1.2 m high panels). With the prospect of an additional 5–20% energy gain from reduced soiling and tilt optimization, bifacial solar farms do offer a viable technology option for large-scale solar energy generation.

  11. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    Science.gov (United States)

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
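
    The two-step strategy above — a stochastic global search followed by gradient-based local refinement — can be sketched as follows. This is a minimal illustration, not the authors' DOT code: SciPy's differential evolution stands in for the genetic algorithm, and the Rosenbrock function stands in for the image-reconstruction misfit.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    # Rosenbrock function, standing in for the DOT misfit functional
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Step 1: global search (differential evolution stands in for the genetic algorithm)
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
coarse = differential_evolution(objective, bounds, seed=0, maxiter=50, tol=1e-6)

# Step 2: conjugate-gradient refinement starting from the global estimate
fine = minimize(objective, coarse.x, method='CG')

print(fine.x)  # close to the true optimum (1, 1)
```

Step 1 only needs to land in the basin of the global minimum; step 2 then converges quickly, which is why the hybrid avoids the large errors the abstract reports for the CG method alone.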

  12. Acceleration techniques in the univariate Lipschitz global optimization

    Science.gov (United States)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, together with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
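
    As a concrete instance of the geometric approach, the classic Piyavskii–Shubert method builds a sawtooth lower bound from the Lipschitz constant and always evaluates the function where that bound is lowest. The sketch below is a simplified baseline (the test function and the constant L are hypothetical), not the accelerated algorithms of the paper.

```python
import numpy as np

def shubert_piyavskii(f, a, b, L, n_iter=100):
    """Minimize f on [a, b] assuming Lipschitz constant L (sawtooth method)."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        order = np.argsort(xs)
        sx = [xs[i] for i in order]
        sy = [ys[i] for i in order]
        # On each subinterval the sawtooth minorant is lowest where the two
        # Lipschitz cones intersect; evaluate f at the globally lowest bound.
        best_bound, best_x = np.inf, None
        for i in range(len(sx) - 1):
            x_new = 0.5 * (sx[i] + sx[i + 1]) + (sy[i] - sy[i + 1]) / (2 * L)
            bound = 0.5 * (sy[i] + sy[i + 1]) - 0.5 * L * (sx[i + 1] - sx[i])
            if bound < best_bound:
                best_bound, best_x = bound, x_new
        xs.append(best_x)
        ys.append(f(best_x))
    return xs[int(np.argmin(ys))], float(min(ys))

# Hypothetical multimodal test function on [-3, 3]; |f'| <= 3 + 0.6*3 < 5
f = lambda x: np.sin(3 * x) + 0.3 * x**2
x_star, f_star = shubert_piyavskii(f, -3.0, 3.0, L=5.0, n_iter=200)
print(x_star, f_star)
```

The local tuning techniques of the paper accelerate exactly this scheme by estimating the Lipschitz constant adaptively per subinterval instead of using one global L.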

  13. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement of numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  14. Error characterisation of global active and passive microwave soil moisture datasets

    Directory of Open Access Journals (Sweden)

    W. A. Dorigo

    2010-12-01

    Full Text Available Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. Triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. A prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different
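
    The core of triple collocation is that, for three products with mutually independent zero-mean errors, cross-moments of pairwise differences isolate each product's error variance. A minimal synthetic sketch (the error magnitudes are invented, and the climatology-rescaling step the abstract mentions is omitted):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Synthetic "truth" (e.g. soil moisture anomalies) and three products
# with independent zero-mean errors of known standard deviation.
truth = rng.normal(0.0, 1.0, n)
x = truth + rng.normal(0.0, 0.10, n)   # active product (ASCAT-like)
y = truth + rng.normal(0.0, 0.20, n)   # passive product (AMSR-E-like)
z = truth + rng.normal(0.0, 0.15, n)   # reanalysis product (ERA-Interim-like)

# Triple collocation: with independent errors, the cross-moments of the
# differences recover each product's error variance.
err_x = np.sqrt(np.mean((x - y) * (x - z)))
err_y = np.sqrt(np.mean((y - x) * (y - z)))
err_z = np.sqrt(np.mean((z - x) * (z - y)))
print(err_x, err_y, err_z)  # ≈ 0.10, 0.20, 0.15
```

In practice the three datasets must first be rescaled to a common climatology, and the sensor labels in the comments are only illustrative.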

  15. Global Optimization for Bus Line Timetable Setting Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2014-01-01

    Full Text Available This paper defines the bus timetable setting problem for time periods divided according to passenger flow intensity; it is assumed that passengers arrive uniformly and that bus runs are evenly spaced; the problem is to determine the assignment of bus runs to each time period so as to minimize the total waiting time of passengers on platforms, given the total number of runs. For such a multistage decision problem, this paper designs a dynamic programming algorithm to solve it. Global optimization procedures using dynamic programming are developed. A numerical example about bus run assignment optimization for a single line is given to demonstrate the efficiency of the proposed methodology, showing that optimizing buses’ departure times using dynamic programming can save computational time and find the global optimal solution.

  16. Optimal design of RTCs in digital circuit fault self-repair based on global signal optimization

    Institute of Scientific and Technical Information of China (English)

    Zhang Junbin; Cai Jinyan; Meng Yafeng

    2016-01-01

    Since digital circuits have been widely and thoroughly applied in various fields, electronic systems are increasingly complicated and require greater reliability. Faults may occur in electronic systems in complicated environments. If immediate field repairs are not made on the faults, electronic systems will not run normally, and this will lead to serious losses. The traditional method for improving system reliability based on the redundant fault-tolerant technique has been unable to meet the requirements. Therefore, on the basis of the evolvable-hardware-based and reparation-balance-technology-based electronic circuit fault self-repair strategy proposed in our preliminary work, the optimal design of rectification circuits (RTCs) in electronic circuit fault self-repair based on global signal optimization is investigated in depth in this paper. First of all, the basic theory of RTC optimal design based on global signal optimization is proposed. Secondly, relevant considerations and suitable ranges are analyzed. Then, the basic flow of RTC optimal design is researched. Eventually, a typical circuit is selected for simulation verification, and detailed simulated analysis is made of five circumstances that occur during RTC evolution. The simulation results prove that, compared with an RTC based on the conventional design method, an RTC based on the global signal optimization design method has lower hardware cost, faster circuit evolution, higher convergence precision, and a higher circuit evolution success rate. Therefore, the global signal optimization based RTC optimal design method applied in the electronic circuit fault self-repair technology is proven to be feasible, effective, and advantageous.

  17. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available Given the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers’ demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval, and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to price and system complexity; the corresponding prediction error samples are obtained from the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial Basis Function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices, and implement unconditional coverage, independence, and conditional coverage tests on the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels, and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical error estimation method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  18. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the case of pathlength error >> photometric error (trivial) and various cases in which the pathlength error and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
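
    For the baseline photometric-error-only case that this analysis builds on, the optimum is easy to recover numerically: with constant absolute transmittance noise σ_T and A = −log₁₀ T, the relative concentration error is proportional to 10^A / A, which is minimized at A = 1/ln 10 ≈ 0.4343 (T ≈ 36.8%). A short check of that classical result:

```python
import numpy as np

# Relative concentration error for constant absolute transmittance
# (photometric) noise sigma_T: since A = -log10(T) and c is proportional
# to A, dc/c = 0.4343 * sigma_T * 10**A / A.
A = np.linspace(0.05, 2.0, 10_000)
rel_err = 10**A / A          # sigma_T and the 0.4343 factor drop out of the argmin

A_opt = A[np.argmin(rel_err)]
print(A_opt)  # ≈ 0.4343 = 1/ln(10), i.e. T ≈ 36.8%
```

When pathlength error dominates instead, the error in A is proportional to A itself, so dc/c is constant and no such optimum exists — the "trivial" case the abstract notes — while the mixed cases shift the optimum away from 0.4343.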

  19. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    Science.gov (United States)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.

  20. Design and optimization of G-band extended interaction klystron with high output power

    Science.gov (United States)

    Li, Renjie; Ruan, Cunjun; Zhang, Huafeng

    2018-03-01

    A ladder-type Extended Interaction Klystron (EIK) with unequal-length slots in the G-band is proposed and designed. The key parameters of resonance cavities working in the π mode are obtained based on the theoretical analysis and 3D simulation. The influence of the device fabrication tolerance on the high-frequency performance is analyzed in detail, and it is found that at least 5 μm of machining precision is required. Thus, the dynamic tuning is required to compensate for the frequency shift and increase the bandwidth. The input and output coupling hole dimensions are carefully designed to achieve high output power along with a broad bandwidth. The effect of surface roughness of the metallic material on the output power has been investigated, and it is proposed that lower surface roughness leads to higher output power. The focusing magnetic field is also optimized to 0.75 T in order to maintain the beam transportation and achieve high output power. With 16.5 kV operating voltage and 0.30 A beam current, the output power of 360 W, the efficiency of 7.27%, the gain of 38.6 dB, and the 3 dB bandwidth of 500 MHz are predicted. The output properties of the EIK show great stability with the effective suppression of oscillation and mode competition. Moreover, small-signal theory analysis and 1D code AJDISK calculations are carried out to verify the results of 3D PIC simulations. A close agreement among the three methods proves the relative validity and the reliability of the designed EIK. Thus, it is indicated that the EIK with unequal-length slots has potential for power improvement and bandwidth extension.

  1. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    Science.gov (United States)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point tn a new step-size hn+1 = h(tn; δ) so that h(t; δ) is a continuous function of t. In this paper, a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
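
    Tolerance proportionality is easy to observe empirically: re-running an adaptive Runge-Kutta code at successively tighter tolerances shrinks the global error roughly as a power of the tolerance. A sketch using SciPy's RK45 (a stand-in for the codes studied in the paper) on a problem with a known solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Test IVP with known solution: y' = -y, y(0) = 1, so y(t) = exp(-t).
def f(t, y):
    return -y

tols = [1e-4, 1e-6, 1e-8]
errors = []
for tol in tols:
    sol = solve_ivp(f, (0.0, 5.0), [1.0], method='RK45', rtol=tol, atol=tol)
    # Global error at the endpoint against the exact solution
    errors.append(abs(sol.y[0, -1] - np.exp(-5.0)))

print(errors)  # global error shrinks as the tolerance is tightened
```

Fitting log(error) against log(tol) for such runs exposes the rational power of δ that the asymptotic theory predicts.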

  2. Estimating the approximation error when fixing unessential factors in global sensitivity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sobol', I.M. [Institute for Mathematical Modelling of the Russian Academy of Sciences, Moscow (Russian Federation); Tarantola, S. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: debora.gatelli@jrc.it; Kucherenko, S.S. [Imperial College London (United Kingdom); Mauntz, W. [Department of Biochemical and Chemical Engineering, Dortmund University (Germany)

    2007-07-15

    One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential factors. A simple function where analytical solutions are available is used to illustrate the theorem. The numerical estimation of small sensitivity indices is discussed.
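
    The quantity such a theorem controls can be checked by brute-force Monte Carlo on a toy model: the mean squared error from fixing a factor at a random value, normalized by twice the output variance, estimates that factor's total sensitivity index. A sketch with an invented two-factor function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def f(x1, x2):
    # Invented test function: x2 is nearly non-influential
    return x1 + 0.3 * x2

x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
x2_fixed = rng.uniform(0, 1, n)  # independent resample of the factor being fixed

# Average squared approximation error from fixing x2 at a random value,
# normalized by 2*Var(f): estimates the total sensitivity index of x2.
delta = np.mean((f(x1, x2) - f(x1, x2_fixed))**2) / (2 * np.var(f(x1, x2)))
print(delta)  # ≈ 0.09 / 1.09 ≈ 0.083, the analytical total index of x2
```

For this additive function the total index of x2 is (0.3²/12)/((1 + 0.3²)/12) = 0.09/1.09, so a small delta certifies that fixing x2 produces only a small approximation error.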

  3. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  4. A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2015-02-01

    Full Text Available Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still deficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-Learning-Based Optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.

  5. Globally Asymptotic Stability of Stochastic Nonlinear Systems with Time-Varying Delays via Output Feedback Control

    Directory of Open Access Journals (Sweden)

    Mingzhu Song

    2016-01-01

    Full Text Available We address the problem of global asymptotic stability for a class of stochastic nonlinear systems with time-varying delays. By the backstepping method and Lyapunov theory, we design a linear output feedback controller recursively, based on the observable linearization, for a class of stochastic nonlinear systems with time-varying delays to guarantee that the closed-loop system is globally asymptotically stable in probability. In particular, we extend results for deterministic nonlinear systems to stochastic nonlinear systems with time-varying delays. Finally, an example and its simulations are given to illustrate the theoretical results.

  6. Parallel Global Optimization with the Particle Swarm Algorithm (Preprint)

    National Research Council Canada - National Science Library

    Schutte, J. F; Reinbolt, J. A; Fregly, B. J; Haftka, R. T; George, A. D

    2004-01-01

    .... To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm...
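
    The serial kernel being parallelized is the standard PSO update: each particle moves under inertia plus random attraction toward its personal best and the swarm best. A minimal serial sketch (the coefficients and test function are hypothetical; in the coarse-grained parallel version, the fitness evaluations of each generation are what get distributed across processors):

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Simple convex test objective
    return np.sum(x**2, axis=-1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, and social weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)                      # the step a parallel PSO farms out
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(sphere(gbest))  # near 0
```

Because each particle's evaluation is independent within a generation, the loop body parallelizes with almost no communication beyond the gbest reduction.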

  7. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving the accuracy of a machine tool by increasing the precision of key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it appropriate to relax the range of tolerance values, so as to reduce the manufacturing cost of machine tools.

  8. Global optimization of silicon nanowires for efficient parametric processes

    DEFF Research Database (Denmark)

    Vukovic, Dragana; Xu, Jing; Mørk, Jesper

    2013-01-01

    We present a global optimization of silicon nanowires for parametric single-pump mixing. For the first time, the effect of surface roughness-induced loss is included in the analysis, significantly influencing the optimum waveguide dimensions.

  9. Competing intelligent search agents in global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Streltsov, S.; Vakili, P. [Boston Univ., MA (United States); Muchnik, I. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    In this paper we present a new search methodology that we view as a development of the intelligent agent approach to the analysis of complex systems. The main idea is to consider the search process as a competition mechanism between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple and, on average, efficient index allocation policies. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.

  10. An Optimized Grey Dynamic Model for Forecasting the Output of High-Tech Industry in China

    Directory of Open Access Journals (Sweden)

    Zheng-Xin Wang

    2014-01-01

    Full Text Available The grey dynamic model by convolution integral with the first-order derivative of the 1-AGO data and n related series, abbreviated as GDMC(1,n), performs well in the modelling and forecasting of a grey system. To improve the modelling accuracy of GDMC(1,n), n interpolation coefficients (taken as unknown parameters) are introduced into the background values of the n variables. The parameter optimization is formulated as a combinatorial optimization problem and is solved collectively using the particle swarm optimization algorithm. The optimized result has been verified by a case study of the economic output of the high-tech industry in China. Comparisons of the modelling results obtained from the optimized GDMC(1,n) model with those from the traditional one demonstrate that the optimal algorithm is a good alternative for parameter optimization of the GDMC(1,n) model. The modelling results can assist the government in developing future policies regarding high-tech industry management.
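
    For readers unfamiliar with the grey-model family, the single-variable GM(1,1) below is the building block that GDMC(1,n) generalizes by adding the n related series through a convolution integral. The data series is invented for illustration:

```python
import numpy as np

# Basic single-variable grey model GM(1,1); hypothetical output series.
x0 = np.array([100.0, 110.0, 122.0, 135.0, 149.0])

x1 = np.cumsum(x0)                      # 1-AGO (accumulated generating operation)
z1 = 0.5 * (x1[1:] + x1[:-1])           # background values

# Least squares for the development coefficient a and grey input b:
# x0(k) = -a * z1(k) + b
B = np.column_stack([-z1, np.ones(len(z1))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# Time response: x1_hat(k) = (x0[0] - b/a) * exp(-a*k) + b/a
k = np.arange(len(x0) + 1)
x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
x0_hat = np.diff(x1_hat)                # inverse AGO restores the series
print(x0_hat)  # fitted values plus a one-step-ahead forecast
```

The interpolation coefficients of the paper generalize the fixed 0.5 weight in the background values z1, which is exactly the parameter set tuned there by particle swarm optimization.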

  11. The Sizing and Optimization Language, (SOL): Computer language for design problems

    Science.gov (United States)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  12. Optimal Output of Distributed Generation Based On Complex Power Increment

    Science.gov (United States)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power and photovoltaic power, has been widely adopted. New energy sources are connected to the distribution network in the form of distributed generation and are consumed by local loads. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady-state power flow. From this analysis we obtain the complex scaling function equation between the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  13. Globally convergent optimization algorithm using conservative convex separable diagonal quadratic approximations

    NARCIS (Netherlands)

    Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.

    2009-01-01

    We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by

  14. Multiple-copy state discrimination: Thinking globally, acting locally

    International Nuclear Information System (INIS)

    Higgins, B. L.; Pryde, G. J.; Wiseman, H. M.; Doherty, A. C.; Bartlett, S. D.

    2011-01-01

    We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
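
    The collective-versus-local gap is simple to compute in the pure-state case. For N copies with single-copy overlap s, the optimal collective (Helstrom) measurement uses the N-copy overlap s^N, while the naive fixed local scheme measures each copy optimally and takes a majority vote (the likelihood-optimal way to combine identical independent binary outcomes at equal priors). A sketch with hypothetical values of s and N; the mixed-state analysis of the paper is more involved:

```python
from math import comb, sqrt

s = 0.9   # overlap |<psi0|psi1>| of the two pure states (hypothetical)
N = 11    # number of copies (odd, so majority voting has no ties)

# Optimal collective (Helstrom) measurement on all N copies at once:
# the effective overlap of the N-copy states is s**N.
p_collective = 0.5 * (1 - sqrt(1 - s**(2 * N)))

# Naive local scheme: optimal single-copy measurement on each copy
# independently, followed by a majority vote over the N outcomes.
p1 = 0.5 * (1 - sqrt(1 - s**2))
p_majority = sum(comb(N, k) * p1**k * (1 - p1)**(N - k)
                 for k in range(N // 2 + 1, N + 1))

print(p_collective, p_majority)  # collective error is the smaller of the two
```

Both errors decay exponentially in N, but with different rates, which is why the asymptotic comparison between local and collective schemes is the interesting regime.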

  15. Interactive Cosegmentation Using Global and Local Energy Optimization

    OpenAIRE

    Xingping Dong,; Jianbing Shen,; Shao, Ling; Yang, Ming-Hsuan

    2015-01-01

    We propose a novel interactive cosegmentation method using global and local energy optimization. The global energy includes two terms: 1) the global scribbled energy and 2) the interimage energy. The first one utilizes the user scribbles to build the Gaussian mixture model and improve the cosegmentation performance. The second one is a global constraint, which attempts to match the histograms of common objects. To minimize the local energy, we apply the spline regression to learn the smoothne...

  16. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    International Nuclear Information System (INIS)

    Vidal-Codina, F.; Nguyen, N.C.; Giles, M.B.; Peraire, J.

    2015-01-01

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method

  17. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

This paper incorporates the effects of four types of human error into a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, to study the benefits of annunciating the standby system, and to determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented

  18. Optimal Velocity to Achieve Maximum Power Output – Bench Press for Trained Footballers

    Directory of Open Access Journals (Sweden)

    Richard Billich

    2015-03-01

Full Text Available In today's world of strength training there are many myths surrounding effective exercising with the least possible negative effect on one's health. In this experiment we focus on finding a relationship between maximum output, the load used and the velocity with which the exercise is performed. The main objective is to find the optimal speed of the exercise motion which would allow us to reach the maximum mechanical muscle output during a bench press exercise. This information could be beneficial to sporting coaches and recreational sportsmen alike in helping them improve the effectiveness of fast strength training. Fifteen football players of the FK Třinec football club participated in the experiment. The measurements were made with the use of 3D kinematic and dynamic analysis, both experimental methods. The research subjects participated in a strength test, in which the mechanical muscle output was measured at loads of 0, 10, 30, 50, 70 and 90% of one repetition maximum (1RM). The acquired result values and other required data were processed using Qualisys Track Manager and Visual 3D software (C-motion, Rockville, MD, USA). During the bench press exercise the maximum mechanical muscle output of the set of research subjects was reached at 75% of maximum exercise motion velocity.

  19. Ring rolling process simulation for geometry optimization

    Science.gov (United States)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters which minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
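The RSM-then-optimize loop described above can be sketched in miniature: fit a full quadratic response surface to sampled input/output points by least squares, then minimize the surrogate. The design ranges, the synthetic error function standing in for the FEM runs, and the grid search standing in for the Genetic Algorithm are all assumptions for illustration.

```python
import numpy as np

# Hypothetical training data: process inputs x = (mandrel feed rate, main-roll
# speed) and an output error measured at design points (stand-ins for FEM runs).
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 1.0], [2.0, 4.0], size=(30, 2))
true_err = lambda x: (x[..., 0] - 1.2) ** 2 + 0.5 * (x[..., 1] - 2.5) ** 2
y = true_err(X) + rng.normal(0, 0.01, 30)

# Quadratic response surface: least-squares fit of a full 2nd-order polynomial.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Minimize the surrogate on a dense grid (a GA would be used in larger spaces).
g1, g2 = np.meshgrid(np.linspace(0.5, 2, 200), np.linspace(1, 4, 200))
G = np.column_stack([g1.ravel(), g2.ravel()])
pred = features(G) @ coef
best = G[np.argmin(pred)]
# `best` lands near the true optimum of the synthetic error, (1.2, 2.5).
```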

  20. Quaternion error-based optimal control applied to pinpoint landing

    Science.gov (United States)

    Ghiglino, Pablo

Accurate control techniques for pinpoint planetary landing, i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions, constitute a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well known trade-off in UAV control that for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the results are two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune online the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time required and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers are capable of achieving landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.

  1. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...

  2. Quantifying global fossil-fuel CO2 emissions: from OCO-2 to optimal observing designs

    Science.gov (United States)

    Ye, X.; Lauvaux, T.; Kort, E. A.; Oda, T.; Feng, S.; Lin, J. C.; Yang, E. G.; Wu, D.; Kuze, A.; Suto, H.; Eldering, A.

    2017-12-01

Cities house more than half of the world's population and are responsible for more than 70% of global anthropogenic CO2 emissions. Quantifying emissions from major cities, which number fewer than a hundred intensely emitting spots across the globe, should therefore allow us to monitor changes in global fossil-fuel CO2 emissions in an independent, objective way. Satellite platforms provide favorable temporal and spatial coverage to collect urban CO2 data to quantify the anthropogenic contributions to the global carbon budget. We present here the optimal observation design for NASA's OCO-2 and Japan's GOSAT missions, based on real-data (i.e., OCO-2) experiments and Observing System Simulation Experiments (OSSEs) to address different error components in the urban CO2 budget calculation. We identify the major sources of emission uncertainties for various types of cities with different ecosystems and geographical features, such as urban plumes over flat terrain, accumulated enhancements within basins, and complex weather regimes in coastal areas. Atmospheric transport errors were characterized under various meteorological conditions using the Weather Research and Forecasting (WRF) model at 1-km spatial resolution, coupled to the Open-source Data Inventory for Anthropogenic CO2 (ODIAC) emissions. We propose and discuss optimized urban sampling strategies that address difficulties arising from seasonality in cloud cover and emissions and from vegetation density in and around cities, and that address the daytime sampling bias using prescribed diurnal cycles. These factors are combined in pseudo-data experiments in which we evaluate the relative impact of uncertainties on inverse estimates of CO2 emissions for cities across latitudinal and climatological zones. 
We propose here several sampling strategies to minimize the uncertainties in target mode for tracking urban fossil-fuel CO2 emissions over the globe for future satellite missions, such as OCO-3 and future

  3. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system

  4. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar

    2016-06-01

Full Text Available A Brayton heat engine model is developed in the MATLAB Simulink environment and thermodynamic optimization based on finite-time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto optimal frontier between triple and dual objectives is obtained and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision making methods. The triple-objective evolutionary approach applied to the proposed model gives power output, thermal efficiency and ecological function as (53.89 kW, 0.1611, −142 kW), which are 29.78%, 25.86% and 21.13% lower in comparison with the reversible system. Furthermore, the present study reflects the effect of various heat capacitance rates and component efficiencies on the triple objectives in graphical form. Finally, with the aim of error investigation, average and maximum errors of the obtained results are computed.
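A minimal sketch of one of the decision-making steps mentioned above: TOPSIS selection of a compromise point from a Pareto front. The front values below are invented for illustration and are not the paper's data.

```python
import numpy as np

def topsis(F, benefit):
    """Rank alternatives (rows of F) by closeness to the ideal point.
    benefit[j] is True if objective j is to be maximized."""
    # Vector-normalize each objective column.
    V = F / np.linalg.norm(F, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal point
    d_neg = np.linalg.norm(V - nadir, axis=1)   # distance to worst point
    return d_neg / (d_pos + d_neg)              # higher is better

# Illustrative Pareto front: (power output [kW], thermal efficiency),
# both to be maximized.
front = np.array([[40.0, 0.20], [50.0, 0.17], [54.0, 0.16], [56.0, 0.12]])
scores = topsis(front, benefit=np.array([True, True]))
best = front[np.argmax(scores)]
# TOPSIS picks an interior compromise rather than either extreme of the front.
```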

  5. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  6. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  7. Optimal estimation of regional N2O emissions using a three-dimensional global model

    Science.gov (United States)

    Huang, J.; Golombek, A.; Prinn, R.

    2004-12-01

In this study, we use the MATCH (Model of Atmospheric Transport and Chemistry) model and Kalman filtering techniques to optimally estimate N2O emissions from seven source regions around the globe. The MATCH model was used with NCEP assimilated winds at T62 resolution (192 longitude by 94 latitude surface grid, and 28 vertical levels) from July 1st 1996 to December 31st 2000. The average concentrations of N2O in the lowest four layers of the model were then compared with the monthly mean observations from six national/global networks (AGAGE, CMDL (HATS), CMDL (CCGG), CSIRO, CSIR and NIES), at 48 surface sites. A 12-month-running-mean smoother was applied to both the model results and the observations, because the model was not able to reproduce the very small observed seasonal variations. The Kalman filter was then used to solve for the time-averaged regional emissions of N2O for January 1st 1997 to June 30th 2000. The inversions assume that the model stratospheric destruction rates, which lead to a global N2O lifetime of 130 years, are correct. They also assume normalized emission spatial distributions for each region based on previous studies. We conclude that the global N2O emission flux is about 16.2 TgN/yr, with 34.9±1.7% from South America and Africa, 34.6±1.5% from South Asia, 13.9±1.5% from China/Japan/South East Asia, 8.0±1.9% from all oceans, 6.4±1.1% from North America and North and West Asia, 2.6±0.4% from Europe, and 0.9±0.7% from New Zealand and Australia. The errors here include the measurement standard deviation, calibration differences among the six groups, grid volume/measurement site mismatch errors estimated from the model, and a procedure to account approximately for the modeling errors.
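A toy version of the optimal-estimation step above can be written as a single Kalman (Bayesian least-squares) update of regional fluxes from site observations. The sensitivity matrix, noise levels, and prior below are hypothetical stand-ins for the MATCH-model quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_sites = 7, 48

# Hypothetical sensitivity (Jacobian) of site concentrations to regional
# emissions, as a transport model like MATCH would supply it.
H = rng.uniform(0.1, 1.0, size=(n_sites, n_regions))
x_true = rng.uniform(0.5, 4.0, n_regions)          # "true" regional fluxes
R = (0.05 ** 2) * np.eye(n_sites)                  # observation error covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(n_sites), R)

# Kalman update starting from a weak prior (large prior covariance P0).
x0 = np.full(n_regions, 2.0)
P0 = 10.0 * np.eye(n_regions)
S = H @ P0 @ H.T + R                               # innovation covariance
K = P0 @ H.T @ np.linalg.inv(S)                    # Kalman gain
x_hat = x0 + K @ (y - H @ x0)                      # posterior flux estimate
P_hat = (np.eye(n_regions) - K @ H) @ P0           # posterior covariance
```

The posterior covariance `P_hat` plays the role of the per-region uncertainty bars quoted in the abstract.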

  8. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  9. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
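The error-propagation idea in the two records above can be sketched by pushing a ±10% wind-speed error through a turbine power curve. The generic cubic-to-rated curve below is an assumption for illustration, not one of the 28 fitted curves.

```python
import numpy as np

def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2000.0):
    """Generic turbine power curve [kW]: cubic ramp between cut-in and rated."""
    v = np.asarray(v, dtype=float)
    p = np.where(
        (v >= v_cut_in) & (v < v_rated),
        p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3),
        0.0,
    )
    return np.where((v >= v_rated) & (v <= v_cut_out), p_rated, p)

# Propagate a +/-10% wind-speed measurement error through the curve.
v_obs = np.array([5.0, 8.0, 11.0])          # observed speeds in the cubic region
rel_speed_err = 0.10
p_nominal = power_curve(v_obs)
p_high = power_curve(v_obs * (1 + rel_speed_err))
p_low = power_curve(v_obs * (1 - rel_speed_err))
rel_power_err = (p_high - p_low) / (2 * p_nominal)
# In the cubic region a speed error is amplified roughly three- to four-fold
# in power; averaging over a full speed distribution damps this, which is why
# the study reports a smaller net error.
```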

  10. An optimization-based framework for anisotropic simplex mesh adaptation

    Science.gov (United States)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem for the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  11. Correcting errors in a quantum gate with pushed ions via optimal control

    DEFF Research Database (Denmark)

    Poulsen, Uffe Vestergaard; Sklarz, Shlomo; Tannor, David

    2010-01-01

We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  12. Test program for NIS calibration to reactor thermal output in HTTR

    International Nuclear Information System (INIS)

    Nakagawa, Shigeaki; Shinozaki, Masayuki; Tachibana, Yukio; Kunitomi, Kazuhiko

    2000-03-01

A rise-to-power test program for reactor thermal output measurement has been established to calibrate the neutron instrumentation system, taking into account the characteristics of the High Temperature Engineering Test Reactor (HTTR). The error of the reactor thermal output measurement was evaluated taking into account the configuration of the instrumentation system, and the expected dispersion of the measurement at full power operation was evaluated from non-nuclear heat-up of the primary coolant up to 213 °C. From the evaluation, it was found that the error of the reactor thermal output measurement would be less than ±2.0% at the rated power. This report presents the detailed program of the rise-to-power test for reactor thermal output measurement and discusses its measurement error. (author)

  13. Reference-shaping adaptive control by using gradient descent optimizers.

    Directory of Open Access Journals (Sweden)

    Baris Baykant Alagoz

Full Text Available This study presents a model reference adaptive control scheme based on a reference-shaping approach. The proposed adaptive control structure includes two optimizer processes that perform gradient descent optimization. The first is the control optimizer, which generates the appropriate control signal for tracking of the controlled system output to a reference model output. The second is the adaptation optimizer, which estimates a time-varying adaptation gain and thereby contributes to improved control signal generation. Numerical update equations derived for the adaptation gain and control signal perform gradient descent optimization in order to decrease the model mismatch errors. To reduce the noise sensitivity of the system, a dead zone rule is applied to the adaptation process. Simulation examples show the performance of the proposed Reference-Shaping Adaptive Control (RSAC) method for several test scenarios. An experimental study demonstrates application of the method for rotor control.
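A minimal sketch of gradient-descent adaptation of a single feedforward gain toward a reference model (an MIT-rule-style toy with hypothetical plant and step-size values; the paper's scheme additionally shapes the reference and applies a dead zone, which are omitted here):

```python
import numpy as np

# Plant: y[k+1] = a*y[k] + b*u[k], with input gain b unknown to the controller.
# Reference model: ym[k+1] = a*ym[k] + bm*r[k].
a, b, bm = 0.9, 2.0, 1.0
gain = 0.02                  # gradient-descent step size (hypothetical)
theta = 0.0                  # adaptive feedforward gain, u = theta * r
y = ym = 0.0
r_signal = np.sign(np.sin(0.05 * np.arange(2000)))   # square-wave reference

errors = []
for r in r_signal:
    u = theta * r
    e = y - ym                          # model-mismatch (tracking) error
    # The gradient of e^2/2 w.r.t. theta involves the unknown b; use r as a
    # sign-correct sensitivity surrogate (the usual MIT-rule simplification).
    theta -= gain * e * r
    y = a * y + b * u
    ym = a * ym + bm * r
    errors.append(abs(e))
# Adaptation drives theta toward bm/b = 0.5, after which the plant output
# tracks the reference model exactly.
```

Too large a step size makes this loop oscillate rather than converge, which is one motivation for adapting the gain online as the paper does.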

  14. Optimization of the linear induction accelerator construction for maximizing the bremsstrahlung output

    Energy Technology Data Exchange (ETDEWEB)

    Zinchenko, V F; Tulisov, E V; Chlenov, A M; Shiyan, V D [Research Institute of Scientific Instruments, Turaevo-Lytkarino (Russian Federation)

    1997-12-31

The results of experimental and theoretical optimization of the linear induction accelerator (LIA) design are presented. The major aim of the investigations was to maximize the bremsstrahlung output near the target face. The work was carried out in two stages: 1) modernization of the injector module and 2) focusing of the relativistic electron beam (REB) produced at the exit of the accelerating system (AS) in the increasing axial magnetic field. In addition, methods for diagnostics of the angular and energy parameters of the REB based on measurements of radiation dose fields behind the target are described. (author). 2 figs., 4 refs.

  15. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.

  16. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find that the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of the raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
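The Box-Cox residual-error idea evaluated above can be sketched directly: transform simulated and observed flows with λ = 0.2 and compute residuals in the transformed space, where the dependence of residual magnitude on flow weakens. The synthetic flow and error distributions below are assumptions for illustration, not the study's catchment data.

```python
import numpy as np

def box_cox(q, lam, offset=0.0):
    """Box-Cox transform; lam = 0 reduces to the log scheme."""
    q = np.asarray(q, dtype=float) + offset
    if lam == 0:
        return np.log(q)
    return (q**lam - 1.0) / lam

rng = np.random.default_rng(2)
q_sim = rng.lognormal(mean=1.0, sigma=1.0, size=5000)        # simulated flows
# Synthetic "observations" with multiplicative (heteroscedastic) error:
q_obs = q_sim * rng.lognormal(mean=0.0, sigma=0.3, size=5000)

raw = q_obs - q_sim                                # heteroscedastic residuals
transformed = box_cox(q_obs, 0.2) - box_cox(q_sim, 0.2)

# Correlation between residual magnitude and flow drops after transforming,
# i.e. the transformed residuals are closer to homoscedastic.
het_raw = np.corrcoef(np.abs(raw), q_sim)[0, 1]
het_tr = np.corrcoef(np.abs(transformed), q_sim)[0, 1]
```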

  17. Toward a more robust variance-based global sensitivity analysis of model outputs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C

    2007-10-15

Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs over their ranges. In this paper we consider variance-based GSA methods that do not rely on assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
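The Sobol'-type first-order indices discussed above can be estimated with a standard pick-freeze Monte Carlo scheme (this is the generic estimator, not the paper's modified McKay method; the additive test model is chosen so the exact indices are known):

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for f with d independent U(0,1) inputs."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]                       # freeze input i at A's values
        yC = f(C)
        S[i] = np.mean(yA * (yC - yB)) / var    # estimates V_i / Var(Y)
    return S

# Additive test model Y = X1 + 2*X2: exact indices are S1 = 0.2, S2 = 0.8.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

For a purely additive model the first-order indices sum to one; interactions would show up as a shortfall in that sum.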

  18. A Novel Hybrid Firefly Algorithm for Global Optimization.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.
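
    A greatly simplified sketch of the FA/DE hybridization idea follows; it is not the authors' HFA, and all parameter values (step sizes, crossover rate, population size) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Unimodal benchmark; global minimum 0 at the origin."""
    return float(np.sum(x * x))

def hybrid_fa_de(f, dim=5, pop=30, iters=200, bound=5.0):
    """Simplified FA/DE hybrid: half the moves are firefly-style attraction
    toward the current best, half are DE/rand/1/bin mutations; both halves
    share one population (the 'information sharing' idea in the abstract)."""
    X = rng.uniform(-bound, bound, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        best = X[fit.argmin()].copy()
        for k in range(pop):
            if k % 2 == 0:  # firefly-style: attraction plus small random step
                trial = X[k] + 0.7 * (best - X[k]) + 0.1 * rng.normal(size=dim)
            else:           # DE/rand/1 mutation with binomial crossover
                r1, r2, r3 = X[rng.choice(pop, 3, replace=False)]
                mutant = r1 + 0.8 * (r2 - r3)
                trial = np.where(rng.random(dim) < 0.9, mutant, X[k])
            trial = np.clip(trial, -bound, bound)
            ft = f(trial)
            if ft < fit[k]:      # greedy selection
                X[k], fit[k] = trial, ft
    return fit.min()

best_val = hybrid_fa_de(sphere)
```

    On the sphere function both move types keep improving the shared population, which is the cooperative effect the abstract describes.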

  19. Correcting errors in a quantum gate with pushed ions via optimal control

    International Nuclear Information System (INIS)

    Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  20. Scientific output quality of 40 globally top-ranked medical researchers in the field of osteoporosis.

    Science.gov (United States)

    Pluskiewicz, W; Drozdzowska, B; Adamczyk, P; Noga, K

    2018-03-26

    The study presents the research output of 40 globally top-ranked authors publishing in the field of osteoporosis. Their h-index is compared with the Scientific Quality Index (SQI), a novel indicator. Using SQI, 92.5% of the authors changed their initial positions in the general ranking. SQI partially depends on bibliometric measures different from those influencing the h-index and may be considered an assessment tool reflecting more objective, qualitative, rather than quantitative, features of individual scientific output. The study examines the research output of 40 globally top-ranked authors in the field of osteoporosis. The assessed authors were identified in the Scopus database, using the key word "osteoporosis" and the h-index data collected during the last decade (2008-2017). The data concerning the scientific output, expressed by the h-index, were compared with a novel indicator of scientific quality called the Scientific Quality Index (SQI). SQI is calculated according to the following formula: Parameter No. 1 + Parameter No. 2, where Parameter No. 1 (the percent of papers cited ≥ 10 times) is the number of papers cited ≥ 10 times (excluding self-citations and citations of all co-authors) divided by the number of all published papers (including papers with no citation) × 100%, and Parameter No. 2 (the mean number of citations per paper) is the total number of citations (excluding self-citations and citations of all co-authors) divided by the number of all published papers (including papers with no citation). The following research output values were obtained: citation index, 2483.6 ± 1348.7; total number of papers, 75.1 ± 23.2; total number of cited papers, 69.3 ± 22.0; number of papers cited at least 10 times, 45.4 ± 17.2; percent of papers cited at least 10 times, 59.9 ± 10.0; and mean citations per paper, 32.8 ± 15.0. The mean value of the Hirsch index was 24.2 ± 6.2 and SQI
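
    The SQI formula above can be computed directly; a sketch with a hypothetical five-paper author:

```python
def sqi(citations):
    """Scientific Quality Index per the abstract's formula:
    (percent of papers cited >= 10 times) + (mean citations per paper).
    `citations` lists the self-citation-excluded citation count of every
    published paper, including uncited ones."""
    n = len(citations)
    pct_ge10 = 100.0 * sum(c >= 10 for c in citations) / n
    mean_cites = sum(citations) / n
    return pct_ge10 + mean_cites

# hypothetical author: 3 of 5 papers cited >= 10 times (60%), mean 20 citations
value = sqi([0, 4, 12, 30, 54])   # 60 + 20 = 80
```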

  1. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments-the IMU-to-segment calibrations, subsequently called I2S calibrations-to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  2. Quadcopter Path Following Control Design Using Output Feedback with Command Generator Tracker LOS Based At Square Path

    Science.gov (United States)

    Nugraha, A. T.; Agustinah, T.

    2018-01-01

    The quadcopter is an unstable, underactuated, and nonlinear system, and quadcopter control has become an important focus of research. In this study, a path-following control method for position on the X and Y axes, using the Command Generator Tracker (CGT) structure, is tested. Quadcopter attitude and position control are compared using optimal output feedback. H∞ performance is added to the optimal output feedback control to maintain the stability and robustness of the quadcopter. Iterative numerical Linear Matrix Inequality (LMI) techniques are used to find the controller gain. The path-following control problem is solved using LQ regulators with output feedback. Simulations show that the control system can follow paths defined in the form of a square-shaped reference signal, and the simulation results suggest that the method can bring the yaw angle to the expected value. The quadcopter can automatically follow the path with mean cross-track errors of X = 0.5 m and Y = 0.2 m.

  3. PARAMETER ESTIMATION OF VALVE STICTION USING ANT COLONY OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    S. Kalaivani

    2012-07-01

    Full Text Available In this paper, a procedure for quantifying valve stiction in control loops based on ant colony optimization is proposed. Pneumatic control valves are widely used in the process industry. The control valve contains nonlinearities such as stiction, backlash, and deadband that in turn cause oscillations in the process output. Stiction is one of the long-standing problems and the most severe problem in control valves. Thus the measurement data from an oscillating control loop can be used as a possible diagnostic signal to provide an estimate of the stiction magnitude. Quantification of control valve stiction is still a challenging issue. Prior to stiction detection and quantification, it is necessary to choose a suitable model structure to describe control-valve stiction. To understand the stiction phenomenon, the Stenman model is used. Ant Colony Optimization (ACO), an intelligent swarm algorithm, has proven effective in various fields. The ACO algorithm is inspired by the natural trail-following behaviour of ants. The parameters of the Stenman model are estimated using ant colony optimization, from the input-output data, by minimizing the error between the actual stiction model output and the simulated stiction model output. Using ant colony optimization, the Stenman model with known nonlinear structure and unknown parameters can be estimated.
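
    A minimal sketch of the estimation idea, using the one-parameter Stenman stiction model; a grid search stands in here for the ant colony optimizer, and all signals are synthetic:

```python
import numpy as np

def stenman(u, d, x0=0.0):
    """One-parameter Stenman stiction model: the valve stem x only moves
    (jumping to the controller output u) when |u - x| exceeds the band d."""
    x, out = x0, []
    for uk in u:
        if abs(uk - x) > d:
            x = uk
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
u = np.cumsum(rng.normal(size=400))   # synthetic controller output
y = stenman(u, d=0.8)                 # "measured" valve position, true d = 0.8

# Estimate d by minimizing the output error between measured and simulated
# valve positions (the same objective the ACO-based procedure minimizes).
grid = np.arange(0.0, 2.0, 0.05)
errs = [np.mean((stenman(u, d) - y) ** 2) for d in grid]
d_hat = grid[int(np.argmin(errs))]
```

    The error is zero at the true band, so the coarse grid already recovers d; ACO searches the same error surface without enumerating it.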

  4. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T

    2005-01-01

    The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...

  5. Theory and Algorithms for Global/Local Design Optimization

    National Research Council Canada - National Science Library

    Haftka, Raphael T

    2004-01-01

    ... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...

  6. Modeling and optimization of a novel two-axis mirror-scanning mechanism driven by piezoelectric actuators

    International Nuclear Information System (INIS)

    Jing, Zijian; Xu, Minglong; Feng, Bo

    2015-01-01

    Mirror-scanning mechanisms are a key component in optical systems for diverse applications. However, the applications of existing piezoelectric scanners are limited due to their small angular travels. To overcome this problem, a novel two-axis mirror-scanning mechanism, which consists of a two-axis tip-tilt flexure mechanism and a set of piezoelectric actuators, is proposed in this paper. The focus of this research is on the design, theoretical modeling, and optimization of the piezoelectric-driven mechanism, with the goal of achieving large angular travels in a compact size. The design of the two-axis tip-tilt flexure mechanism is based on two nonuniform beams, which translate the limited linear output displacements of the piezoelectric actuators into large output angles. To exactly predict the angular travels, we built a voltage-angle model that characterizes the relationship between the input voltages to the piezoelectric actuators and the output angles of the piezoelectric-driven mechanism. Using this analytical model, the optimization is performed to improve the angular travels. A prototype of the mirror-scanning mechanism is fabricated based on the optimization results, and experiments are implemented to test the two-axis output angles. The experimental result shows that the angular travels of the scanner achieve more than 50 mrad, and the error between the analytical model and the experiment is about 11%. This error is much smaller than the error for the model built using the previous method because the influence of the stiffness of the mechanical structure on the deformation of the piezoelectric stack is considered in the voltage-angle model. (paper)

  7. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain

    Science.gov (United States)

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization. PMID:28873432

  8. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain.

    Science.gov (United States)

    Xing, Lizhi

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.

  9. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain.

    Directory of Open Access Journals (Sweden)

    Lizhi Xing

    Full Text Available The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.
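
    Two of the three indicators named above, unit weight and weight disparity, have standard definitions for weighted networks; a minimal sketch with a hypothetical 3-sector weight matrix (the weighted clustering coefficient is omitted for brevity):

```python
import numpy as np

def node_measures(W, i):
    """Unit weight (strength over degree) and disparity of node i's weights
    in a symmetric weighted network W (self-loops would sit on the diagonal)."""
    w = W[i]
    nz = w > 0
    strength, degree = w[nz].sum(), int(nz.sum())
    unit_weight = strength / degree
    disparity = float(np.sum((w[nz] / strength) ** 2))  # concentration of weight
    return unit_weight, disparity

# hypothetical inter-sector weights
W = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
uw, y = node_measures(W, 0)   # strength 3, degree 2 -> unit weight 1.5
```

    High disparity means a sector's competitive or collaborative weight is concentrated on few counterparts; uniform weights give disparity 1/degree.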

  10. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    Science.gov (United States)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  11. Within and Between Panel Cointegration in the German Regional Output-Trade-FDI Nexus

    DEFF Research Database (Denmark)

    Mitze, Timo

    For spatial data with a sufficiently long time dimension, the concept of global cointegration has been recently included in the econometrics research agenda. Global cointegration arises when non-stationary time series are cointegrated both within and between spatial units. In this paper, we analyze...... the role of globally cointegrated variable relationships using German regional data (NUTS 1 level) for GDP, trade, and FDI activity during the period 1976–2005. Applying various homogeneous and heterogeneous panel data estimators to a Spatial Panel Error Correction Model (SpECM) for regional output growth...... allows us to analyze the short- and long-run impacts of internationalization activities. For the long-run cointegration equation, the empirical results support the hypothesis of export- and FDI-led growth. We also show that for export and outward FDI activity positive cross-regional eff ects are at work...

  12. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

    OpenAIRE

    Zhang, Jiapu

    2011-01-01

    Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but very effective canonical duality theory (CDT) unified global optimization algorithm. The convergence of this algorithm is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...

  13. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
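
    The bias-estimation step described above can be sketched as follows; the increment values are hypothetical (the real GFS increments are 3-D global fields per analysis cycle):

```python
import numpy as np

# Analysis increments: analysis minus 6-hr forecast, one row per cycle.
# Their time mean, divided by the 6-hr window, estimates the systematic
# model tendency error, assuming linear short-term error growth.
increments = np.array([[0.6, 0.5, 0.7],    # hypothetical field values,
                       [0.5, 0.6, 0.6],    # three grid points per cycle
                       [0.7, 0.4, 0.5]])
bias_per_hour = increments.mean(axis=0) / 6.0

def corrected_tendency(model_tendency, bias_per_hour):
    """Online correction: add the estimated bias as a forcing term in the
    model tendency equation."""
    return model_tendency + bias_per_hour

t = corrected_tendency(np.zeros(3), bias_per_hour)
```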

  14. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Science.gov (United States)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing those same algorithms into NASA's Future ATM Concepts Evaluation Tool (FACET); all three are compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiencies and are based on the potential application of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
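
    The airport-pair computations are independent, which is what makes the cluster approach workable; a minimal sketch with hypothetical coordinates, where a great-circle distance stands in for the wind-optimal trajectory solve:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def route_cost(pair):
    """Stand-in for a wind-optimal trajectory solve for one airport pair:
    great-circle distance in km on a spherical Earth."""
    (lat1, lon1), (lat2, lon2) = pair
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_angle = (math.sin(phi1) * math.sin(phi2)
                 + math.cos(phi1) * math.cos(phi2) * math.cos(dlon))
    return 6371.0 * math.acos(min(1.0, cos_angle))

pairs = [((37.6, -122.4), (40.6, -73.8)),   # roughly SFO-JFK
         ((51.5, -0.5), (35.6, 139.8))]     # roughly LHR-HND

# Each pair is an independent task, so the schedule maps cleanly onto a
# worker pool (processes or cluster nodes for a real CPU-bound solver).
with ThreadPoolExecutor(max_workers=4) as ex:
    costs = list(ex.map(route_cost, pairs))
```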

  15. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  16. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.
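
    GMC itself combines several classifiers; as background, the simplest form of ensemble combination, majority voting, can be sketched as:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists column-wise by majority vote
    (on a tie, the label seen first among the tied outputs wins)."""
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# hypothetical outputs of three classifiers on four samples
clf_a = [1, 0, 1, 1]
clf_b = [1, 1, 0, 1]
clf_c = [0, 0, 1, 1]
ensemble = majority_vote([clf_a, clf_b, clf_c])   # [1, 0, 1, 1]
```

    Even this naive combiner can beat its weakest member; GMC goes further by optimizing how the members are selected and weighted.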

  17. Conference on "State of the Art in Global Optimization : Computational Methods and Applications"

    CERN Document Server

    Pardalos, P

    1996-01-01

    Optimization problems abound in most fields of science, engineering, and technology. In many of these problems it is necessary to compute the global optimum (or a good approximation) of a multivariable function. The variables that define the function to be optimized can be continuous and/or discrete and, in addition, many times satisfy certain constraints. Global optimization problems belong to the complexity class of NP-hard problems. Such problems are very difficult to solve. Traditional descent optimization algorithms based on local information are not adequate for solving these problems. In most cases of practical interest the number of local optima increases, on the average, exponentially with the size of the problem (number of variables). Furthermore, most of the traditional approaches fail to escape from a local optimum in order to continue the search for the global solution. Global optimization has received a lot of attention in the past ten years, due to the success of new algorithms for solvin...

  18. A practical globalization of one-shot optimization for optimal design of tokamak divertors

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten, E-mail: maarten.blommaert@kuleuven.be [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany); Dekeyser, Wouter; Baelmans, Martine [KU Leuven, Department of Mechanical Engineering, 3001 Leuven (Belgium); Gauger, Nicolas R. [TU Kaiserslautern, Chair for Scientific Computing, 67663 Kaiserslautern (Germany); Reiter, Detlev [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany)

    2017-01-01

    In past studies, nested optimization methods were successfully applied to design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.

  19. Evaluation of dose prediction errors and optimization convergence errors of deliverable-based head-and-neck IMRT plans computed with a superposition/convolution dose algorithm

    International Nuclear Information System (INIS)

    Mihaylov, I. B.; Siebers, J. V.

    2008-01-01

    The purpose of this study is to evaluate dose prediction errors (DPEs) and optimization convergence errors (OCEs) resulting from use of a superposition/convolution dose calculation algorithm in deliverable intensity-modulated radiation therapy (IMRT) optimization for head-and-neck (HN) patients. Thirteen HN IMRT patient plans were retrospectively reoptimized. The IMRT optimization was performed in three sequential steps: (1) fast optimization in which an initial nondeliverable IMRT solution was achieved and then converted to multileaf collimator (MLC) leaf sequences; (2) mixed deliverable optimization that used a Monte Carlo (MC) algorithm to account for the incident photon fluence modulation by the MLC, whereas a superposition/convolution (SC) dose calculation algorithm was utilized for the patient dose calculations; and (3) MC deliverable-based optimization in which both fluence and patient dose calculations were performed with a MC algorithm. DPEs of the mixed method were quantified by evaluating the differences between the mixed optimization SC dose result and a MC dose recalculation of the mixed optimization solution. OCEs of the mixed method were quantified by evaluating the differences between the MC recalculation of the mixed optimization solution and the final MC optimization solution. The results were analyzed through dose volume indices derived from the cumulative dose-volume histograms for selected anatomic structures. Statistical equivalence tests were used to determine the significance of the DPEs and the OCEs. Furthermore, a correlation analysis between DPEs and OCEs was performed. The evaluated DPEs were within ±2.8% while the OCEs were within 5.5%, indicating that OCEs can be clinically significant even when DPEs are clinically insignificant. The full MC-dose-based optimization reduced normal tissue dose by as much as 8.5% compared with the mixed-method optimization results. 
The DPEs and the OCEs in the targets had correlation coefficients greater
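
    The two error measures can be expressed as simple percentage differences between the three dose calculations; the voxel doses below are hypothetical, for illustration only:

```python
import numpy as np

# Dose values (Gy) for a few voxels from the three workflow stages:
sc_dose   = np.array([60.1, 58.9, 61.5])  # SC dose of the mixed solution
mc_recalc = np.array([59.0, 58.0, 60.2])  # MC recalculation of that plan
mc_final  = np.array([58.2, 57.9, 59.1])  # final MC-optimized solution

# Dose prediction error: SC prediction vs. MC recalculation of the same plan.
dpe = 100.0 * (sc_dose - mc_recalc) / mc_recalc
# Optimization convergence error: MC recalculation vs. final MC optimum.
oce = 100.0 * (mc_recalc - mc_final) / mc_final
```

    In the study these comparisons are made through dose-volume indices rather than raw voxel doses, but the subtraction structure is the same.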

  20. A perturbed martingale approach to global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-08-01

    A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbations referred to as ‘coalescence’ and ‘scrambling’. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated and numerically demonstrated, to a limited extent, against the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, prominently the particle swarm optimization. - Highlights: • Evolutionary global optimization is posed as a perturbed martingale problem. • Resulting search via additive updates is a generalization over Gateaux derivatives. • Additional layers of random perturbation help avoid trapping at local extrema. • The approach ensures efficient design space exploration and high accuracy. • The method is numerically assessed via parameter recovery of chaotic oscillators.

  1. A dynamic global and local combined particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Chen Qunxian

    2009-01-01

    The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. The PSO algorithm has shown some important advantages by providing a high speed of convergence on specific problems, but it has a tendency to get stuck near an optimal solution, and it can be difficult to improve solution accuracy by fine-tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, global particle, and group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves search performance on the benchmark functions and show the effectiveness of the algorithm in solving optimization problems.
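For context, the baseline that DGLCPSO modifies can be sketched in a few lines. This is a generic global-best PSO on a test function, not the paper's DGLCPSO variant (whose local/global information-sharing scheme is not reproduced); all parameter values are illustrative:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus cognitive (personal-best) and social (global-best) pulls.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda p: float(np.sum(p**2)), dim=5)
print(f"sphere minimum found: {best_f:.2e}")
```

The "stuck near an optimal solution" behavior the abstract mentions shows up when all particles collapse onto `gbest`; DGLCPSO's dynamic sharing of local and group information is aimed at exactly that failure mode.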

  2. Global output feedback control for a class of high-order feedforward nonlinear systems with input delay.

    Science.gov (United States)

    Zha, Wenting; Zhai, Junyong; Fei, Shumin

    2013-07-01

    This paper investigates the problem of output feedback stabilization for a class of high-order feedforward nonlinear systems with time-varying input delay. First, a scaling gain is introduced into the system under a set of coordinate transformations. Then, the authors construct an observer and controller to make the nominal system globally asymptotically stable. Based on homogeneous domination approach and Lyapunov-Krasovskii functional, it is shown that the closed-loop system can be rendered globally asymptotically stable by the scaling gain. Finally, two simulation examples are provided to illustrate the effectiveness of the proposed scheme. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Performance Optimization of a Solar-Driven Multi-Step Irreversible Brayton Cycle Based on a Multi-Objective Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmadi Mohammad Hosein

    2016-01-01

    Full Text Available An applicable approach for a multi-step regenerative irreversible Brayton cycle, based on thermodynamics and on optimization of thermal efficiency and normalized output power, is presented in this work. In the present study, thermodynamic analysis and an NSGA-II algorithm are coupled to determine the optimum values of thermal efficiency and normalized power output for a Brayton cycle system. Moreover, three well-known decision-making methods are employed to select definitive answers from the outputs of the aforementioned approach. Finally, for error analysis, the average and maximum errors of the results are also calculated.

  4. Robust output observer-based control of neutral uncertain systems with discrete and distributed time delays: LMI optimization approach

    International Nuclear Information System (INIS)

    Chen, J.-D.

    2007-01-01

    In this paper, the robust control problem of output dynamic observer-based control for a class of uncertain neutral systems with discrete and distributed time delays is considered. A linear matrix inequality (LMI) optimization approach is used to design the new output dynamic observer-based controls. Three classes of observer-based controls are proposed, and the maximal perturbation bound is given. Based on the results of this paper, the constraint of matrix equality is not necessary for designing the observer-based controls. Finally, a numerical example is given to illustrate the usefulness of the proposed method.

  5. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
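The weighted-average idea can be illustrated on synthetic data. Assuming, as a toy stand-in for the paper's archive, that squared innovations have mean equal to a hidden hybrid of climatological and ensemble variance, a no-intercept regression on (observation-minus-forecast, ensemble-variance) pairs recovers weights of the kind the formula provides (the paper's actual derivation is not reproduced here; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
clim_var = 1.0
# Synthetic archive: the "true" error variance is a hidden hybrid of the
# climatological variance and the ensemble sample variance.
ens_var = rng.gamma(shape=4.0, scale=0.25, size=n)   # ensemble variances
true_var = 0.4 * clim_var + 0.6 * ens_var            # hidden hybrid relation
innov = rng.normal(0.0, np.sqrt(true_var))           # observation-minus-forecast

# Since E[innov^2 | ens_var] equals the true error variance, regressing
# squared innovations on [climatological variance, ensemble variance]
# recovers coefficients playing the role of the Hybrid covariance weights.
X = np.column_stack([np.full(n, clim_var), ens_var])
w, *_ = np.linalg.lstsq(X, innov**2, rcond=None)
print("recovered weights:", np.round(w, 2))  # should land near [0.4, 0.6]
```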

  6. Optimization of voltage output of energy harvesters with continuous mechanical rotation extracted from human motion (Conference Presentation)

    Science.gov (United States)

    Rashid, Evan; Hamidi, Armita; Tadesse, Yonas

    2017-04-01

    With the increasing popularity of portable devices for outdoor activities, portable energy harvesting devices are coming into the spotlight. A next-generation energy harvester, known as a hybrid energy harvester, can employ more than one mechanism in a single device to maximize the portion of energy harvested from any source of waste energy, such as motion, vibration, and heat. Despite a few recent attempts at creating hybrid portable devices, the output energy level still needs to be improved before they can be employed in commercial electronic systems or other applications. Moreover, implementing a practical hybrid energy harvester in different applications for further investigation remains challenging. This work proposes a novel approach to maximize and optimize the voltage output of hybrid energy harvesters, achieving a conversion efficiency, normalized by the total mass of the hybrid device, greater than the simple arithmetic sum of the individual harvesting mechanisms. The energy harvester previously proposed by Larkin and Tadesse [1] is used as a baseline, and a continuous unidirectional rotation is incorporated to maximize and optimize the output. The device harvests mechanical energy from oscillatory motion and converts it to electrical energy through electromagnetic and piezoelectric systems. The newly designed mechanism upgrades the device so that it can harvest energy from both rotational and linear motions by using magnets. Likewise, the piezoelectric section is optimized to harvest at least 10% more energy. Finally, the device was scaled down and tested with different vibration sources in the immediate environment, including machinery operation, a bicycle, door motion during opening and closing, and human motion. Comparison with results from the literature shows that the current device can be employed in small commercial electronic devices to extend battery usage or serve as a backup.

  7. Thermo-economic multi-objective optimization of solar dish-Stirling engine by implementing evolutionary algorithm

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad H.; Sayyaadi, Hoseyn; Mohammadi, Amir H.; Barranco-Jimenez, Marco A.

    2013-01-01

    Highlights: • Thermo-economic multi-objective optimization of a solar dish-Stirling engine is studied. • Application of the evolutionary algorithm is investigated. • Error analysis is done to find the error in the investigation. - Abstract: In recent years, remarkable attention has been drawn to the Stirling engine due to its noticeable advantages; for instance, many resources, such as biomass, fossil fuels and solar energy, can be applied as its heat source. A great number of studies have been conducted on the Stirling engine, and finite-time thermo-economics is one of them. In the present study, the dimensionless thermo-economic objective function, thermal efficiency and dimensionless power output are optimized for a dish-Stirling system using finite-time thermo-economic analysis and the NSGA-II algorithm. Optimized answers are chosen from the results using three decision-making methods. Error analysis is done to find the error in the investigation.

  8. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    Energy Technology Data Exchange (ETDEWEB)

    Ford, E; Phillips, M; Bojechko, C [University of Washington, Seattle, WA (United States)

    2015-06-15

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criteria. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97 scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry in detecting variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
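The detectability analysis can be mimicked on invented gamma pass rates: the AUC is the Mann-Whitney probability that an error case scores below a clean case. A sketch with hypothetical numbers, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical gamma pass rates (%): error-free deliveries vs. deliveries
# with a simulated systematic MLC error. Illustrative values only.
clean = rng.normal(96.0, 2.0, size=200)
error = rng.normal(90.0, 3.0, size=200)

def auc(neg, pos):
    """Mann-Whitney AUC: probability that a randomly chosen error case has a
    lower gamma pass rate than a randomly chosen clean case (ties count 1/2)."""
    neg = np.asarray(neg)[:, None]
    pos = np.asarray(pos)[None, :]
    return (pos < neg).mean() + 0.5 * (pos == neg).mean()

print(f"AUC = {auc(clean, error):.2f}")
```

An AUC near 0.5 corresponds to the abstract's poorly detectable cases (random MLC leaf errors, patient shifts); values approaching 1 correspond to the well-detected systematic errors.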

  9. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of the systems by solving the appropriate inverse problems. The input data desirable for the solution are obtained from the analysis of experimentally determined consecutive characteristics of a system or process. The sought causal characteristics include the coefficients of the equations of the object's mathematical model, boundary conditions, etc. The optimization approach is one of the main ways to solve inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in identification and computational diagnosis problems, as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligence technologies. The increasingly complicated systems under optimization observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating the derivatives is difficult and unreliable, which leads to the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extreme problem. When the number of variables is large, stochastic global optimization algorithms are used. As stochastic algorithms yield too expensive solutions, this drawback restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search

  10. A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems

    Directory of Open Access Journals (Sweden)

    Leilei Cao

    2016-01-01

    Full Text Available A Guiding Evolutionary Algorithm (GEA) with a greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in the GEA is crossed with the current global best individual instead of a randomly selected one. The current best individual serves as a guide that attracts offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that the GEA outperformed the three typical global optimization algorithms with which it was compared.
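A minimal continuous-variable sketch of the guiding idea (crossover with the global best, dynamic mutation, greedy parent-vs-child selection), with invented parameter values and omitting the paper's local search mechanism:

```python
import numpy as np

def gea(f, dim, pop_size=40, iters=300, seed=0):
    """Guided evolution: cross every individual with the current global best,
    mutate with a decaying probability, and keep the better of parent/child."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for t in range(iters):
        best = pop[fit.argmin()]
        # Crossover with the global best: blend each gene toward the guide.
        alpha = rng.random((pop_size, dim))
        kids = alpha * pop + (1 - alpha) * best
        # Dynamic mutation: probability and step size decay over the run.
        decay = 1 - t / iters
        mask = rng.random((pop_size, dim)) < 0.2 * decay
        kids = kids + mask * rng.normal(0.0, 0.5 * decay, (pop_size, dim))
        kid_fit = np.array([f(k) for k in kids])
        # Greedy selection: a child replaces its parent only if it is better.
        better = kid_fit < fit
        pop[better], fit[better] = kids[better], kid_fit[better]
    return pop[fit.argmin()], fit.min()

x_best, f_best = gea(lambda v: float(np.sum(v**2)), dim=5)
print(f"sphere minimum found: {f_best:.2e}")
```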

  11. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958...... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic....... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic...

  12. Constrained Optimization of MIMO Training Sequences

    Directory of Open Access Journals (Sweden)

    Coon Justin P

    2007-01-01

    Full Text Available Multiple-input multiple-output (MIMO) systems have shown a huge potential for increased spectral efficiency and throughput. With an increasing number of transmit antennas comes the burden of providing training for channel estimation for coherent detection. In some special cases, training sequences that are optimal in the mean-squared error (MSE) sense have been designed. However, in many practical systems it is not feasible to find optimal solutions analytically, and numerical techniques must be used. In this paper, two systems (unique word (UW) single carrier, and OFDM with nulled subcarriers) are considered, and a method of designing near-optimal training sequences using nonlinear optimization techniques is proposed. In particular, interior-point (IP) algorithms such as the barrier method are discussed. Although the two systems seem unrelated, the cost function, which is the MSE of the channel estimate, is shown to be effectively the same for each scenario. Also, additional constraints, such as the peak-to-average power ratio (PAPR), are considered and shown to be easily included in the optimization process. Numerical examples illustrate the effectiveness of the designed training sequences, both in terms of MSE and bit-error rate (BER).
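The cost function can be checked numerically: for a least-squares channel estimate, the MSE is sigma^2 * trace((S^H S)^{-1}), which orthogonal training minimizes under a total power constraint. A small sketch with illustrative dimensions, not the paper's UW or OFDM configurations:

```python
import numpy as np

def ls_mse(S, sigma2=0.1):
    """MSE of the least-squares channel estimate for training matrix S:
    sigma2 * trace((S^H S)^{-1})."""
    G = S.conj().T @ S
    return sigma2 * np.trace(np.linalg.inv(G)).real

nt, L = 4, 16  # transmit antennas, training length
# Orthogonal training: nt columns of an L-point DFT matrix.
n = np.arange(L)
S_orth = np.exp(-2j * np.pi * np.outer(n, np.arange(nt)) / L) / np.sqrt(nt)

# Random training, normalized to the same total power for a fair comparison.
rng = np.random.default_rng(0)
S_rand = rng.normal(size=(L, nt)) + 1j * rng.normal(size=(L, nt))
S_rand *= np.linalg.norm(S_orth) / np.linalg.norm(S_rand)

# Orthogonal columns minimize trace((S^H S)^{-1}) at fixed Frobenius norm.
print(ls_mse(S_orth) <= ls_mse(S_rand))
```

This is the idealized unconstrained case; once constraints such as PAPR or nulled subcarriers enter, closed-form optima are generally unavailable, which is what motivates the paper's interior-point approach.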

  13. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  14. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  15. Generation of Articulated Mechanisms by Optimization Techniques

    DEFF Research Database (Denmark)

    Kawamoto, Atsushi

    2004-01-01

    optimization [Paper 2] 3. Branch and bound global optimization [Paper 3] 4. Path-generation problems [Paper 4] In terms of the objective of the articulated mechanism design problems, the first to third papers deal with maximization of output displacement, while the fourth paper solves prescribed path...... generation problems. From a mathematical programming point of view, the methods proposed in the first and third papers are categorized as deterministic global optimization, while those of the second and fourth papers are categorized as gradient-based local optimization. With respect to design variables, only...... directly affects the result of the associated sensitivity analysis. Another critical issue for mechanism design is the concept of mechanical degrees of freedom and this should be also considered for obtaining a proper articulated mechanism. The thesis treats this inherently discrete criterion in some...

  16. Nonlinear observer output-feedback MPC treatment scheduling for HIV

    Directory of Open Access Journals (Sweden)

    Zurakowski Ryan

    2011-05-01

    Full Text Available Abstract Background Mathematical models of the immune response to the Human Immunodeficiency Virus demonstrate the potential for dynamic schedules of Highly Active Anti-Retroviral Therapy to enhance Cytotoxic Lymphocyte-mediated control of HIV infection. Methods In previous work we have developed a model predictive control (MPC) based method for determining optimal treatment interruption schedules for this purpose. In this paper, we introduce a nonlinear observer for the HIV-immune response system and an integrated output-feedback MPC approach for implementing the treatment interruption scheduling algorithm using the easily available viral load measurements. We use Monte-Carlo approaches to test robustness of the algorithm. Results The nonlinear observer shows robust state tracking while preserving state positivity both for continuous and discrete measurements. The integrated output-feedback MPC algorithm stabilizes the desired steady-state. Monte-Carlo testing shows significant robustness to modeling error, with 90% success rates in stabilizing the desired steady-state with 15% variance from nominal on all model parameters. Conclusions The possibility of enhancing immune responsiveness to HIV through dynamic scheduling of treatment is exciting. Output-feedback Model Predictive Control is uniquely well-suited to solutions of these types of problems. The unique constraints of state positivity and very slow sampling are addressable by using a special-purpose nonlinear state estimator, as described in this paper. This shows the possibility of using output-feedback MPC-based algorithms for this purpose.

  17. The impact on chinese economic growth and energy consumption of the Global Financial Crisis: An input-output analysis

    International Nuclear Information System (INIS)

    Yuan, Chaoqing; Liu, Sifeng; Xie, Naiming

    2010-01-01

    The dependence on foreign trade has increased sharply in China, and the Chinese economy is therefore clearly export-oriented. The Global Financial Crisis will consequently impact Chinese economic growth violently. The Chinese government has recently adopted some effective measures to fight the Global Financial Crisis, the most important being the 4 trillion Yuan ($586 billion) stimulus plan announced on November 9, 2008. This paper uses input-output analysis to discuss the influence of the Global Financial Crisis, and of the stimulus plan against it, on energy consumption and economic growth. The results show that the fall in exports caused by the Global Financial Crisis will lead to a decrease of 7.33% in GDP (Gross Domestic Product) and a reduction of 9.21% in energy consumption; the stimulus plan will lead to an increase of 4.43% in economic growth and an increase of 1.83% in energy consumption; and during the Global Financial Crisis, energy consumption per unit GDP will fall in China. (author)
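The input-output mechanics behind such estimates can be sketched with a tiny Leontief model: gross output solves x = (I - A)^{-1} d, so a drop in export demand d propagates through all sectors. All coefficients below are invented for illustration, not taken from Chinese input-output tables:

```python
import numpy as np

# Hypothetical three-sector technical-coefficient matrix A (illustrative
# numbers): A[i, j] is the input from sector i needed per unit of sector
# j's output.
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.1, 0.2]])
energy_intensity = np.array([0.5, 1.2, 0.8])  # energy use per unit output

def gross_output(final_demand):
    """Leontief model: solve (I - A) x = d for the gross output x."""
    return np.linalg.solve(np.eye(3) - A, final_demand)

d_base = np.array([100.0, 80.0, 60.0])
d_shock = d_base * np.array([0.9, 1.0, 1.0])  # 10% export drop in sector 1

x0, x1 = gross_output(d_base), gross_output(d_shock)
e0, e1 = energy_intensity @ x0, energy_intensity @ x1
print(f"output change: {100 * (x1.sum() / x0.sum() - 1):.2f}%")
print(f"energy change: {100 * (e1 / e0 - 1):.2f}%")
```

Because the Leontief inverse spreads the demand shock across all sectors, total output and total energy use both fall by more than a naive single-sector accounting would suggest, which is the qualitative effect the paper quantifies.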

  18. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    Science.gov (United States)

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2017-01-01

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximate the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II in two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA gets better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
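The core fitness computation can be sketched as follows. This uses a common augmented Tchebychev ASF and only the utopian reference point; Global WASF-GA additionally scores solutions against the nadir point and builds fronts from both, which this toy omits:

```python
import numpy as np

def asf(obj, ref, w, rho=1e-6):
    """Augmented Tchebychev-type achievement scalarizing function (smaller is
    better). A textbook form; the paper's exact formulation may differ."""
    d = (obj - ref) / w
    return d.max() + rho * d.sum()

# Toy nondominated candidates on a biobjective front.
front = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
utopian = np.array([0.0, 0.0])
weights = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])

# Each weight vector picks the candidate minimizing the ASF, so a
# well-distributed weight set spreads selections along the front.
picks = [int(np.argmin([asf(p, utopian, w) for p in front])) for w in weights]
print(picks)  # → [0, 1, 2]
```

This is the mechanism the abstract describes: varying the weight vector steers the scalarized search toward different regions, so the union of per-weight winners approximates the whole Pareto optimal front.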

  19. Global optimization for overall HVAC systems - Part I problem formulation and analysis

    International Nuclear Information System (INIS)

    Lu Lu; Cai Wenjian; Chai, Y.S.; Xie Lihua

    2005-01-01

    This paper presents the global optimization technologies for overall heating, ventilating and air conditioning (HVAC) systems. The objective function of global optimization and constraints are formulated based on mathematical models of the major components. All these models are associated with power consumption components and heat exchangers for transferring cooling load. The characteristics of all the major components are briefly introduced by models, and the interactions between them are analyzed and discussed to show the complications of the problem. According to the characteristics of the operating components, the complicated original optimization problem for overall HVAC systems is transformed and simplified into a compact form ready for optimization

  20. An Improved Fuzzy Logic Controller Design for PV Inverters Utilizing Differential Search Optimization

    Directory of Open Access Journals (Sweden)

    Ammar Hussein Mutlag

    2014-01-01

    Full Text Available This paper presents an adaptive fuzzy logic controller (FLC) design technique for photovoltaic (PV) inverters using the differential search algorithm (DSA). This technique avoids the exhaustive traditional trial-and-error procedure for obtaining the membership functions (MFs) used in conventional FLCs. This technique is implemented during the inverter design phase by generating adaptive MFs based on the evaluation results of the objective function formulated by the DSA. In this work, the mean square error (MSE) of the inverter output voltage is used as the objective function. The DSA optimizes the MFs such that the inverter provides the lowest MSE for output voltage and improves the performance of the PV inverter output in terms of amplitude and frequency. The design procedure and accuracy of the optimum FLC are illustrated and investigated using simulations conducted for a 3 kW three-phase inverter in a MATLAB/Simulink environment. Results show that the proposed controller can successfully obtain the desired output when different linear and nonlinear loads are connected to the system. Furthermore, the inverter has reasonably low steady-state error and fast response to reference variation.

  1. Quaternion-based adaptive output feedback attitude control of spacecraft using Chebyshev neural networks.

    Science.gov (United States)

    Zou, An-Min; Dev Kumar, Krishna; Hou, Zeng-Guang

    2010-09-01

    This paper investigates the problem of output feedback attitude control of an uncertain spacecraft. Two robust adaptive output feedback controllers based on Chebyshev neural networks (CNN), termed adaptive neural network (NN) controller-I and adaptive NN controller-II, are proposed for the attitude tracking control of spacecraft. The four-parameter representations (quaternions) are employed to describe the spacecraft attitude for global representation without singularities. A nonlinear reduced-order observer is used to estimate the derivative of the spacecraft output, and the CNN is introduced to further improve the control performance through approximating the spacecraft attitude motion. The implementation of the basis functions of the CNN used in the proposed controllers depends only on the desired signals, and a smooth robust compensator using the hyperbolic tangent function is employed to counteract the CNN approximation errors and external disturbances. The adaptive NN controller-II can efficiently avoid the over-estimation problem (i.e., the bound of the CNN's output is much larger than that of the approximated unknown function, and hence, the control input may be very large) existing in the adaptive NN controller-I. Both adaptive output feedback controllers using the CNN can guarantee that all signals in the resulting closed-loop system are uniformly ultimately bounded. For performance comparisons, the standard adaptive controller using the linear parameterization of spacecraft attitude motion is also developed. Simulation studies are presented to show the advantages of the proposed CNN-based output feedback approach over the standard adaptive output feedback approach.

  2. Optimal Error Estimates of Two Mixed Finite Element Methods for Parabolic Integro-Differential Equations with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2013-05-01

    In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator and without using a parabolic-type duality technique, optimal L2-error estimates are derived for semidiscrete approximations, when the initial condition is in L2. Due to the presence of the integral term, it is, further, observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore, it unifies both the theories, i.e., one for smooth data and other for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.

  3. Estimation of the Carbon Footprint and Global Warming Potential in Rice Production Systems

    International Nuclear Information System (INIS)

    Dastan, S.; Soltani, F.; Noormohamadi, G.; Madani, H.; Yadi, R.

    2016-01-01

    Optimal management approaches can be adopted in order to increase crop productivity and lower the carbon footprint of grain products. The objective of this study was to estimate the carbon (C) footprint and global warming potential of rice production systems. In this experiment, rice production systems (including SRI, improved and conventional) were studied. All activities, field operations and data in production methods and at different input rates were monitored and recorded during 2012. Results showed that the average global warming potential across production systems was 2803.25 kg CO2-eq ha-1. The highest and lowest global warming potentials were observed in the SRI and conventional systems, respectively. Global warming potential per unit energy input was lowest in the SRI system and highest in the conventional system. Also, the SRI and conventional systems had the maximum and minimum global warming potential per unit energy output, respectively. Therefore, the optimal management approach found in SRI resulted in a reduction in GHGs, global warming potential and the carbon footprint.

  4. Neoliberal Optimism: Applying Market Techniques to Global Health.

    Science.gov (United States)

    Mei, Yuyang

    2017-01-01

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  5. Some Insights of Spectral Optimization in Ocean Color Inversion

    Science.gov (United States)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote-sensing reflectance and the output remote-sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote-sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface in order to justify the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
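    A toy version of the spectral-optimization idea can show how minimizing the error function recovers the variables; the two-variable reflectance model and per-band constants below are illustrative assumptions, not the authors' model:

```python
# Toy forward model: reflectance ~ G * bb / (a + bb) per band, with assumed
# (illustrative) water absorption AW and backscattering BBW constants.
AW = [0.01, 0.05, 0.35]
BBW = [0.003, 0.002, 0.001]
G = 0.09

def forward(aph, bbp):
    """Modeled remote-sensing reflectance for 3 bands."""
    return [G * (BBW[i] + bbp) / (AW[i] + aph + BBW[i] + bbp) for i in range(3)]

def error(obs, aph, bbp):
    """Error function between observed and modeled reflectance spectra."""
    mod = forward(aph, bbp)
    return sum((m - o) ** 2 for m, o in zip(mod, obs))

truth = (0.06, 0.004)          # "true" phytoplankton absorption, particle bb
obs = forward(*truth)          # synthetic observation

# Coarse grid search over the error surface finds the minimum at the truth.
best = min(
    ((a / 1000, b / 10000) for a in range(1, 201) for b in range(1, 101)),
    key=lambda p: error(obs, *p),
)
```

    In practice a gradient-based or quasi-Newton optimizer replaces the grid search, which is exactly where the monotonicity of the error surface matters.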

  6. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. When taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  7. OPTIMAL practice conditions enhance the benefits of gradually increasing error opportunities on retention of a stepping sequence task.

    Science.gov (United States)

    Levac, Danielle; Driscoll, Kate; Galvez, Jessica; Mercado, Kathleen; O'Neil, Lindsey

    2017-12-01

    Physical therapists should implement practice conditions that promote motor skill learning after neurological injury. Errorful and errorless practice conditions are effective for different populations and tasks. Errorful learning provides opportunities for learners to make task-relevant choices. Enhancing learner autonomy through choice opportunities is a key component of the Optimizing Performance through Intrinsic Motivation and Attention for Learning (OPTIMAL) theory of motor learning. The objective of this study was to evaluate the interaction between error opportunity frequency and OPTIMAL (autonomy-supportive) practice conditions during stepping sequence acquisition in a virtual environment. Forty healthy young adults were randomized to autonomy-supportive or autonomy-controlling practice conditions, which differed in instructional language, focus of attention (external vs internal) and positive versus negative nature of verbal and visual feedback. All participants practiced 40 trials of 4 six-step stepping sequences in a random order. Each of the 4 sequences offered different amounts of choice opportunities about the next step via visual cue presentation (4 choices; 1 choice; gradually increasing [1-2-3-4] choices; and gradually decreasing [4-3-2-1] choices). Motivation and engagement were measured by the Intrinsic Motivation Inventory (IMI) and the User Engagement Scale (UES). Participants returned 1-3 days later for retention tests, where learning was measured by time to complete each sequence. No choice cues were offered on retention. Participants in the autonomy-supportive group outperformed the autonomy-controlling group at retention on all sequences (mean difference 2.88 s), including the errorful (4-choice) sequence. These findings, together with the effect of gradually increasing error opportunities over time, suggest that participants relied on implicit learning strategies for this full-body task and that feedback about successes minimized errors and reduced their potential information-processing benefits.

  8. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  9. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane-trend constraints selectively on the corresponding segments to further improve the accuracy of depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is considerably competitive with other state-of-the-art matching approaches.

  10. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw..., dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error...

  11. Global blending optimization of laminated composites with discrete material candidate selection and thickness variation

    DEFF Research Database (Denmark)

    Sørensen, Søren N.; Stolpe, Mathias

    2015-01-01

    rate. The capabilities of the method and the effect of active versus inactive manufacturing constraints are demonstrated on several numerical examples of limited size, involving at most 320 binary variables. Most examples are solved to guaranteed global optimality and may constitute benchmark examples...... but is, however, convex in the original mixed binary nested form. Convexity is the foremost important property of optimization problems, and the proposed method can guarantee the global or near-global optimal solution; unlike most topology optimization methods. The material selection is limited...... for popular topology optimization methods and heuristics based on solving sequences of non-convex problems. The results will among others demonstrate that the difficulty of the posed problem is highly dependent upon the composition of the constitutive properties of the material candidates....

  12. Robust Adaptive Beamforming with Sensor Position Errors Using Weighted Subspace Fitting-Based Covariance Matrix Reconstruction.

    Science.gov (United States)

    Chen, Peng; Yang, Yixin; Wang, Yong; Ma, Yuanliang

    2018-05-08

    When sensor position errors exist, the performance of recently proposed interference-plus-noise covariance matrix (INCM)-based adaptive beamformers may be severely degraded. In this paper, we propose a weighted subspace fitting-based INCM reconstruction algorithm to overcome sensor displacement for linear arrays. By estimating the rough signal directions, we construct a novel possible mismatched steering vector (SV) set. We analyze the proximity of the signal subspace from the sample covariance matrix (SCM) and the space spanned by the possible mismatched SV set. After solving an iterative optimization problem, we reconstruct the INCM using the estimated sensor position errors. Then we estimate the SV of the desired signal by solving an optimization problem with the reconstructed INCM. The main advantage of the proposed algorithm is its robustness against SV mismatches dominated by unknown sensor position errors. Numerical examples show that even if the position errors are up to half of the assumed sensor spacing, the output signal-to-interference-plus-noise ratio is only reduced by 4 dB. Beam patterns plotted using experiment data show that the interference suppression capability of the proposed beamformer outperforms other tested beamformers.
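    The steering-vector (SV) mismatch that sensor displacement creates can be illustrated for a narrowband uniform linear array; the array geometry and position errors below are hypothetical values chosen only to show the mechanism the reconstruction algorithm must combat:

```python
import cmath
import math

def steering_vector(theta_deg, positions, wavelength):
    """Narrowband steering vector for sensors at the given positions (metres)."""
    k = 2.0 * math.pi / wavelength
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * k * p * s) for p in positions]

lam = 1.0
# Nominal half-wavelength ULA vs. one with (hypothetical) position errors.
nominal = [i * lam / 2 for i in range(8)]
errors = [0.0, 0.02, -0.03, 0.01, 0.0, -0.02, 0.03, -0.01]
perturbed = [p + e for p, e in zip(nominal, errors)]

a_true = steering_vector(30.0, nominal, lam)
a_err = steering_vector(30.0, perturbed, lam)

# Euclidean SV mismatch caused purely by the displaced sensors.
mismatch = sum(abs(x - y) ** 2 for x, y in zip(a_true, a_err)) ** 0.5
```

    An INCM-reconstruction beamformer effectively estimates `errors` so that `a_err` rather than `a_true` is used when forming the weights.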

  13. Global myeloma research clusters, output, and citations: a bibliometric mapping and clustering analysis.

    Directory of Open Access Journals (Sweden)

    Jens Peter Andersen

    International collaborative research is a mechanism for improving the development of disease-specific therapies and for improving health at the population level. However, limited data are available to assess the trends in research output related to orphan diseases. We used bibliometric mapping and clustering methods to illustrate the level of fragmentation in myeloma research and the development of collaborative efforts. Publication data from Thomson Reuters Web of Science were retrieved for 2005-2009 and followed until 2013. We created a database of multiple myeloma publications, and we analysed impact and co-authorship density to identify scientific collaborations, developments, and international key players over time. The global annual publication volume for studies on multiple myeloma increased from 1,144 in 2005 to 1,628 in 2009, which represents a 43% increase. This increase is high compared to the 24% and 14% increases observed for lymphoma and leukaemia. The major proportion (>90%) of publications was from the US and EU over the study period. Analysis of output and impact in terms of citations identified several successful groups with a large number of intra-cluster collaborations in the US and EU. The US-based myeloma clusters clearly stand out as the most productive and highly cited, and the European Myeloma Network members exhibited a doubling of collaborative publications from 2005 to 2009, still increasing up to 2013. Multiple myeloma research output has increased substantially in the past decade. The fragmented European myeloma research activities based on national or regional groups are progressing, but they require a broad range of targeted research investments to improve multiple myeloma health care.

  14. On the Optimal Detection and Error Performance Analysis of the Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    The conventional minimum Euclidean distance (MED) receiver design is based on the assumption of ideal hardware transceivers and proper Gaussian noise in communication systems. Throughout this study, an accurate statistical model of various hardware impairments (HWIs) is presented. Then, an optimal maximum likelihood (ML) receiver is derived considering the distinct characteristics of the HWIs comprised of additive improper Gaussian noise and signal distortion. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds are derived. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over MED receiver and the tightness of the derived bounds.

  15. On the Optimal Detection and Error Performance Analysis of the Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah

    2018-01-15

    The conventional minimum Euclidean distance (MED) receiver design is based on the assumption of ideal hardware transceivers and proper Gaussian noise in communication systems. Throughout this study, an accurate statistical model of various hardware impairments (HWIs) is presented. Then, an optimal maximum likelihood (ML) receiver is derived considering the distinct characteristics of the HWIs comprised of additive improper Gaussian noise and signal distortion. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds are derived. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over MED receiver and the tightness of the derived bounds.

  16. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    Science.gov (United States)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance that enables hydrological researchers to produce probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
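    The Box-Cox family of schemes evaluated here is easy to state in code; this is a minimal sketch of the transform and a standardized residual in transformed space, with `sigma` treated as a known scale parameter purely for illustration:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a positive flow y; lam=0 reduces to the log scheme."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def standardized_residual(obs, sim, lam, sigma):
    """Residual in Box-Cox space. A constant-variance residual here maps back
    to larger raw errors at higher flows, i.e. heteroscedasticity."""
    return (boxcox(obs, lam) - boxcox(sim, lam)) / sigma
```

    The calibrated-lambda schemes in the study simply treat `lam` as an extra inferred parameter rather than fixing it at 0, 0.2 or 0.5.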

  17. Practical synchronization on complex dynamical networks via optimal pinning control

    Science.gov (United States)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, and the synchronization error at terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing numerical simulations on Star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.

  18. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    Science.gov (United States)

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
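    The Mahalanobis-distance test that drives adaptive fading can be sketched for a 2-D innovation; the fading rule below is an illustrative heuristic (inflate the predicted covariance when the distance exceeds a chi-square bound), not the paper's exact formula:

```python
def mahalanobis_sq(innov, S):
    """Squared Mahalanobis distance v^T S^{-1} v for a 2-D innovation v and
    2x2 innovation covariance S, used to detect process-modeling error."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return sum(innov[i] * inv[i][j] * innov[j]
               for i in range(2) for j in range(2))

def fading_factor(innov, S, threshold=5.991):
    """Illustrative fading heuristic: 5.991 is roughly the 95% chi-square
    quantile for 2 degrees of freedom; factor > 1 discounts old information."""
    return max(1.0, mahalanobis_sq(innov, S) / threshold)
```

    In a fading filter the returned factor would scale the predicted state covariance before the update, making the local filter trust recent measurements more when the model is mismatched.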

  19. Adaptive fuzzy dynamic surface control of nonlinear systems with input saturation and time-varying output constraints

    Science.gov (United States)

    Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.

    2018-02-01

    This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameter updates for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.

  20. Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2013-01-01

    We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. With this regard, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier augmented algorithm based on the modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.

  1. Statistical distributions of optimal global alignment scores of random protein sequences

    Directory of Open Access Journals (Sweden)

    Tang Jiaowei

    2005-10-01

    Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm, and optimal scores were fitted to the Gumbel, normal and gamma distributions respectively. The three-parameter gamma distribution performs best as the theoretical distribution function of global alignment scores, as it agrees well with the observed distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario in which the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This is useful for inferring homology between sequences whose relationship is unknown, through evaluation of the significance of gamma-distributed alignment scores.
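    The Needleman-Wunsch scores whose distribution is being fitted can be computed with the standard dynamic program; the scoring parameters below (match +1, mismatch -1, gap -2) are illustrative, not those used in the study:

```python
def needleman_wunsch_score(s, t, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score via the standard DP recurrence,
    keeping only two rows of the score matrix."""
    n, m = len(s), len(t)
    prev = [j * gap for j in range(m + 1)]          # align prefix of t to ""
    for i in range(1, n + 1):
        cur = [i * gap] + [0] * m                   # align prefix of s to ""
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + sub,          # substitution/match
                         prev[j] + gap,              # gap in t
                         cur[j - 1] + gap)           # gap in s
        prev = cur
    return prev[m]
```

    Scoring many pairs of shuffled sequences with this function yields the empirical score distribution to which the gamma, normal and Gumbel models are fitted.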

  2. An Efficient Approach for Energy Consumption Optimization and Management in Residential Building Using Artificial Bee Colony and Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Fazli Wahid

    2016-01-01

    The management of energy in residential buildings according to occupants' requirements and comfort is of vital importance. There are many proposals in the literature addressing the issue of user comfort and energy consumption management, taking different parameters into consideration. In this paper, we utilize the artificial bee colony (ABC) optimization algorithm to maximize user comfort and minimize energy consumption simultaneously. We propose a complete, user-friendly and energy-efficient model with different components. The user-set parameters and the environmental parameters are the inputs of the ABC, and the optimized parameters are its output. The error differences between the environmental parameters and the ABC-optimized parameters are the inputs of fuzzy controllers, which give the required energy as the outputs. The purpose of the optimization algorithm is to maximize the comfort index and minimize the error difference between the user-set parameters and the environmental parameters, which ultimately decreases the power consumption. The experimental results show that the proposed model is efficient in achieving a high comfort index along with minimized energy consumption.

  3. Towards a global multi-regional environmentally extended input-output database

    NARCIS (Netherlands)

    Tukker, Arnold; Poliakov, Evgueni; Heijungs, Reinout; Hawkins, Troy; Neuwahl, Frederik; Rueda-Cantuche, Jose M.; Giljum, Stefan; Moll, Stephan; Oosterhaven, Jan; Bouwmeester, Maaike

    2009-01-01

    This paper presents the strategy for a large EU-funded Integrated Project: EXIOPOL ("A New Environmental Accounting Framework Using Externality Data and Input-Output Tools for Policy Analysis"), with special attention for its part in environmentally extended (EE) input-output (IO) analysis. The

  4. Application of surrogate-based global optimization to aerodynamic design

    CERN Document Server

    Pérez, Esther

    2016-01-01

    Aerodynamic design, like many other engineering applications, is increasingly relying on computational power. The growing need for multi-disciplinarity and high fidelity in design optimization for industrial applications requires a huge number of repeated simulations in order to find an optimal design candidate. The main drawback is that each simulation can be computationally expensive – this becomes an even bigger issue when used within parametric studies, automated search or optimization loops, which typically may require thousands of analysis evaluations. The core issue of a design-optimization problem is the search process involved. However, when facing complex problems, the high-dimensionality of the design space and the high-multi-modality of the target functions cannot be tackled with standard techniques. In recent years, global optimization using meta-models has been widely applied to design exploration in order to rapidly investigate the design space and find sub-optimal solutions. Indeed, surrogat...

  5. External force back-projective composition and globally deformable optimization for 3-D coronary artery reconstruction

    International Nuclear Information System (INIS)

    Yang, Jian; Cong, Weijian; Fan, Jingfan; Liu, Yue; Wang, Yongtian; Chen, Yang

    2014-01-01

    The clinical value of the 3D reconstruction of a coronary artery is important for the diagnosis and intervention of cardiovascular diseases. This work proposes a method based on a deformable model for reconstructing coronary arteries from two monoplane angiographic images acquired from different angles. First, an external force back-projective composition model is developed to determine the external force, for which the force distributions in different views are back-projected to the 3D space and composited in the same coordinate system based on the perspective projection principle of x-ray imaging. The elasticity and bending forces are composited as an internal force to maintain the smoothness of the deformable curve. Second, the deformable curve evolves rapidly toward the true vascular centerlines in 3D space and angiographic images under the combination of internal and external forces. Third, densely matched correspondence among vessel centerlines is constructed using a curve alignment method. The bundle adjustment method is then utilized for the global optimization of the projection parameters and the 3D structures. The proposed method is validated on phantom data and routine angiographic images with consideration for space and re-projection image errors. Experimental results demonstrate the effectiveness and robustness of the proposed method for the reconstruction of coronary arteries from two monoplane angiographic images. The proposed method can achieve a mean space error of 0.564 mm and a mean re-projection error of 0.349 mm. (paper)

  6. Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem

    Directory of Open Access Journals (Sweden)

    Xiaomei Zhang

    2012-01-01

    We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which is used to provide a convenient method for justifying the proposed sufficient conditions. Then, the reformulation of the sufficient conditions follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.

  7. Multiple shooting applied to robust reservoir control optimization including output constraints on coherent risk measures

    DEFF Research Database (Denmark)

    Codas, Andrés; Hanssen, Kristian G.; Foss, Bjarne

    2017-01-01

    The production life of oil reservoirs starts under significant uncertainty regarding the actual economical return of the recovery process due to the lack of oil field data. Consequently, investors and operators make management decisions based on a limited and uncertain description of the reservoir....... In this work, we propose a new formulation for robust optimization of reservoir well controls. It is inspired by the multiple shooting (MS) method which permits a broad range of parallelization opportunities and output constraint handling. This formulation exploits coherent risk measures, a concept...

  8. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
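
    The swarm update described above (personal best, swarm leader, weighted velocity terms) can be sketched as follows. This is a minimal illustration; the sphere test function and all parameter values are our own choices, not those of the cited study:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer (illustrative parameter values)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # current swarm leader
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # update swarm leader
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)           # toy objective
best, best_val = pso(sphere)
```

    On this convex toy objective the swarm converges quickly; the point of PSO in the inversion context above is that the same update rule also explores multimodal misfit landscapes without a user-given starting model.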

  9. Error handling for the CDF online silicon vertex tracker

    CERN Document Server

    Bari, M; Cerri, A; Dell'Orso, Mauro; Donati, S; Galeotti, S; Giannetti, P; Morsani, F; Punzi, G; Ristori, L; Spinella, F; Zanetti, A M

    2001-01-01

    The online silicon vertex tracker (SVT) is composed of 104 VME 9U digital boards (of eight different types). Since the data output from the SVT (few MB/s) are a small fraction of the input data (200 MB/s), it is extremely difficult to track possible internal errors by using only the output stream. For this reason, several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams, and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named spy buffers, which act as built-in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be frozen at any time (e.g., on error detection) to take a snapshot of all data flowing through each SVT board. The spy buffers are coordinated at system level by the Spy Control Board. The architecture, design, and implementation of this system are described. (4 refs).
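
    A software analogue of the spy-buffer idea (a ring buffer continuously mirroring a data stream, frozen on error detection to take a snapshot) can be sketched as follows; the class and method names are hypothetical, not the SVT firmware interface:

```python
from collections import deque

class SpyBuffer:
    """Software analogue of an SVT spy buffer: a ring buffer that
    continuously mirrors a data stream and can be frozen on error."""
    def __init__(self, depth=1024):
        self.buf = deque(maxlen=depth)   # oldest words drop off automatically
        self.frozen = None
    def push(self, word):
        if self.frozen is None:          # freezing stops the copy
            self.buf.append(word)
    def freeze(self):
        """Snapshot the last `depth` words, e.g. on error detection."""
        self.frozen = list(self.buf)

sb = SpyBuffer(depth=4)
for word in range(10):
    sb.push(word)
sb.freeze()                              # snapshot holds the last four words
```

    As in the hardware system, the snapshot captures the most recent traffic on the stream at the moment of the freeze, while subsequent pushes leave it untouched.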

  10. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases, which implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals, which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task performed by the operating personnel in a nuclear power plant

  11. Smart photodetector arrays for error control in page-oriented optical memory

    Science.gov (United States)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data

  12. Optimal design for the output sensitivity of a binary-optics beam splitter

    International Nuclear Information System (INIS)

    Chen Ran; Guo Yongkang; Yao Jun

    1998-01-01

    The authors use a differential-integral algorithm for the optimal design of a binary-optics beam splitter. Simulation results show that a splitter designed by this method keeps its output relatively stable when the shape and intensity of the input change. The designed diffraction efficiency reaches 92.67%, and the nonuniformity of the intensity is less than 0.002%. When the input changes from a Gaussian to a distorted Gaussian, or to a rectangular facula with small random undulations, or to a plane wave, the diffraction efficiency still reaches at least 89.60%, and the highest nonuniformity of the intensity is 11.49%. Considering both the diffraction efficiency and the intensity nonuniformity, this result is better than those previously reported. Researchers worldwide have shown interest in the use of binary-optics devices in ICF drivers

  13. A global optimization method for evaporative cooling systems based on the entransy theory

    International Nuclear Information System (INIS)

    Yuan, Fang; Chen, Qun

    2012-01-01

    Evaporative cooling technique, one of the most widely used methods, is essential to both energy conservation and environmental protection. This contribution introduces a global optimization method, based on the entransy theory, for indirect evaporative cooling systems with coupled heat and mass transfer processes, in order to improve their energy efficiency. First, we classify the irreversible processes in the system into the heat transfer process, the coupled heat and mass transfer process and the mixing process of waters in different branches, where the irreversibility is evaluated by the entransy dissipation. Then, through the total system entransy dissipation, we establish the theoretical relationship of the user demands with both the geometrical structures of each heat exchanger and the operating parameters of each fluid, and derive two optimization equation groups focusing on two typical optimization problems. Finally, an indirect evaporative cooling system is taken as an example to illustrate the application of the newly proposed optimization method. It is concluded that there exists an optimal circulating water flow rate with the minimum total thermal conductance of the system. Furthermore, with different user demands and moist air inlet conditions, it is global optimization, rather than parametric analysis, that obtains the optimal performance of the system. -- Highlights: ► Introduce a global optimization method for evaporative cooling systems. ► Establish the direct relation between user demands and the design parameters. ► Obtain two groups of optimization equations for two typical optimization objectives. ► Solving the equations offers the optimal design parameters for the system. ► Provide the instruction for the design of coupled heat and mass transfer systems.

  14. Self-adaptive global best harmony search algorithm applied to reactor core fuel management optimization

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.

    2013-01-01

    Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • The SGHS optimization algorithm obtained better average fitness relative to the basic HS and GHS algorithms. • The upshot of the SGHS implementation in LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm has some modifications in comparison with the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as the dynamic adjustment of parameters. To demonstrate the ability of SGHS to find an optimal configuration of fuel assemblies, basic HS and GHS algorithms have also been developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average-current nodal expansion code which solves the multigroup diffusion equation by employing first and second orders of the Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per fuel assembly. Loading pattern optimization was performed using the SGHSNE package for some test cases to demonstrate the SGHS algorithm's capability of converging to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and that, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems
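
    Basic Harmony Search, the baseline that SGHS modifies, can be sketched as follows. This is a minimal illustration on a toy sphere objective; the parameter values (harmony memory size, HMCR, PAR, bandwidth) are our own choices, not the SGHSNE settings:

```python
import random

def harmony_search(f, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=3000, lo=-5.0, hi=5.0, seed=2):
    """Minimal basic Harmony Search (fixed parameters; SGHS adapts them)."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:              # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        val = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if val < scores[worst]:                  # replace worst harmony
            memory[worst], scores[worst] = new, val
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
best, best_val = harmony_search(sphere)
```

    SGHS's key change relative to this sketch is that HMCR, PAR and the bandwidth are adjusted dynamically during the search instead of staying fixed.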

  15. Global-local optimization of flapping kinematics in hovering flight

    KAUST Repository

    Ghommem, Mehdi; Hajj, M. R.; Mook, Dean T.; Stanford, Bret K.; Beran, Philip S.; Watson, Layne T.

    2013-01-01

    The kinematics of a hovering wing are optimized by combining the 2-d unsteady vortex lattice method with a hybrid of global and local optimization algorithms. The objective is to minimize the required aerodynamic power under a lift constraint. The hybrid optimization is used to efficiently navigate the complex design space due to wing-wake interference present in hovering aerodynamics. The flapping wing is chosen so that its chord length and flapping frequency match the morphological and flight properties of two insects with different masses. The results suggest that imposing a delay between the different oscillatory motions defining the flapping kinematics, and controlling the way through which the wing rotates at the end of each half stroke can improve aerodynamic power under a lift constraint. Furthermore, our optimization analysis identified optimal kinematics that agree fairly well with observed insect kinematics, as well as previously published numerical results.

  17. Uncertainties in global radiation time series forecasting using machine learning: The multilayer perceptron case

    International Nuclear Information System (INIS)

    Voyant, Cyril; Notton, Gilles; Darras, Christophe; Fouilloy, Alexis; Motte, Fabrice

    2017-01-01

    As global solar radiation forecasting is a very important challenge, several methods are devoted to this goal with different levels of accuracy and confidence. In this study we propose to better understand how uncertainty is propagated in the context of global radiation time series forecasting using machine learning. We propose to decompose the error into four kinds of uncertainty: the error due to the measurement, the variability of the time series, the machine learning uncertainty and the error related to the horizon. Together these error components determine a global uncertainty, generating prediction bands related to the prediction efficiency. We have also defined a reliability index which could be very interesting for the grid manager in order to estimate the validity of predictions. We have tested this method on a multilayer perceptron, which is a popular machine learning technique. We have shown that quantifying the global error and its components is essential for estimating the reliability of the model outputs. The described method has been successfully applied to four meteorological stations in the Mediterranean area. - Highlights: • Solar irradiation predictions require confidence bands. • There are many kinds of uncertainties to take into account in order to propose prediction bands. • The ranking of the different kinds of uncertainties is essential to propose an operational tool for the grid managers.
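
    If the four error components are assumed independent, their variances add in quadrature to give the global uncertainty and a prediction band. A sketch with hypothetical component magnitudes (the numbers below are illustrative, not from the paper):

```python
import math

# Hypothetical per-component standard errors in W/m^2 (illustrative only),
# assumed statistically independent so their variances add in quadrature.
components = {
    "measurement": 12.0,
    "time_series_variability": 45.0,
    "machine_learning": 30.0,
    "horizon": 20.0,
}
total = math.sqrt(sum(s * s for s in components.values()))  # global uncertainty
band = 1.96 * total   # ~95% prediction band, assuming Gaussian errors
```

    The independence assumption is the simplest way to combine the components; correlated components would require covariance terms as well.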

  18. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with high trust values. The L1-optimization and social-characteristic methods coordinate with each other and can correct propagated errors even when the error fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.

  19. A hybrid method for forecasting the energy output of photovoltaic systems

    International Nuclear Information System (INIS)

    Ramsami, Pamela; Oree, Vishwamitra

    2015-01-01

    Highlights: • We propose a novel hybrid technique for predicting the daily PV energy output. • Multiple linear regression, FFNN and GRNN artificial neural networks are used. • Stepwise regression is used to select the most relevant meteorological parameters. • SR-FFNN reduces the average dispersion and overall bias in prediction errors. • Accuracy metrics of hybrid models are better than those of single-stage models. - Abstract: The intermittent nature of solar energy poses many challenges to renewable energy system operators in terms of operational planning and scheduling. Predicting the output of photovoltaic systems is therefore essential for managing the operation and assessing the economic performance of power systems. This paper presents a new technique for forecasting the 24-h ahead stochastic energy output of photovoltaic systems based on daily weather forecasts. A comparison of the performance of the hybrid technique with conventional linear regression and artificial neural network models is also reported. Initially, three single-stage models were designed, namely the generalized regression neural network, feedforward neural network and multiple linear regression. Subsequently, a hybrid-modeling approach was adopted by applying stepwise regression to select input variables of greater importance. These variables were then fed to the single-stage models, resulting in three hybrid models. They were then validated by comparing the forecasts of the models with a measured dataset from an operational photovoltaic system. The accuracy of each model was evaluated based on the correlation coefficient, mean absolute error, mean bias error and root mean square error values. Simulation results revealed that the hybrid models perform better than their corresponding single-stage models. The stepwise regression-feedforward neural network hybrid model outperformed the other models with root mean square error, mean absolute error, mean bias error and
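
    The four accuracy metrics named above can be computed as follows. This is a self-contained sketch; the measured/predicted values are illustrative, not from the paper's dataset:

```python
import math

def accuracy_metrics(measured, predicted):
    """RMSE, MAE, MBE and correlation coefficient for model comparison."""
    n = len(measured)
    errors = [p - m for m, p in zip(measured, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mbe = sum(errors) / n            # sign reveals systematic over/under-bias
    mm = sum(measured) / n
    pm = sum(predicted) / n
    cov = sum((m - mm) * (p - pm) for m, p in zip(measured, predicted))
    sm = math.sqrt(sum((m - mm) ** 2 for m in measured))
    sp = math.sqrt(sum((p - pm) ** 2 for p in predicted))
    return rmse, mae, mbe, cov / (sm * sp)

measured = [4.1, 5.0, 3.2, 6.3, 5.5]    # illustrative daily kWh values
predicted = [4.0, 5.2, 3.0, 6.0, 5.9]
rmse, mae, mbe, r = accuracy_metrics(measured, predicted)
```

    RMSE penalizes large errors, MAE measures average dispersion, and MBE isolates overall bias, which is why the highlights report dispersion and bias separately.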

  20. Globalization and changing trends of biomedical research output.

    Science.gov (United States)

    Conte, Marisa L; Liu, Jing; Schnell, Santiago; Omary, M Bishr

    2017-06-15

    The US continues to lead the world in research and development (R&D) expenditures, but there is concern that stagnation in federal support for biomedical research in the US could undermine the leading role the US has played in biomedical and clinical research discoveries. As a readout of research output in the US compared with other countries, assessment of original research articles published by US-based authors in ten clinical and basic science journals during 2000 to 2015 showed a steady decline of articles in high-ranking journals or no significant change in mid-ranking journals. In contrast, publication output originating from China-based investigators, in both high- and mid-ranking journals, has steadily increased commensurate with significant growth in R&D expenditures. These observations support the current concerns of stagnant and year-to-year uncertainty in US federal funding of biomedical research.

  1. Global Optimization Employing Gaussian Process-Based Bayesian Surrogates

    Directory of Open Access Journals (Sweden)

    Roland Preuss

    2018-03-01

    Full Text Available The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive it is necessary to exploit the computational budget in the best possible manner. If for a few input parameter settings an output data set has been acquired, one could be interested in taking these data as a basis for finding an extremum and possibly an input parameter set for further computer simulations to determine it—a task which belongs to the realm of global optimization. Within the Bayesian framework we utilize Gaussian processes for the creation of a surrogate model function adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we make the assumption that only the use of their expectation values is sufficient. While this shortcut facilitates a quickly accessible surrogate, it is somewhat justified by the fact that we are not interested in a full representation of the model by the surrogate but to reveal its maximum. To accomplish this the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions—expected improvement and maximum variance—in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either remains in the points found or the surrogate model does not change with the iteration. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach.
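
    The surrogate-plus-utility loop described above can be illustrated, for the expected-improvement utility only, with a small numpy sketch. The kernel, its fixed hyperparameters, the toy objective and the grid search over the acquisition function are all our own simplifying assumptions, not the paper's self-consistent hyperparameter treatment:

```python
import math
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(xt, yt, xs, noise=1e-6):
    """GP posterior mean and standard deviation at query points xs."""
    K = rbf(xt, xt) + noise * np.eye(len(xt))
    Ks = rbf(xt, xs)
    mu = Ks.T @ np.linalg.solve(K, yt)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI utility for minimization."""
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

f = lambda x: np.sin(3 * x) + x ** 2        # toy objective on [-1, 2]
xs = np.linspace(-1.0, 2.0, 300)            # candidate grid
xt = np.array([-0.5, 0.5, 1.5])             # initial design points
yt = f(xt)
for _ in range(10):                         # sequential acquisition loop
    mu, sigma = gp_posterior(xt, yt, xs)
    x_next = xs[np.argmax(expected_improvement(mu, sigma, yt.min()))]
    xt = np.append(xt, x_next)
    yt = np.append(yt, f(x_next))
best_x = xt[np.argmin(yt)]
```

    The paper's refinement is to alternate this EI utility with a maximum-variance utility, trading exploitation against exploration; the loop structure stays the same.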

  2. An Evaluation of the Sniffer Global Optimization Algorithm Using Standard Test Functions

    Science.gov (United States)

    Butler, Roger A. R.; Slaminka, Edward E.

    1992-03-01

    The performance of Sniffer—a new global optimization algorithm—is compared with that of Simulated Annealing. Using the number of function evaluations as a measure of efficiency, the new algorithm is shown to be significantly better at finding the global minimum of seven standard test functions. Several of the test functions used have many local minima and very steep walls surrounding the global minimum. Such functions are intended to thwart global minimization algorithms.
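
    The Rastrigin function is a standard test function of the kind described, with a dense lattice of local minima around the global minimum, and the evaluation counter mirrors the paper's efficiency measure. A sketch (the wrapper and the random-search baseline are our own illustration, not the Sniffer algorithm):

```python
import math
import random

def rastrigin(x):
    """Classic multimodal test function: global minimum 0 at the origin,
    surrounded by a regular lattice of local minima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

evals = {"n": 0}
def counted(f):
    """Wrap f so the number of function evaluations can be compared."""
    def g(x):
        evals["n"] += 1
        return f(x)
    return g

f = counted(rastrigin)
rng = random.Random(0)
# Naive random-search baseline: 5000 evaluations over the usual domain.
best = min(f([rng.uniform(-5.12, 5.12) for _ in range(2)])
           for _ in range(5000))
```

    Any global optimizer compared this way is judged by how small a value it reaches per evaluation spent, which is exactly how the study ranks Sniffer against Simulated Annealing.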

  3. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than {ε }-({dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  4. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Science.gov (United States)

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
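
    At the heart of the Pareto-optimal search described above is the dominance test over the two objectives (MRE, MCE). A minimal sketch over hypothetical objective pairs (the values are illustrative, not results from the paper):

```python
def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2) pairs, both minimized.
    A point is dominated if another point is no worse in both objectives."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical (MRE, MCE) pairs for five candidate auto-encoders:
candidates = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.25),
              (0.30, 0.05), (0.12, 0.40)]
front = pareto_front(candidates)   # (0.12, 0.40) is dominated by (0.10, 0.30)
```

    A non-dominated sorting genetic algorithm repeatedly applies this test to rank a population into successive fronts, then evolves toward the first front.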

  5. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Science.gov (United States)

    Shimizu, Yoshiaki

    To support agile and rational decision making under diversified customer demands, the role of optimization engineering has attracted increasing attention. With this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method shows promise for globally solving the various complicated problems that appear in real-world applications. It evolved from the conventional Nelder and Mead Simplex method by borrowing ideas from recent meta-heuristic methods such as PSO. After describing an algorithm for handling linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparisons with other methods on several benchmark problems.
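
    The conventional Nelder and Mead Simplex method that the proposal starts from can be sketched as follows. This is a simplified variant (inside contraction only, textbook default coefficients), not the evolutionary extension the paper develops:

```python
def nelder_mead(f, x0, step=0.5, iters=300):
    """Minimal Nelder-Mead simplex: reflection, expansion,
    contraction toward the centroid, and shrink toward the best point."""
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0)
                             for j in range(n)] for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)                          # best first, worst last
        worst = simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(simplex[0]):                  # try expanding further
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):               # accept reflection
            simplex[-1] = refl
        else:                                        # contract toward centroid
            con = [(centroid[i] + worst[i]) / 2 for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:                                    # shrink toward best point
                best = simplex[0]
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

best = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

    The paper's evolutionary method replaces the single simplex with a population-level search inspired by PSO, which is what lifts the local method toward global optimization.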

  6. Global cardiovascular research output, citations, and collaborations: a time-trend, bibliometric analysis (1999-2008).

    Science.gov (United States)

    Huffman, Mark D; Baldridge, Abigail; Bloomfield, Gerald S; Colantonio, Lisandro D; Prabhakaran, Poornima; Ajay, Vamadevan S; Suh, Sarah; Lewison, Grant; Prabhakaran, Dorairaj

    2013-01-01

    Health research is one mechanism to improve population-level health and should generally match the health needs of populations. However, there have been limited data to assess the trends in national-level cardiovascular research output, even as cardiovascular disease [CVD] has become the leading cause of morbidity and mortality worldwide. We performed a time-trend analysis of cardiovascular research publications (1999-2008) downloaded from Web of Knowledge using an iteratively tested cardiovascular bibliometric filter with >90% precision and recall. We evaluated cardiovascular research publications, five-year running actual citation indices [ACIs], and the degree of international collaboration, measured through the ratio of the fractional count of addresses from one country against all addresses for each publication. Global cardiovascular publication volume increased from 40 661 publications in 1999 to 55 284 publications in 2008, which represents a 36% increase. The proportion of cardiovascular publications from high-income, Organization for Economic Cooperation and Development [OECD] countries declined from 93% to 84% of the total share over the study period. High-income, OECD countries generally had higher fractional counts, which suggest less international collaboration, than lower income countries from 1999-2008. There was an inverse relationship between cardiovascular publications and age-standardized CVD morbidity and mortality rates, but a direct, curvilinear relationship between cardiovascular publications and Human Development Index from 1999-2008. Cardiovascular health research output has increased substantially in the past decade, with a greater share of citations being published from low- and middle-income countries. However, low- and middle-income countries with the higher burdens of cardiovascular disease continue to have lower research output than high-income countries, and thus require targeted research investments to improve cardiovascular health.

  7. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: the probability distribution property and search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COAs. To achieve high efficiency, it is recommended to adopt a chaotic map that generates sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
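
    The role of the Lyapunov exponent described above can be illustrated numerically. The sketch below is illustrative only (not the authors' code): it estimates the Lyapunov exponent of the Logistic map by averaging log|f'(x)| along an orbit. For r = 4 the theoretical value is ln 2 ≈ 0.693; a larger exponent means nearby chaotic trajectories diverge faster, i.e., the chaotic sequence sweeps the search space more quickly.

```python
import math

def logistic(x, r=4.0):
    """One step of the Logistic map x -> r*x*(1 - x)."""
    return r * x * (1.0 - x)

def lyapunov_logistic(x0=0.1234, r=4.0, n=100_000, burn=1_000):
    """Estimate the Lyapunov exponent as the orbit average of log|f'(x)|,
    where f'(x) = r*(1 - 2x) for the Logistic map."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(n):
        d = abs(r * (1.0 - 2.0 * x))
        if d > 0.0:                  # guard against the measure-zero point x = 0.5
            total += math.log(d)
        x = logistic(x, r)
    return total / n

print(lyapunov_logistic())           # for r = 4 the theoretical value is ln 2
```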

  8. Modeling and Design of Capacitive Micromachined Ultrasonic Transducers Based on Database Optimization

    International Nuclear Information System (INIS)

    Chang, M W; Gwo, T J; Deng, T M; Chang, H C

    2006-01-01

    A Capacitive Micromachined Ultrasonic Transducer simulation database, based on electromechanical coupling theory, has been fully developed for versatile capacitive microtransducer design and analysis. Both arithmetic and graphic configurations are used to find optimal parameters based on serial coupling simulations. The key modeling parameters identified can effectively improve a microtransducer's characteristics and reliability. This method can reduce design time and fabrication cost by eliminating trial-and-error procedures. Various microtransducers with optimized characteristics can be developed economically using the developed database. A simulation to design an ultrasonic microtransducer is completed as a worked example. The dependent relationship between membrane geometry, vibration displacement, and output response is demonstrated. The electromechanical coupling effects, mechanical impedance, and frequency response are also taken into consideration for optimal microstructures. The microdevice parameters with the best output signal response are predicted, and microfabrication processing constraints and realities are also taken into consideration.

  9. A generalized adjoint framework for sensitivity and global error estimation in time-dependent nuclear reactor simulations

    International Nuclear Information System (INIS)

    Stripling, H.F.; Anitescu, M.; Adams, M.L.

    2013-01-01

    Highlights: ► We develop an abstract framework for computing the adjoint to the neutron/nuclide burnup equations posed as a system of differential algebraic equations. ► We validate use of the adjoint for computing both sensitivity to uncertain inputs and for estimating global time discretization error. ► Flexibility of the framework is leveraged to add heat transfer physics and compute its adjoint without a reformulation of the adjoint system. ► Such flexibility is crucial for high performance computing applications. -- Abstract: We develop a general framework for computing the adjoint variable for nuclear engineering problems governed by a set of differential–algebraic equations (DAEs). The nuclear engineering community has a rich history of developing and applying adjoints for sensitivity calculations; many such formulations, however, are specific to a certain set of equations, variables, or solution techniques. Any change or addition to the physics model would require a reformulation of the adjoint problem and entail substantial difficulties in its software implementation. In this work we propose an abstract framework that allows for the modification and expansion of the governing equations, leverages the existing theory of adjoint formulation for DAEs, and results in adjoint equations that can be used to efficiently compute sensitivities for parametric uncertainty quantification. Moreover, as we justify theoretically and demonstrate numerically, the same framework can be used to estimate global time discretization error. We first motivate the framework and show that the coupled Bateman and transport equations, which govern the time-dependent neutronic behavior of a nuclear reactor, may be formulated as a DAE system with a power constraint. We then use a variational approach to develop the parameter-dependent adjoint framework and apply existing theory to give formulations for sensitivity and global time discretization error estimates using the adjoint
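
    The adjoint idea can be shown on a toy problem (a simplified illustration, not the authors' DAE framework). The sketch below computes the sensitivity of a final-time functional J = y_N to a decay parameter k for the forward-Euler discretization of dy/dt = -k*y, using the discrete adjoint recursion, and checks it against a central finite difference.

```python
def forward(k, y0=1.0, h=0.01, n=100):
    """Forward Euler for dy/dt = -k*y; returns the whole trajectory."""
    ys = [y0]
    for _ in range(n):
        ys.append(ys[-1] * (1.0 - k * h))
    return ys

def adjoint_gradient(k, y0=1.0, h=0.01, n=100):
    """dJ/dk for J = y_N via the discrete adjoint of forward Euler.
    Adjoint recursion: lam_i = lam_{i+1} * (1 - k*h), with lam_N = 1;
    the gradient accumulates lam_{i+1} * d(step)/dk = lam_{i+1} * (-h * y_i)."""
    ys = forward(k, y0, h, n)
    lam, grad = 1.0, 0.0
    for i in range(n - 1, -1, -1):   # sweep backwards in time
        grad += lam * (-h * ys[i])
        lam *= (1.0 - k * h)
    return grad

def fd_gradient(k, eps=1e-6, **kw):
    """Central finite-difference check on dJ/dk."""
    return (forward(k + eps, **kw)[-1] - forward(k - eps, **kw)[-1]) / (2.0 * eps)
```

One backward sweep yields the sensitivity at the cost of roughly one extra forward solve, which is the practical appeal of adjoint methods when there are many parameters.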

  10. A Comparative Study on Recently-Introduced Nature-Based Global Optimization Methods in Complex Mechanical System Design

    Directory of Open Access Journals (Sweden)

    Abdulbaset El Hadi Saad

    2017-10-01

    Full Text Available Advanced global optimization algorithms have been continuously introduced and improved to solve various complex design optimization problems for which the objective and constraint functions can only be evaluated through computation-intensive numerical analyses or simulations with a large number of design variables. The often implicit, multimodal, and ill-shaped objective and constraint functions in high-dimensional and “black-box” forms demand that the search be carried out using a low number of function evaluations with high search efficiency and good robustness. This work investigates the performance of six recently introduced, nature-inspired global optimization methods: Artificial Bee Colony (ABC), Firefly Algorithm (FFA), Cuckoo Search (CS), Bat Algorithm (BA), Flower Pollination Algorithm (FPA), and Grey Wolf Optimizer (GWO). These approaches are compared in terms of search efficiency and robustness in solving a set of representative benchmark problems in smooth-unimodal, non-smooth unimodal, smooth multimodal, and non-smooth multimodal function forms. In addition, four classic engineering optimization examples and a real-life complex mechanical system design optimization problem, floating offshore wind turbine design optimization, are used as additional test cases representing computationally expensive black-box global optimization problems. Results from this comparative study show that the ability of these global optimization methods to obtain a good solution diminishes as the dimension of the problem, or number of design variables, increases. Although none of these methods is universally capable, the study finds that GWO and ABC are more efficient on average than the other four in obtaining high-quality solutions efficiently and consistently, solving 86% and 80% of the tested benchmark problems, respectively. The research contributes to future improvements of global optimization methods.

  11. A non-linear branch and cut method for solving discrete minimum compliance problems to global optimality

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Bendsøe, Martin P.

    2007-01-01

    This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single-load minimum compliance topology design problem with a fixed finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities...

  12. Globalization and changing trends of biomedical research output

    Science.gov (United States)

    Conte, Marisa L.; Liu, Jing; Omary, M. Bishr

    2017-01-01

    The US continues to lead the world in research and development (R&D) expenditures, but there is concern that stagnation in federal support for biomedical research in the US could undermine the leading role the US has played in biomedical and clinical research discoveries. As a readout of research output in the US compared with other countries, assessment of original research articles published by US-based authors in ten clinical and basic science journals during 2000 to 2015 showed a steady decline of articles in high-ranking journals or no significant change in mid-ranking journals. In contrast, publication output originating from China-based investigators, in both high- and mid-ranking journals, has steadily increased commensurate with significant growth in R&D expenditures. These observations support the current concerns of stagnant and year-to-year uncertainty in US federal funding of biomedical research. PMID:28614799

  13. Population Structures in Russia: Optimality and Dependence on Parameters of Global Evolution

    Directory of Open Access Journals (Sweden)

    Yuri Yegorov

    2016-07-01

    Full Text Available The paper is devoted to an analytical investigation of the division of geographical space into urban and rural areas, with application to Russia. Yegorov (2005, 2006, 2009) has suggested a role of population density in economics. A city has an attractive potential based on scale economies. The optimal city size depends on the balance between its attractive potential and the cost of living, which can be approximated by equilibrium land rent and commuting cost. For moderate scale effects, the optimal population of a city depends negatively on transport costs, which are positively related to the energy price index. An optimal agricultural density of population can also be constructed. The larger the land slot per peasant, the higher the output from one unit of labour applied to it. At the same time, a larger farm size increases the energy costs related to land development, collecting the crop, and bringing it to market. In the last 10 years we have observed a substantial rise in both food and energy prices on the world stock markets. However, farmers' incomes did not grow as fast as the food price index. This can shift the optimal rural population density to a lower level, causing migration to cities (and we observe this tendency globally). Any change in those prices makes existing spatial structures suboptimal. If changes are slow, the optimal infrastructure can be adjusted by simple migration. If the shocks are large, adaptation may be impossible and the shock will persist. This took place in the early 1990s in the former USSR, where, after the transition to the world price for oil in domestic markets, the existing spatial infrastructure became suboptimal, resulting in a persistent crisis and the deterioration of both industry and agriculture. Russia is the largest country, but this is also its problem. Having a large resource endowment per capita, it is problematic to build sufficient infrastructure. Russia has too low population

  14. Cloud Particles Differential Evolution Algorithm: A Novel Optimization Method for Global Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Wei Li

    2015-01-01

    Full Text Available We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states in the proposed algorithm. The gaseous state represents global exploration. The liquid state represents the intermediate process from global exploration to local exploitation. The solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore by a condensation operation. In the liquid state, cloud particles carry out macrolocal exploitation by a liquefaction operation. A new mutation strategy called cloud differential mutation is introduced to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out microlocal exploitation by a solidification operation. The effectiveness of the algorithm is validated on different benchmark problems. The results have been compared with eight well-known optimization algorithms. The statistical analysis of the performance of the different algorithms on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.

  15. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and consume huge amounts of energy. Many effective methods exist for improving the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulation rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We propose a theoretical global optimization method based on in-depth physical analysis of the relevant processes: heat transfer analysis for the condenser and evaporator, introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for such systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional-extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are obtained analytically. Finally, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are demonstrated. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  16. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for the correction of aberrations, or the determination of possible operating ranges for beamlines and cells in synchrotrons, exhibits multitudes of possibilities for correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, an abundance of optimization runs are frequently carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners of adjusting nonlinear parameters to achieve correction of high-order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been a common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and in using the underestimators to rigorously and iteratively eliminate regions that lie above already-known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
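
    The branch-and-bound idea can be sketched in one dimension (a minimal illustration, not the rigorous Differential Algebraic machinery described above). Interval arithmetic gives a rigorous lower bound for f(x) = (x² - 1)² on each box, sampled points give an upper bound, and any box whose lower bound exceeds the best known upper bound is eliminated:

```python
def isqr(lo, hi):
    """Interval extension of x -> x**2."""
    if lo >= 0.0:
        return (lo * lo, hi * hi)
    if hi <= 0.0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def f(x):
    return (x * x - 1.0) ** 2               # global minimum 0 at x = +/- 1

def F(lo, hi):
    """Natural interval extension of f on [lo, hi]: rigorous bounds on its range."""
    s_lo, s_hi = isqr(lo, hi)               # bounds on x^2
    return isqr(s_lo - 1.0, s_hi - 1.0)     # bounds on (x^2 - 1)^2

def branch_and_bound(lo, hi, tol=1e-6):
    """Return (lb, ub) rigorously enclosing the global minimum of f on [lo, hi]."""
    best_ub = f(0.5 * (lo + hi))
    work, final = [(lo, hi)], []
    while work:
        a, b = work.pop()
        if F(a, b)[0] > best_ub:
            continue                         # box provably cannot hold the minimum
        best_ub = min(best_ub, f(a), f(b), f(0.5 * (a + b)))
        if b - a > tol:
            m = 0.5 * (a + b)
            work.extend([(a, m), (m, b)])    # branch
        else:
            final.append((a, b))
    lb = min(F(a, b)[0] for (a, b) in final)
    return lb, best_ub
```

Because the box containing the true minimizer can never be pruned, the returned pair always brackets the global minimum; this is the elimination mechanism the abstract refers to, with interval extensions playing the role of the rigorous underestimators.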

  17. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested on two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near-optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

  18. Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits

    Directory of Open Access Journals (Sweden)

    Kajsa Ljungberg

    2010-10-01

    Full Text Available Kajsa Ljungberg (1), Kateryna Mishchenko (2), Sverker Holmgren (1). (1) Division of Scientific Computing, Department of Information Technology, Uppsala University, Uppsala, Sweden; (2) Department of Mathematics and Physics, Mälardalen University College, Västerås, Sweden. Abstract: We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so-called QTL, and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. Keywords: global optimization, QTL mapping, DIRECT
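
    The two-phase structure (global exploration followed by fast local convergence) can be sketched generically. This is a simplified illustration, not the authors' DIRECT-based method: uniform random sampling stands in for the global phase, and a derivative-free pattern search stands in for the local phase, applied here to the multimodal Rastrigin function.

```python
import math
import random

def rastrigin(x):
    """Multimodal benchmark: global minimum 0 at the origin, many local optima."""
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x)

def global_phase(f, bounds, n=2000, rng=None):
    """Phase 1: crude global exploration by uniform random sampling."""
    rng = rng or random.Random(0)
    best_x, best_v = None, float("inf")
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

def local_phase(f, x, step=0.5, tol=1e-6):
    """Phase 2: derivative-free pattern search; the objective never increases."""
    x, v = list(x), f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                tv = f(trial)
                if tv < v:
                    x, v, improved = trial, tv, True
        if not improved:
            step *= 0.5                     # shrink the stencil and retry
    return x, v
```

The global phase picks a promising basin; the local phase then converges quickly within it, which is the division of labor DIRECT-plus-local-search exploits.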

  19. A global optimization algorithm inspired in the behavior of selfish herds.

    Science.gov (United States)

    Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián

    2017-10-01

    In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either a prey or a predator, each individual is conducted by a set of unique evolutionary operators inspired by such prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions, commonly considered within the literature of evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against those of the other compared methods, and as such SHO is proven to be an excellent alternative to solve global optimization problems. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Output feedback control of a quadrotor UAV using neural networks.

    Science.gov (United States)

    Dierks, Travis; Jagannathan, Sarangapani

    2010-01-01

    In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of the proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to demonstrate the theoretical conjecture.

  1. Minimizing Symbol Error Rate for Cognitive Relaying with Opportunistic Access

    KAUST Repository

    Zafar, Ammar

    2012-12-29

    In this paper, we present an optimal resource allocation (ORA) scheme for an all-participate (AP) cognitive relay network that minimizes the symbol error rate (SER). The SER is derived, and different constraints on the system are considered. We consider three cases: both individual and global power constraints, individual constraints only, and global constraints only. Numerical results show that the ORA scheme outperforms the schemes with the direct link only and with uniform power allocation (UPA) in terms of minimizing the SER for all three constraint cases. Numerical results also show that the individual-constraints-only case provides the best performance at large signal-to-noise ratio (SNR).
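
    The flavor of such an optimization can be made concrete with a toy sketch (an illustration using a made-up Gaussian-channel SER model, not the paper's derivation): a sum of per-link symbol error rates is minimized under a global power constraint by greedily moving small chunks of power between links, so the total power is preserved by construction.

```python
import math

def total_ser(powers, gains):
    """Illustrative per-link SER model: Q(sqrt(g*p)), with Q(x) = erfc(x/sqrt(2))/2."""
    return sum(0.5 * math.erfc(math.sqrt(g * p) / math.sqrt(2.0))
               for g, p in zip(gains, powers))

def allocate_power(gains, total, delta_frac=1e-3, min_frac=1e-8):
    """Greedy exchange: start from uniform allocation and move power in chunks of
    size delta between links whenever that lowers the total SER; the global
    constraint sum(p) == total is preserved by every exchange."""
    n = len(gains)
    p = [total / n] * n
    delta = total * delta_frac
    while delta > total * min_frac:
        improved = False
        base = total_ser(p, gains)
        for i in range(n):
            for j in range(n):
                if i == j or p[j] < delta:
                    continue
                p[i] += delta                    # tentative exchange j -> i
                p[j] -= delta
                v = total_ser(p, gains)
                if v < base - 1e-15:
                    base, improved = v, True     # keep the exchange
                else:
                    p[i] -= delta                # undo it
                    p[j] += delta
        if not improved:
            delta *= 0.5                         # refine the exchange size
    return p
```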

  2. Memetic Algorithms to Solve a Global Nonlinear Optimization Problem. A Review

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov

    2015-01-01

    Full Text Available In recent decades, evolutionary algorithms have proven themselves to be powerful search-based optimization techniques. Their popularity is due to the fact that they are easy to implement and can be used in many areas, since they are based on the idea of universal evolution. In problems with a large number of local optima, for example, traditional optimization methods usually fail to find the global optimum. Such problems are solved using a variety of stochastic methods, in particular the so-called population-based algorithms, which are a kind of evolutionary method. The main disadvantage of this class of methods is their slow convergence to the exact solution in the neighborhood of the global optimum, since these methods are incapable of using local information about the landscape of the function. This often limits their use in large-scale real-world problems where computation time is a critical factor. One of the promising directions in the field of modern evolutionary computation is memetic algorithms, which can be regarded as a combination of a population-based search for the global optimum and local procedures for refining solutions, which gives a synergistic effect. In the context of memetic algorithms, a meme is an implementation of a local optimization method used to refine solutions during the search. The concept of memetic algorithms provides ample opportunities for developing various modifications of these algorithms, which can vary the frequency of the local search, the conditions of its termination, and so on. Practically significant memetic algorithm modifications involve the simultaneous use of different memes; such algorithms are called multi-memetic. The paper states the global nonlinear unconstrained optimization problem and describes the most promising modifications, including hybridization and meta-optimization. The main content of the work is the classification and review of existing varieties of
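
    The combination described above can be sketched minimally (an illustration of the memetic idea, not any specific algorithm from the review): a small evolutionary population performs the global search, and a "meme" — here a derivative-free pattern search — refines candidate solutions locally.

```python
import random

def sphere(x):
    """Simple convex test function with minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def local_meme(f, x, step=0.5, tol=1e-9):
    """Meme: derivative-free pattern search that refines a candidate solution."""
    x, v = list(x), f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                tv = f(trial)
                if tv < v:
                    x, v, improved = trial, tv, True
        if not improved:
            step *= 0.5
    return x, v

def memetic_search(f, dim=3, pop_size=20, gens=30, rng=None):
    """Population-based global search plus a local meme applied to the elite."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[0]
        children = []
        for _ in range(pop_size - 1):
            a, b = rng.sample(pop[:pop_size // 2], 2)     # parents from better half
            child = [(ai + bi) / 2.0 + rng.gauss(0.0, 0.3) for ai, bi in zip(a, b)]
            children.append(child)
        pop = [elite] + children                          # elitism
    best, best_v = local_meme(f, min(pop, key=f))         # meme refines the best
    return best, best_v
```

The population supplies the global exploration; the meme supplies the fast final convergence that pure population methods lack, which is exactly the synergy the review describes.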

  3. Study of the Switching Errors in an RSFQ Switch by Using a Computerized Test Setup

    International Nuclear Information System (INIS)

    Kim, Se Hoon; Baek, Seung Hun; Yang, Jung Kuk; Kim, Jun Ho; Kang, Joon Hee

    2005-01-01

    The problem of fluctuation-induced digital errors in rapid single flux quantum (RSFQ) circuits has been a very important issue. In this work, we calculated the bit error rate of an RSFQ switch used in a superconductive arithmetic logic unit (ALU). An RSFQ switch should have a very low error rate at the optimal bias. Theoretical estimates of the RSFQ error rate are on the order of 10^-50 per bit operation. In this experiment, we prepared two identical circuits placed in parallel. Each circuit was composed of 10 Josephson transmission lines (JTLs) connected in series, with an RSFQ switch placed in the middle of the 10 JTLs. We used a splitter to feed the same input signal to both circuits. The outputs of the two circuits were compared with an RSFQ exclusive OR (XOR) to measure the bit error rate of the RSFQ switch. Using a computerized bit-error-rate test setup, we measured a bit error rate of 2.18 x 10^-12 when the bias to the RSFQ switch was 0.398 mA, which is quite far from the optimum bias of 0.6 mA.

  4. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs), and testing strategies of safety components in a nuclear power plant are prescribed in the plant technical specifications. In general, it is required that a standby safety system be redundant (i.e., composed of multiple components) and that these components be tested under either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is the optimized human error domain, obtained from a 3-D human error sensitivity analysis by selecting finely classified segments. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
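
    The trade-off behind STI optimization can be illustrated with the classic average-unavailability model (a simplified textbook form, not the paper's linear human-error model): U(T) ≈ λT/2 + τ/T + q, where λ is the standby failure rate, τ the effective per-test downtime (into which a test-induced human-error contribution can be folded), and q a per-demand term. Setting dU/dT = 0 gives the analytic optimum T* = sqrt(2τ/λ), which a numerical search reproduces. All parameter values below are assumed for illustration.

```python
import math

FAILURE_RATE = 1e-4   # standby failure rate lambda (per hour), assumed
TEST_TIME = 2.0       # effective per-test downtime incl. human error (hours), assumed
PER_DEMAND = 1e-3     # per-demand human-error unavailability term, assumed

def unavailability(T):
    """Average unavailability as a function of the surveillance test interval T."""
    return 0.5 * FAILURE_RATE * T + TEST_TIME / T + PER_DEMAND

def optimal_interval():
    """Analytic optimum from dU/dT = 0: T* = sqrt(2*tau/lambda)."""
    return math.sqrt(2.0 * TEST_TIME / FAILURE_RATE)

def numeric_optimum(lo=10.0, hi=2000.0, steps=200_000):
    """Brute-force grid search over T, for comparison with the analytic result."""
    best_t, best_u = lo, unavailability(lo)
    for k in range(1, steps + 1):
        t = lo + (hi - lo) * k / steps
        u = unavailability(t)
        if u < best_u:
            best_t, best_u = t, u
    return best_t
```

Testing too often accumulates test downtime (the τ/T term falls but λT/2 no longer dominates); testing too rarely leaves undetected failures; the optimum balances the two, and human error shifts it by inflating the effective τ and q.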

  5. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
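
    The variance-based step can be sketched generically. The snippet below (an illustration, not the interferometry model) estimates first-order Sobol' indices by a Saltelli-style Monte Carlo scheme for the linear test function f(x1, x2) = 2·x1 + x2 with independent uniform(0, 1) inputs, where the exact indices are S1 = 0.8 and S2 = 0.2.

```python
import random

def model(x1, x2):
    """Linear test model: exact Sobol' indices are S1 = 0.8, S2 = 0.2."""
    return 2.0 * x1 + x2

def sobol_first_order(n=50_000, rng=None):
    """Saltelli-style Monte Carlo estimate of first-order Sobol' indices."""
    rng = rng or random.Random(42)
    A = [(rng.random(), rng.random()) for _ in range(n)]   # sample matrix A
    B = [(rng.random(), rng.random()) for _ in range(n)]   # sample matrix B
    fA = [model(*a) for a in A]
    fB = [model(*b) for b in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / (n - 1)
    indices = []
    for i in range(2):
        # AB_i: matrix A with its i-th column replaced by B's i-th column
        fABi = [model(*(b[k] if k == i else a[k] for k in range(2)))
                for a, b in zip(A, B)]
        v_i = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / n
        indices.append(v_i / var)
    return indices
```

A first-order index near 1 means the output variance is dominated by that input alone, which is how the abstract's conclusion (e.g., first-order error dominated by frequency non-orthogonality) is read off from the indices.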

  6. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
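The variance-based global sensitivity step can be illustrated with a minimal Monte Carlo estimator of first-order Sobol' indices (a Saltelli-style estimator). The toy model and sample size below are assumptions for illustration, not the interferometer error model itself:

```python
import random

def sobol_first_order(model, n_params, n_samples, seed=0):
    # Saltelli-style Monte Carlo estimator of first-order Sobol'
    # indices for a model with independent U(0,1) inputs.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples
    indices = []
    for i in range(n_params):
        # A_B^i: rows of A with column i replaced by the column from B.
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA))
        indices.append(s_i / (n_samples * var))
    return indices

# Toy model standing in for the periodic-error model: the first input
# dominates, so S_1 should be near 1 / (1 + 0.2**2) ~ 0.96.
s = sobol_first_order(lambda x: x[0] + 0.2 * x[1], 2, 20000)
print(s)
```

For the linear toy model the exact first-order indices are known, which makes the estimator easy to sanity-check before applying it to an expensive model.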

  7. A non-linear branch and cut method for solving discrete minimum compliance problems to global optimality

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Bendsøe, Martin P.

    2007-01-01

    This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities) and cuts.

  8. Theoretical properties of the global optimizer of two layer neural network

    OpenAIRE

    Boob, Digvijay; Lan, Guanghui

    2017-01-01

    In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves "almost" all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. ...

  9. Solving global optimization problems on GPU cluster

    Energy Technology Data Exchange (ETDEWEB)

    Barkalov, Konstantin; Gergel, Victor; Lebedev, Ilya [Lobachevsky State University of Nizhni Novgorod, Gagarin Avenue 23, 603950 Nizhni Novgorod (Russian Federation)

    2016-06-08

    The paper contains the results of an investigation of a parallel global optimization algorithm combined with a dimension reduction scheme. This allows solving multidimensional problems by means of reduction to data-independent subproblems of smaller dimension solved in parallel. The new element implemented in the research consists in using several graphics accelerators at different computing nodes. The paper also includes results of solving problems from the well-known multiextremal GKLS test class on the Lobachevsky supercomputer using tens of thousands of GPU cores.

  10. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs for advanced error correcting techniques.

  11. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    Science.gov (United States)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and for making sound agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial that is fast to evaluate, it can be evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method achieves a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
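The surrogate-then-optimize workflow can be sketched in miniature as follows. The piecewise-linear surrogate and the plain particle swarm are simplified stand-ins for the paper's sparse-grid surrogate and QPSO, and the "expensive" objective is hypothetical:

```python
import random

def expensive_model(x):
    # Hypothetical stand-in for a costly RZWQM2 run.
    return (x - 0.6) ** 2

# Step 1: a small number of expensive runs to build the surrogate.
nodes = [i / 8 for i in range(9)]
values = [expensive_model(x) for x in nodes]

def surrogate(x):
    # Piecewise-linear interpolant; a simplified stand-in for the
    # sparse-grid polynomial surrogate used in the paper.
    x = min(max(x, 0.0), 1.0)
    i = min(int(x * 8), 7)
    t = x * 8 - i
    return values[i] * (1 - t) + values[i + 1] * t

# Step 2: optimize the cheap surrogate with a basic particle swarm
# (a simplification of the QPSO algorithm used in the paper).
rng = random.Random(1)
pos = [rng.random() for _ in range(20)]
vel = [0.0] * 20
pbest = pos[:]
gbest = min(pos, key=surrogate)
for _ in range(100):
    for j in range(20):
        vel[j] = (0.7 * vel[j]
                  + 1.5 * rng.random() * (pbest[j] - pos[j])
                  + 1.5 * rng.random() * (gbest - pos[j]))
        pos[j] += vel[j]
        if surrogate(pos[j]) < surrogate(pbest[j]):
            pbest[j] = pos[j]
        if surrogate(pos[j]) < surrogate(gbest):
            gbest = pos[j]
print(gbest)  # should land near the optimum at x = 0.6
```

Only 9 expensive evaluations are spent on the surrogate, while the swarm performs thousands of cheap surrogate evaluations, which is the efficiency argument made in the abstract.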

  12. Global solutions through simulation for better decommissioning

    International Nuclear Information System (INIS)

    Scoto Di Suoccio, Ines; Testard, Vincent

    2016-01-01

    Decommissioning is a new activity in the sense that only limited experience exists. Moreover, each facility is different due to its own history, and there is no fixed rule for choosing a decommissioning strategy. There are three major decommissioning strategies. First, 'immediate dismantling', in which decommissioning begins immediately after the transfer of waste and nuclear material. Second, 'deferred dismantling', in which the facility is maintained in a containment zone for thirty to one hundred years before being decommissioned. Finally, 'entombment', in which the facility is placed into a reinforced containment until the radionuclides decay to a level allowing site release. When a strategy is decided, many factors have to be taken into account. Within a major project such as a reactor decommissioning, there are many smaller projects, and the decommissioning strategy can differ among them. For various reasons, some input data are not perfectly known. For example, dosimetric activity may not have been updated over time or after specific events. Indeed, because of the uncertainties and/or hypotheses surrounding projects and their high level of interdependency, global solutions are a good way to choose the best decommissioning strategy. Each input datum has consequences on output results, whether costs, cumulated dose, waste or delays. These outputs are interdependent and cannot be taken apart from each other. Whether dose, delays or waste management, all have an impact on costs. To obtain an optimal scenario in a given environment, it is necessary to deal with all these items together. This global solution can be implemented through simulation in dedicated software, which helps to define the global strategy, to optimize the scenario, and to prevent contingencies. As a complete scenario simulation can be done quickly and efficiently, many strategies can

  13. An Algorithm for Global Optimization Inspired by Collective Animal Behavior

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2012-01-01

    Full Text Available A metaheuristic algorithm for global optimization called collective animal behavior (CAB) is introduced. Animal groups, such as schools of fish, flocks of birds, swarms of locusts, and herds of wildebeest, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These collective behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, to follow better migration routes, to improve their aerodynamics, and to avoid predation. In the proposed algorithm, the searcher agents emulate a group of animals which interact with each other based on the biological laws of collective motion. The proposed method has been compared to other well-known optimization algorithms. The results show good performance of the proposed method when searching for a global optimum of several benchmark functions.
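A minimal sketch of the collective-behavior idea follows: agents either move toward a remembered "dominant" position (attraction) or wander randomly (diversification), here applied to a 1-D Rastrigin benchmark. The parameter values are assumptions, and this is a heavy simplification of the full CAB algorithm:

```python
import math
import random

def cab_search(f, lo, hi, n_agents=30, n_iters=200, seed=2):
    # Collective-animal-behavior style search (simplified sketch):
    # a short memory of the best positions found so far acts as the
    # set of "dominant" individuals the group is attracted to.
    rng = random.Random(seed)
    agents = [rng.uniform(lo, hi) for _ in range(n_agents)]
    memory = sorted(agents, key=f)[:5]        # best positions so far
    for _ in range(n_iters):
        for i in range(n_agents):
            if rng.random() < 0.8:            # follow a dominant individual
                leader = rng.choice(memory)
                x = agents[i] + rng.uniform(-1.0, 1.0) * (leader - agents[i])
            else:                             # explore the habitat
                x = rng.uniform(lo, hi)
            agents[i] = min(max(x, lo), hi)
        memory = sorted(memory + agents, key=f)[:5]
    return memory[0]

# 1-D Rastrigin benchmark: global minimum f = 0 at x = 0,
# surrounded by many local minima.
rastrigin = lambda x: x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)
best = cab_search(rastrigin, -5.0, 5.0)
print(best)  # should be near the global minimum at x = 0
```

The 20% random-exploration moves are what let the sketch escape the local minima of the benchmark, mirroring the diversification behaviors described in the abstract.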

  14. Output Feedback Stabilization with Nonlinear Predictive Control: Asymptotic properties

    Directory of Open Access Journals (Sweden)

    Lars Imsland

    2003-07-01

    Full Text Available State space based nonlinear model predictive control (NMPC) needs the state for the prediction of the system behaviour. Unfortunately, for most applications, not all states are directly measurable. To recover the unmeasured states, typically a stable state observer is used. However, this implies that the stability of the closed loop should be examined carefully, since no general nonlinear separation principle exists. Recently, semi-global practical stability results for output feedback NMPC using a high-gain observer for state estimation have been established. One drawback of this result is that, in general, the observer gain must be increased if the desired set the state should converge to is made smaller. We show that under slightly stronger assumptions, not only practical stability, but also convergence of the system states and observer error to the origin for a sufficiently large but bounded observer gain can be achieved.

  15. A global conformance quality model. A new strategic tool for minimizing defects caused by variation, error, and complexity

    Energy Technology Data Exchange (ETDEWEB)

    Hinckley, C. Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    1994-01-01

    The performance of Japanese products in the marketplace points to the dominant role of quality in product competition. Our focus is motivated by the tremendous pressure to improve conformance quality by reducing defects to previously unimaginable limits in the range of 1 to 10 parts per million. Toward this end, we have developed a new model of conformance quality that addresses each of the three principal defect sources: (1) Variation, (2) Human Error, and (3) Complexity. Although the role of variation in conformance quality is well documented, errors occur so infrequently that their significance is not well known. We have shown that statistical methods are not useful in characterizing and controlling errors, the most common source of defects. Excessive complexity is also a root source of defects, since it increases both errors and variation defects. A missing link in defining a global model has been the lack of a sound correlation between complexity and defects. We have used Design for Assembly (DFA) methods to quantify assembly complexity and have shown that assembly times can be described in terms of the Pareto distribution, in a clear exception to the Central Limit Theorem. Within individual companies we have found defects to be highly correlated with DFA measures of complexity in broad studies covering tens of millions of assembly operations. Applying the global concepts, we predicted that Motorola's Six Sigma method would only reduce defects by roughly a factor of two rather than by orders of magnitude, a prediction confirmed by Motorola's data. We have also shown that the potential defect rates of product concepts can be compared in the earliest stages of development. The global Conformance Quality Model has demonstrated that the best strategy for improvement depends upon the quality control strengths and weaknesses.

  16. Towards systematic evaluation of crop model outputs for global land-use models

    Science.gov (United States)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, however at the cost of large environmental degradation. Global integrated models combining high resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land-use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values and variability across space are weakly constrained: increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land use GEM to inform on present-day crop yield, water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data is generated and how it articulates with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: i) in space, ii) across crop species, and iii) across management intensities. We will present and discuss measures of these expected performances and weight the relative contribution of crop model, input data and data processing steps in performances. We will also compare obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include

  17. Optimizing radiology peer review: a mathematical model for selecting future cases based on prior errors.

    Science.gov (United States)

    Sheu, Yun Robert; Feder, Elie; Balsim, Igor; Levin, Victor F; Bleicher, Andrew G; Branstetter, Barton F

    2010-06-01

    Peer review is an essential process for physicians because it facilitates improved quality of patient care and continuing physician learning and improvement. However, peer review often is not well received by radiologists, who note that it is time intensive, is subjective, and lacks a demonstrable impact on patient care. Current advances in peer review include the RADPEER system, with its standardization of discrepancies and incorporation of the peer-review process into the PACS itself. The purpose of this study was to build on RADPEER and similar systems by using a mathematical model to optimally select the types of cases to be reviewed, for each radiologist undergoing review, on the basis of the past frequency of interpretive error, the likelihood of morbidity from an error, the financial cost of an error, and the time required for the reviewing radiologist to interpret the study. The investigators compiled 612,890 preliminary radiology reports authored by residents and attending radiologists at a large tertiary care medical center from 1999 to 2004. Discrepancies between preliminary and final interpretations were classified by severity and validated by repeat review of major discrepancies. A mathematical model was then used to calculate, for each author of a preliminary report, the combined morbidity and financial costs of expected errors across 3 modalities (MRI, CT, and conventional radiography) and 4 departmental divisions (neuroradiology, abdominal imaging, musculoskeletal imaging, and thoracic imaging). A customized report was generated for each on-call radiologist that determined the category (modality and body part) with the highest total cost function. A universal total cost based on probability data from all radiologists was also compiled. The use of mathematical models to guide case selection could optimize the efficiency and effectiveness of physician time spent on peer review and produce more concrete and meaningful feedback to radiologists.
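The case-selection idea can be sketched as an expected-cost ranking over categories. The cost form and every per-category statistic below are hypothetical illustrations, not the paper's fitted values:

```python
def expected_review_cost(error_rate, p_morbidity, morbidity_weight,
                         financial_cost, review_minutes):
    # Expected cost per reviewer-minute of leaving a category
    # unreviewed: error frequency times the combined morbidity and
    # financial consequence of an error, normalized by the time the
    # reviewing radiologist needs per case.  Illustrative form only.
    return (error_rate * (p_morbidity * morbidity_weight + financial_cost)
            / review_minutes)

# Hypothetical per-category statistics for one radiologist
# (modality, division): error rate, morbidity probability,
# morbidity weight, financial cost, minutes per review.
categories = {
    ("MRI", "neuro"):    expected_review_cost(0.020, 0.30, 5000, 800, 12),
    ("CT", "abdomen"):   expected_review_cost(0.015, 0.10, 5000, 400, 8),
    ("XR", "thoracic"):  expected_review_cost(0.008, 0.05, 5000, 150, 3),
}

# Select the category with the highest expected cost for peer review.
print(max(categories, key=categories.get))
```

Ranking by this quantity concentrates scarce reviewer time where the expected payoff (avoided morbidity and cost) per minute is largest, which is the optimization the abstract describes.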

  18. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
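At its core, statistical optimization blends a background profile with observations weighted by their error covariances. The scalar (uncorrelated-error) sketch below conveys the weighting; the paper's algorithm additionally estimates the full correlation matrices, which are omitted here, and all profile values are made up:

```python
def statistically_optimize(background, observed, sigma_b, sigma_o):
    # Scalar (uncorrelated-error) form of statistical optimization:
    # each optimized bending angle is the inverse-variance weighted
    # mean of the background value and the observation.
    optimized = []
    for xb, y, sb, so in zip(background, observed, sigma_b, sigma_o):
        wb, wo = 1.0 / sb ** 2, 1.0 / so ** 2
        optimized.append((wb * xb + wo * y) / (wb + wo))
    return optimized

# Toy 3-level profile: the observation dominates where its error is
# small (low altitude), the background dominates where the
# observation error is large (high altitude).
bg, obs = [1.00, 0.50, 0.10], [1.20, 0.55, 0.40]
out = statistically_optimize(bg, obs, [0.50, 0.50, 0.05], [0.05, 0.10, 0.50])
print(out)
```

Geographically varying uncertainty profiles, as in the new algorithm, amount to making sigma_b and sigma_o functions of location and altitude rather than fixed lists.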

  19. Construction of a Mean Square Error Adaptive Euler–Maruyama Method With Applications in Multilevel Monte Carlo

    KAUST Repository

    Hoel, Hakon

    2016-06-13

    A formal mean square error expansion (MSE) is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^-2 log(TOL)^4) that is achieved when the cost of sample generation is O(1).
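The uniform-step Euler-Maruyama discretization that the adaptive algorithm refines can be sketched as follows; the geometric Brownian motion drift/diffusion and all numeric parameters are chosen for illustration only:

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, T, n_steps, rng):
    # Uniform-step Euler-Maruyama discretization of
    # dX = drift(X) dt + diffusion(X) dW.  The paper's adaptive
    # scheme instead refines the step size pathwise where its MSE
    # expansion indicates large error; the step here is uniform.
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += drift(x) * dt + diffusion(x) * dw
    return x

# Geometric Brownian motion dX = 0.05 X dt + 0.2 X dW, X0 = 1:
# E[X_T] = exp(0.05 T), estimated here by plain (single-level)
# Monte Carlo rather than MLMC.
rng = random.Random(0)
n_paths = 5000
mean = sum(euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x,
                          1.0, 1.0, 50, rng)
           for _ in range(n_paths)) / n_paths
print(mean)  # close to exp(0.05) ~ 1.051
```

An MLMC estimator would combine such simulations across a hierarchy of step sizes, and the adaptive variant would additionally place small steps only where the pathwise error expansion demands them.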

  20. Transiently chaotic neural networks with piecewise linear output functions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.-S. [Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan (China); Shih, C.-W. [Department of Applied Mathematics, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan (China)], E-mail: cwshih@math.nctu.edu.tw

    2009-01-30

    Admitting both a transient chaotic phase and a convergent phase, the transiently chaotic neural network (TCNN) provides superior performance to classical networks in solving combinatorial optimization problems. We derive concrete parameter conditions for these two essential dynamic phases of the TCNN with piecewise linear output function. The confirmation of chaotic dynamics of the system results from a successful application of the Marotto theorem, which was recently clarified. Numerical simulation on applying the TCNN with piecewise linear output function is carried out to find the optimal solution of a travelling salesman problem. It is demonstrated that the performance is even better than that of the previous TCNN model with logistic output function.

  1. A global carbon assimilation system based on a dual optimization method

    Science.gov (United States)

    Zheng, H.; Li, Y.; Chen, J. M.; Wang, T.; Huang, Q.; Huang, W. X.; Wang, L. H.; Li, S. M.; Yuan, W. P.; Zheng, X.; Zhang, S. P.; Chen, Z. Q.; Jiang, F.

    2015-02-01

    Ecological models are effective tools for simulating the distribution of global carbon sources and sinks. However, these models often suffer from substantial biases due to inaccurate simulations of complex ecological processes. We introduce a set of scaling factors (parameters) to an ecological model on the basis of plant functional type (PFT) and latitudes. A global carbon assimilation system (GCAS-DOM) is developed by employing a dual optimization method (DOM) to invert the time-dependent ecological model parameter state and the net carbon flux state simultaneously. We use GCAS-DOM to estimate the global distribution of the CO2 flux on 1° × 1° grid cells for the period from 2001 to 2007. Results show that land and ocean absorb -3.63 ± 0.50 and -1.82 ± 0.16 Pg C yr-1, respectively. North America, Europe and China contribute -0.98 ± 0.15, -0.42 ± 0.08 and -0.20 ± 0.29 Pg C yr-1, respectively. The uncertainties in the flux after optimization by GCAS-DOM have been remarkably reduced by more than 60%. Through parameter optimization, GCAS-DOM can provide improved estimates of the carbon flux for each PFT. Coniferous forest (-0.97 ± 0.27 Pg C yr-1) is the largest contributor to the global carbon sink. Fluxes of once-dominant deciduous forest generated by the Boreal Ecosystems Productivity Simulator (BEPS) are reduced to -0.78 ± 0.23 Pg C yr-1, the third largest carbon sink.

  2. A kind of balance between exploitation and exploration on kriging for global optimization of expensive functions

    International Nuclear Information System (INIS)

    Dong, Huachao; Song, Baowei; Wang, Peng; Huang, Shuai

    2015-01-01

    In this paper, a novel kriging-based algorithm for global optimization of computationally expensive black-box functions is presented. This algorithm utilizes a multi-start approach to find all of the local optimal values of the surrogate model and performs searches within the neighboring area around these local optimal positions. Compared with traditional surrogate-based global optimization methods, this algorithm provides another kind of balance between exploitation and exploration on the kriging-based model. In addition, a new search strategy is proposed and coupled into this optimization process. The local search strategy employs a kind of improved 'Minimizing the predictor' method, which dynamically adjusts search direction and radius until it finds the optimal value. Furthermore, the global search strategy utilizes the advantage of the kriging-based model in predicting unexplored regions to guarantee the reliability of the algorithm. Finally, experiments on 13 test functions with six algorithms are set up and the results show that the proposed algorithm is very promising.

  3. Uncertainties in predicting solar panel power output

    Science.gov (United States)

    Anspaugh, B.

    1974-01-01

    The problem of calculating solar panel power output at launch and during a space mission is considered. The major sources of uncertainty and error in predicting the post launch electrical performance of the panel are considered. A general discussion of error analysis is given. Examples of uncertainty calculations are included. A general method of calculating the effect on the panel of various degrading environments is presented, with references supplied for specific methods. A technique for sizing a solar panel for a required mission power profile is developed.

  4. Optimized implementations of rational approximations for the Voigt and complex error function

    International Nuclear Information System (INIS)

    Schreier, Franz

    2011-01-01

    Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued functions. For the complex error function w(x+iy), whose real part is the Voigt function K(x,y), code optimizations of rational approximations are investigated. An assessment of requirements for atmospheric radiative transfer modeling indicates a y range over many orders of magnitude and accuracy better than 10^-4. Following a brief survey of complex error function algorithms in general and rational function approximations in particular, the problems associated with subdivisions of the x, y plane (i.e., conditional branches in the code) are discussed and practical aspects of Fortran and Python implementations are considered. Benchmark tests of a variety of algorithms demonstrate that programming language, compiler choice, and implementation details influence computational speed and there is no unique ranking of algorithms. A new implementation, based on subdivision of the upper half-plane into only two regions, combining Weideman's rational approximation for |x|+y < 15 and Humlicek's rational approximation otherwise, is shown to be efficient and accurate for all x, y.
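A brute-force quadrature of the defining integral is useful as a reference against which a rational approximation's 10^-4 accuracy target can be checked. This is only a validation aid (far too slow for radiative transfer), not one of the optimized implementations the paper discusses:

```python
import math

def voigt_k(x, y, n=4000, t_max=12.0):
    # Brute-force trapezoidal evaluation of the Voigt function
    # K(x, y) = (y / pi) * Integral exp(-t^2) / ((x - t)^2 + y^2) dt
    # over t in (-inf, inf), truncated at |t| = t_max.  For smooth,
    # fast-decaying integrands the trapezoidal rule is extremely
    # accurate, so this serves as a numerical reference.
    h = 2.0 * t_max / n
    total = 0.0
    for i in range(n + 1):
        t = -t_max + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-t * t) / ((x - t) ** 2 + y * y)
    return y / math.pi * total * h

# K(0, 1) has the closed form exp(1) * erfc(1) ~ 0.42758.
print(voigt_k(0.0, 1.0))
```

Comparing a candidate rational approximation against such a reference over a grid of (x, y) is one way to verify the 10^-4 accuracy requirement quoted in the abstract.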

  5. Optimal design of link systems using successive zooming genetic algorithm

    Science.gov (United States)

    Kwon, Young-Doo; Sohn, Chang-hyun; Kwon, Soon-Bum; Lim, Jae-gyoo

    2009-07-01

    Link systems have been around for a long time and are still used to control motion in diverse applications such as automobiles, robots and industrial machinery. This study presents a procedure involving the use of a genetic algorithm for the optimal design of single four-bar link systems and a double four-bar link system used in a diesel engine. We adopted the Successive Zooming Genetic Algorithm (SZGA), which has one of the most rapid convergence rates among global search algorithms. The results are verified by experiment and by the Recurdyn dynamic motion analysis package. During the optimal design of the single four-bar link systems, we found in the case of identical input/output (IO) angles that the initial and final configurations show a certain symmetry. For the double link system, we introduced weighting factors for the multi-objective functions, which minimize the difference between the output angles, providing balanced engine performance, as well as the difference between the final output angle and its desired magnitude. We adopted a graphical method to select a proper ratio between the weighting factors.
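The successive-zooming idea can be sketched as repeated shrink-and-resample around the incumbent best point. A real SZGA runs a full genetic algorithm inside each zoom window; plain random sampling is used here for brevity, and the objective is a hypothetical stand-in for a link-system cost:

```python
import random

def successive_zooming_search(f, lo, hi, n_zoom=6, n_samples=200,
                              shrink=0.5, seed=3):
    # Sketch of successive zooming: sample the current interval,
    # keep the best point, then shrink the search window around it
    # and repeat.  Each zoom level halves the window, so the search
    # converges rapidly once the right basin is found.
    rng = random.Random(seed)
    best = None
    for _ in range(n_zoom):
        xs = [rng.uniform(lo, hi) for _ in range(n_samples)]
        if best is not None:
            xs.append(best)              # never lose the incumbent
        best = min(xs, key=f)
        half = (hi - lo) * shrink / 2.0
        lo, hi = best - half, best + half
    return best

# Hypothetical quadratic objective with its optimum at x = 1.25,
# standing in for a link-system design cost.
best = successive_zooming_search(lambda x: (x - 1.25) ** 2, 0.0, 4.0)
print(best)
```

The geometric shrinkage of the window is what gives zooming schemes their rapid convergence, at the price of possibly locking onto a local basin if the early samples miss the global one.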

  6. The U.S. Navy's Global Wind-Wave Models: An Investigation into Sources of Errors in Low-Frequency Energy Predictions

    National Research Council Canada - National Science Library

    Rogers, W

    2002-01-01

    This report describes an investigation to determine the relative importance of various sources of error in the two global-scale models of wind-generated surface waves used operationally by the U.S. Navy...

  7. The Selective Impairment of the Phonological Output Buffer: Evidence From a Chinese Patient

    Directory of Open Access Journals (Sweden)

    Hua Shu

    2005-01-01

    Full Text Available We present a Chinese-speaking patient, SJ, who makes phonological errors across all tasks involving oral production. Detailed analyses of the errors across different tasks reveal that the patterns are very similar for reading, oral picture naming, and repetition tasks, which are also comparable to the error patterns of the phonological buffer deficit cases reported in the literature. The nature of the errors invites us to conclude that the patient's phonological output buffer is selectively impaired. Different from previously reported cases, SJ's deficits in oral production tasks are not accompanied by a similar impairment of writing performance. We argue that this dissociation is evidence that the phonological output buffer is not involved in writing Chinese words. Furthermore, the majority of SJ's errors occur at the onset of a syllable, indicating that the buffer has a structure that makes the onset more prone to impairment.

  8. Optimization Settings in the Fuzzy Combined Mamdani PID Controller

    Science.gov (United States)

    Kudinov, Y. I.; Pashchenko, F. F.; Pashchenko, A. F.; Kelina, A. Y.; Kolesnikov, V. A.

    2017-11-01

    In the present work, the problem of determining the optimal settings of a fuzzy parallel proportional-integral-derivative (PID) controller is considered for the control of nonlinear plants, which is not always possible with classical linear PID controllers. In contrast to linear PID controllers, there are no analytical methods for calculating the settings of fuzzy PID controllers. In this paper, we develop a numerical optimization approach to determining the coefficients of a fuzzy PID controller. A decomposition method of optimization is proposed, the essence of which is as follows. All homogeneous coefficients are distributed into groups, for example, the three error coefficients, the three error-change coefficients, and the three output coefficients of the P, I and D components. For each such group in turn, a search algorithm determines the coefficients for which the transient response satisfies all applicable constraints. Thus, with the help of Matlab and Simulink, the coefficients of a fuzzy PID controller that meet the accepted limitations on the transient response were found in a reasonable time.
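The decomposition idea (tuning one group of coefficients at a time) can be sketched with a linear PID on a toy first-order plant. The plant, the grids, and the integral-squared-error cost are all assumptions, and the linear PID stands in for the fuzzy controller of the paper:

```python
def step_response_cost(kp, ki, kd, n=200, dt=0.05):
    # Integral-squared-error of the unit-step response of a toy
    # first-order plant dy/dt = -y + u under a discrete PID law.
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += (-y + u) * dt                 # explicit Euler step
        cost += e * e * dt
    return cost

def decomposed_search(gains, grids, sweeps=3):
    # Decomposition optimization: tune one group of coefficients at
    # a time over a small grid, holding the other groups fixed, and
    # sweep over the groups a few times.
    gains = list(gains)
    for _ in range(sweeps):
        for i, grid in enumerate(grids):
            best_g, best_c = gains[i], step_response_cost(*gains)
            for g in grid:
                gains[i] = g
                c = step_response_cost(*gains)
                if c < best_c:
                    best_g, best_c = g, c
            gains[i] = best_g
    return gains

tuned = decomposed_search([1.0, 0.1, 0.0],
                          [[0.5, 1.0, 2.0, 4.0],    # proportional gains
                           [0.1, 0.5, 1.0, 2.0],    # integral gains
                           [0.0, 0.05, 0.1, 0.2]])  # derivative gains
print(tuned, step_response_cost(*tuned))
```

Because each group is only ever replaced by a value that lowers the cost, the result is never worse than the starting gains; the trade-off is that coordinate-wise search can miss jointly optimal combinations, which is why several sweeps are performed.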

  9. Enhancement of the REMix energy system model. Global renewable energy potentials, optimized power plant siting and scenario validation

    Energy Technology Data Exchange (ETDEWEB)

    Stetter, Daniel

    2014-04-10

As electricity generation based on volatile renewable resources is subject to fluctuations, data with high temporal and spatial resolution on their availability is indispensable for integrating large shares of renewable capacities into energy infrastructures. The scope of the present doctoral thesis is to enhance the existing energy modelling environment REMix in terms of (i.) extending the geographic coverage of the potential assessment tool REMix-EnDaT from a European to a global scale, (ii.) adding a new plant siting optimization module REMix-PlaSMo, capable of assessing siting effects of renewable power plants on the portfolio output and (iii.) adding a new alternating current power transmission model between 30 European countries and CSP electricity imports from power plants located in North Africa and the Middle East via high voltage direct current links into the module REMix-OptiMo. With respect to the global potential assessment tool, a thorough investigation is carried out creating an hourly global inventory of the theoretical potentials of the major renewable resources solar irradiance, wind speed and river discharge at a spatial resolution of 0.45°×0.45°. A detailed global land use analysis determines eligible sites for the installation of renewable power plants. Detailed power plant models for PV, CSP, wind and hydro power allow for the assessment of power output, cost per kWh and respective full load hours taking into account the theoretical potentials, technological as well as economic data. The so-obtained tool REMix-EnDaT can be used as follows: First, as an assessment tool for arbitrary geographic locations, countries or world regions, deriving either site-specific or aggregated installable capacities, cost as well as full load hour potentials. Second, as a tool providing input data such as installable capacities and hourly renewable electricity generation for further assessments using the modules REMix-PlaSMo and REMix-OptiMo. The plant siting tool
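The full-load-hour and capacity-factor quantities used in such potential assessments reduce to a simple ratio of annual energy to rated power; a sketch on a synthetic hourly profile (not REMix data):

```python
import math

# Full-load hours = annual energy / rated power; capacity factor = FLH / 8760.
# The hourly profile below is synthetic (a crude diurnal, solar-like shape),
# standing in for the resource time series a tool like REMix-EnDaT would use.
rated_mw = 3.0
hourly_mw = [rated_mw * max(0.0, math.sin(math.pi * (h % 24) / 24)) ** 3
             for h in range(8760)]

annual_mwh = sum(hourly_mw)                      # 1-hour time steps
full_load_hours = annual_mwh / rated_mw          # hours/year at rated power
capacity_factor = full_load_hours / 8760.0
```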

  10. Enhancement of the REMix energy system model. Global renewable energy potentials, optimized power plant siting and scenario validation

    International Nuclear Information System (INIS)

    Stetter, Daniel

    2014-01-01

As electricity generation based on volatile renewable resources is subject to fluctuations, data with high temporal and spatial resolution on their availability is indispensable for integrating large shares of renewable capacities into energy infrastructures. The scope of the present doctoral thesis is to enhance the existing energy modelling environment REMix in terms of (i.) extending the geographic coverage of the potential assessment tool REMix-EnDaT from a European to a global scale, (ii.) adding a new plant siting optimization module REMix-PlaSMo, capable of assessing siting effects of renewable power plants on the portfolio output and (iii.) adding a new alternating current power transmission model between 30 European countries and CSP electricity imports from power plants located in North Africa and the Middle East via high voltage direct current links into the module REMix-OptiMo. With respect to the global potential assessment tool, a thorough investigation is carried out creating an hourly global inventory of the theoretical potentials of the major renewable resources solar irradiance, wind speed and river discharge at a spatial resolution of 0.45°×0.45°. A detailed global land use analysis determines eligible sites for the installation of renewable power plants. Detailed power plant models for PV, CSP, wind and hydro power allow for the assessment of power output, cost per kWh and respective full load hours taking into account the theoretical potentials, technological as well as economic data. The so-obtained tool REMix-EnDaT can be used as follows: First, as an assessment tool for arbitrary geographic locations, countries or world regions, deriving either site-specific or aggregated installable capacities, cost as well as full load hour potentials. Second, as a tool providing input data such as installable capacities and hourly renewable electricity generation for further assessments using the modules REMix-PlaSMo and REMix-OptiMo. The plant siting tool

  11. Disaggregate energy consumption and industrial output in the United States

    International Nuclear Information System (INIS)

    Ewing, Bradley T.; Sari, Ramazan; Soytas, Ugur

    2007-01-01

    This paper investigates the effect of disaggregate energy consumption on industrial output in the United States. Most of the related research utilizes aggregate data which may not indicate the relative strength or explanatory power of various energy inputs on output. We use monthly data and employ the generalized variance decomposition approach to assess the relative impacts of energy and employment on real output. Our results suggest that unexpected shocks to coal, natural gas and fossil fuel energy sources have the highest impacts on the variation of output, while several renewable sources exhibit considerable explanatory power as well. However, none of the energy sources explain more of the forecast error variance of industrial output than employment

  12. A note on a fatal error of optimized LFC private information retrieval scheme and its corrected results

    DEFF Research Database (Denmark)

    Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane

    2010-01-01

A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log² n) PIR (LFCPIR) to O(log n). However, in this paper, we point out a fatal overflow error contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as that of the previous LFCPIR.

  13. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimate of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
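The GCV criterion for choosing the regularization parameter can be sketched for ordinary Tikhonov least squares; the matrix sizes and noise level below are arbitrary, and this is a generic stand-in, not the authors' lensing code:

```python
import numpy as np

# Synthetic linear inverse problem: b = A x + noise.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.1 * rng.normal(size=50)

def influence(lam):
    # H(lam) = A (A^T A + lam I)^-1 A^T maps data to fitted values.
    return A @ np.linalg.solve(A.T @ A + lam * np.eye(20), A.T)

def gcv(lam):
    # GCV(lam) = n * ||(I - H) b||^2 / trace(I - H)^2
    H = influence(lam)
    resid = b - H @ b
    return 50 * (resid @ resid) / (50 - np.trace(H)) ** 2

lams = np.logspace(-4, 2, 60)
lam_best = lams[np.argmin([gcv(l) for l in lams])]

# trace(H) plays the role of the effective number of degrees of freedom
# in the regularized solution (cf. the source DOF estimate in the abstract).
dof_source = np.trace(influence(lam_best))
```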

  14. Error handling for the CDF Silicon Vertex Tracker

    CERN Document Server

    Belforte, S; Dell'Orso, Mauro; Donati, S; Galeotti, S; Giannetti, P; Morsani, F; Punzi, G; Ristori, L; Spinella, F; Zanetti, A M

    2000-01-01

The SVT online tracker for the CDF upgrade reconstructs two-dimensional tracks using information from the Silicon Vertex detector (SVXII) and the Central Outer Tracker (COT). The SVT has an event rate of 100 kHz and a latency time of 10 μs. The system is composed of 104 VME 9U digital boards (of 8 different types) and is implemented as a data-driven architecture. Each board runs on its own 30 MHz clock. Since the data output from the SVT (a few Mbytes/sec) is a small fraction of the input data (200 Mbytes/sec), it is extremely difficult to track possible internal errors by using only the output stream. For this reason several diagnostic tools have been implemented: local error registers, error bits propagated through the data streams, and the Spy Buffer system. Data flowing through each input and output stream of every board are continuously copied to memory banks named Spy Buffers, which act as built-in logic state analyzers hooked continuously to internal data streams. The contents of all buffers can be ...

  15. B-spline goal-oriented error estimators for geometrically nonlinear rods

    Science.gov (United States)

    2011-04-01

Goal-oriented error estimates are reported for the output functionals q2–q4 (linear and nonlinear, involving the trigonometric functions sine and cosine) and for B-spline degrees p = 1, 2 in all the tests considered.

  16. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    Science.gov (United States)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
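The expected-improvement criterion that drives the sampling can be written directly from the surrogate's Gaussian prediction (mean and standard deviation at a candidate point); a minimal sketch in the minimization convention:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for a Gaussian surrogate prediction N(mu, sigma^2) against the
    current best observed value f_best (minimization)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)        # deterministic prediction
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    # exploitation term + exploration term
    return (f_best - mu) * cdf + sigma * pdf
```

EI grows with predictive uncertainty, which is what pushes the optimizer to sample where the (here, hybrid) surrogate is least trusted as well as where it predicts good values.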

  17. Optimizing estimates of annual variations and trends in geocenter motion and J2 from a combination of GRACE data and geophysical models

    Science.gov (United States)

    Sun, Yu; Riva, Riccardo; Ditmar, Pavel

    2016-11-01

The focus of the study is optimizing the technique for estimating geocenter motion and variations in J2 by combining data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission with output from an Ocean Bottom Pressure model and a Glacial Isostatic Adjustment (GIA) model. First, we conduct an end-to-end numerical simulation study. We generate input time-variable gravity field observations by perturbing a synthetic Earth model with realistically simulated errors. We show that it is important to avoid large errors at short wavelengths and signal leakage from land to ocean, as well as to account for self-attraction and loading effects. Second, the optimal implementation strategy is applied to real GRACE data. We show that the estimates of annual amplitude in geocenter motion are in line with estimates from other techniques, such as satellite laser ranging (SLR) and global GPS inversion. At the same time, annual amplitudes of C10 and C11 are increased by about 50% and 20%, respectively, compared to estimates based on Swenson et al. (2008). Estimates of J2 variations are about 15% larger than SLR results in terms of annual amplitude. Linear trend estimates are dependent on the adopted GIA model but still comparable to some SLR results.

  18. The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-01-01

    Full Text Available There is a growing interest in finding a global optimal path in transportation networks particularly when the network suffers from unexpected disturbance. This paper studies the problem of finding a global optimal path to guarantee a given probability of arriving on time in a network with uncertainty, in which the travel time is stochastic instead of deterministic. Traditional path finding methods based on least expected travel time cannot capture the network user’s risk-taking behaviors in path finding. To overcome such limitation, the reliable path finding algorithms have been proposed but the convergence of global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into Backtracking method to propose a new path finding algorithm under uncertainty. The global optimum of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
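A concrete illustration of why the most reliable path can differ from the least-expected-time path: assume independent, normally distributed link travel times (a common simplification; the paper's travel-time model and its K-shortest-path/backtracking search are more general). The candidate paths below are hypothetical:

```python
import math

def on_time_probability(path_links, budget):
    """P(total travel time <= budget) for a path whose links have independent
    normal travel times given as (mean, variance) pairs."""
    mean = sum(m for m, v in path_links)
    var = sum(v for m, v in path_links)
    z = (budget - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Hypothetical candidates, e.g. from a K-shortest-path enumeration:
paths = {
    "A": [(10, 1.0), (10, 1.0)],    # mean 20, low variance
    "B": [(9, 16.0), (9, 16.0)],    # mean 18, high variance
}
budget = 22
best = max(paths, key=lambda p: on_time_probability(paths[p], budget))
```

Path B has the lower expected travel time, yet path A is the reliable choice for this budget, which is exactly the risk-taking behavior that least-expected-time methods miss.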

  19. Dual Schroedinger Equation as Global Optimization Algorithm

    International Nuclear Information System (INIS)

    Huang Xiaofei; eGain Communications, Mountain View, CA 94043

    2011-01-01

The dual Schroedinger equation is defined by replacing the imaginary number i with -1 in the original one. This paper shows that the dual equation shares the same stationary states as the original. Unlike the original, it explicitly defines a dynamic process by which a system evolves from any state to lower energy states and eventually to the lowest one. Its power as a global optimization algorithm might be used by nature for constructing atoms and molecules. It would be interesting to verify its existence in nature.
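The dual dynamics amount to imaginary-time-style evolution: repeatedly stepping psi' = -H psi (in suitable units) and renormalizing drives any initial state with nonzero overlap toward the lowest-energy eigenvector. A toy 3-state Hamiltonian, not a physical model:

```python
import numpy as np

# Toy Hamiltonian (tridiagonal); its eigenvalues are 2 - sqrt(2), 2, 2 + sqrt(2).
H = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

psi = np.ones(3) / np.sqrt(3)       # arbitrary normalized start state
step = np.eye(3) - 0.05 * H         # first-order Euler step of psi' = -H psi
for _ in range(2000):
    psi = step @ psi
    psi /= np.linalg.norm(psi)      # renormalize each step

energy = psi @ H @ psi              # converges to the lowest eigenvalue
```

Each step damps high-energy components faster than low-energy ones, which is why the iteration settles into the ground state rather than any other stationary state.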

  20. Output Feedback M-MRAC Backstepping With Aerospace Applications

    Science.gov (United States)

    Stepanyan, Vahram; Krishnakumar, Kalmanje Sriniva

    2014-01-01

    The paper presents a certainty equivalence output feedback backstepping adaptive control design method for the systems of any relative degree with unmatched uncertainties without over-parametrization. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The approach is applied to aerospace control problems and tested in numerical simulations.

  1. Optimizing Cardiac Out-Put to Increase Cerebral Penumbral Perfusion in Large Middle Cerebral Artery Ischemic Lesion—OPTIMAL Study

    Directory of Open Access Journals (Sweden)

    Hannah Fuhrer

    2017-08-01

Full Text Available Introduction: In unsuccessful vessel recanalization, the clinical outcome of acute stroke patients depends on early improvement of penumbral perfusion. So far, mean arterial blood pressure (MAP) is the target hemodynamic parameter. However, the correlations of MAP to cardiac output (CO) and cerebral perfusion are volume-state dependent. In severe subarachnoid hemorrhage, optimizing CO leads to a reduction of delayed ischemic neurological deficits and improvement of clinical outcome. This study aims to investigate the effect of standard versus advanced cardiac monitoring with optimization of CO on the clinical outcome in patients with large ischemic stroke. Methods and analysis: The OPTIMAL study is a prospective, multicenter, open, randomized (1:1), controlled trial with two arms. Sample size estimate: sample sizes of 150 for each treatment group (300 in total) ensure an 80% power to detect a difference of 16% in a dichotomized level of functional clinical outcome at 3 months at a significance level of 0.05. Study outcomes: the primary endpoint is the functional outcome at 3 months. The secondary endpoints include functional outcome at 6 months follow-up, and complications related to hemodynamic monitoring and therapies. Discussion: The results of this trial will provide data on the safety and efficacy of advanced hemodynamic monitoring on clinical outcome. Ethics and dissemination: The trial was approved by the leading ethics committee of Freiburg University, Germany (438/14, 2015) and the local ethics committees of the participating centers. The study is performed in accordance with the Declaration of Helsinki and the guidelines of Good Clinical Practice. It is registered in the German Clinical Trial register (DRKS; DRKS00007805). Dissemination will include submission to peer-reviewed professional journals and presentation at congresses. Hemodynamic monitoring may be altered in a specific stroke patient cohort if the study shows that advanced monitoring is

  2. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Collaborative Optimal Pricing and Day-Ahead and Intra-Day Integrative Dispatch of the Active Distribution Network with Multi-Type Active Loads

    Directory of Open Access Journals (Sweden)

    Chong Chen

    2018-04-01

Full Text Available In order to better handle the new features that emerge at both ends of supply and demand, new measures are constantly being introduced, such as demand-side management (DSM) and prediction of uncertain output and load. However, the existing DSM strategies, like real-time price (RTP), and dispatch methods are optimized separately, and response models of active loads, such as the interruptible load (IL), are still imperfect, which makes it difficult for the active distribution network (ADN) to achieve globally optimal operation. Therefore, to better manage active loads, the response characteristics of IL for cluster users, including both the response time and the responsibility and compensation model, as well as the real-time demand response model for price-based load, were analyzed and established. Then, a collaborative optimization strategy of RTP and optimal dispatch of the ADN was proposed, which can realize economical operation based on a mutual-benefit, win-win mode of the supply and demand sides. Finally, the day-ahead and intra-day integrative dispatch model using prediction data at different time scales was established, which can achieve longer-term optimization while reducing the impact of prediction errors on the dispatch results. With numerical simulations, the effectiveness and superiority of the proposed strategy were verified.

  4. Direct output feedback control of discrete-time systems

    International Nuclear Information System (INIS)

    Lin, C.C.; Chung, L.L.; Lu, K.H.

    1993-01-01

An optimal direct output feedback control algorithm is developed for discrete-time systems with consideration of the time delay in control force action. Optimal constant output feedback gains are obtained through a variational process such that a certain prescribed quadratic performance index is minimized. Discrete-time control forces are then calculated by multiplying the output measurements by these pre-calculated feedback gains. According to the proposed algorithm, the structural system is assured to remain stable even in the presence of time delay. The number of sensors and controllers may be very small compared with the dimension of the states. Numerical results show that direct velocity feedback control is more sensitive to time delay than state feedback but is still quite effective in reducing the dynamic responses under earthquake excitation. (author)
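The structure of the scheme, a constant gain applied to output measurements with the control force acting only after a delay, can be sketched on a hypothetical scalar plant. The paper obtains the gain variationally; here it is simply picked small enough that the delayed closed loop stays stable:

```python
# Scalar plant x_{k+1} = A x_k + B u_{k-delay}, output y = C x,
# direct output feedback u_k = -G y_k (all values illustrative).
A, B, C = 0.95, 1.0, 1.0
G, delay = 0.3, 1

x = 1.0                      # initial state
u_buf = [0.0] * delay        # delay line for the control force
history = []
for k in range(200):
    y = C * x
    u_buf.append(-G * y)     # control computed from the current output...
    u = u_buf.pop(0)         # ...but applied only after `delay` steps
    x = A * x + B * u
    history.append(abs(x))
```

The closed loop here is x_{k+1} = 0.95 x_k - 0.3 x_{k-1}, whose characteristic roots lie inside the unit circle, so the response decays despite the delay.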

  5. Optimality of Multichannel Myopic Sensing in the Presence of Sensing Error for Opportunistic Spectrum Access

    Directory of Open Access Journals (Sweden)

    Xiaofeng Jiang

    2013-01-01

    Full Text Available The optimization problem for the performance of opportunistic spectrum access is considered in this study. A user, with the limited sensing capacity, has opportunistic access to a communication system with multiple channels. The user can only choose several channels to sense and decides whether to access these channels based on the sensing information in each time slot. Meanwhile, the presence of sensing error is considered. A reward is obtained when the user accesses a channel. The objective is to maximize the expected (discounted or average reward accrued over an infinite horizon. This problem can be formulated as a partially observable Markov decision process. This study shows the optimality of the simple and robust myopic policy which focuses on maximizing the immediate reward. The results show that the myopic policy is optimal in the case of practical interest.
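The myopic policy and the sensing-error belief update can be sketched for a two-state (idle/busy) channel model; the transition probabilities, error rates, and beliefs below are illustrative:

```python
def bayes_posterior(b, obs_idle, p_fa=0.1, p_miss=0.05):
    """Update P(channel idle) from an error-prone sensing result.
    Detector model: P(report idle | idle) = 1 - p_fa (false alarm = declaring
    an idle channel busy), P(report idle | busy) = p_miss."""
    if obs_idle:
        num, den = b * (1 - p_fa), b * (1 - p_fa) + (1 - b) * p_miss
    else:
        num, den = b * p_fa, b * p_fa + (1 - b) * (1 - p_miss)
    return num / den

def propagate(b, p11=0.8, p01=0.2):
    # Markov channel transition: P(idle->idle) = p11, P(busy->idle) = p01.
    return b * p11 + (1 - b) * p01

beliefs = [0.5, 0.9, 0.3]                 # current P(idle) per channel
chosen = max(range(len(beliefs)), key=beliefs.__getitem__)   # myopic choice
# Suppose the sensed channel is reported idle this slot:
post = [bayes_posterior(b, True) if i == chosen else b
        for i, b in enumerate(beliefs)]
beliefs = [propagate(b) for b in post]    # advance all beliefs one slot
```

The myopic rule simply senses the channel(s) with the highest current belief, which is the policy whose optimality the paper establishes under sensing error.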

  6. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
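The effect of unequal protection can be illustrated with an idealized MDS erasure code per layer: a layer with k data packets and r parity packets decodes if and only if at most r of its k + r packets are lost. The packet counts and loss rate below are illustrative, not the paper's allocation:

```python
import math

def decode_prob(k, r, p):
    """P(layer decodable) for k data + r parity packets under independent
    packet loss with probability p, assuming an ideal MDS erasure code."""
    n = k + r
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(r + 1))

p = 0.1
base = decode_prob(8, 4, p)   # heavily protected base layer
enh = decode_prob(8, 1, p)    # lightly protected enhancement layer
```

Shifting parity budget toward the more important layer trades enhancement-layer reliability for near-certain base-layer decoding, which is the shaping idea behind the ER-UEP rate allocation.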

  7. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends, however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  8. An accurate system for onsite calibration of electronic transformers with digital output

    International Nuclear Information System (INIS)

    Zhi Zhang; Li Hongbin

    2012-01-01

Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors influencing the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.
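The "two-order Hanning convolution window" can be read as the Hann window convolved with itself, which steepens sidelobe decay and hence suppresses spectral leakage from frequency fluctuation and inter-harmonics; a sketch with an arbitrary window length:

```python
import numpy as np

# Symmetric Hann window of length N.
N = 64
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))

# Second-order Hanning convolution window: Hann convolved with itself.
# Length becomes 2N - 1; sidelobes fall off much faster than plain Hann.
win2 = np.convolve(hann, hann)
win2 /= win2.sum()            # normalize for unity DC gain
```

In a windowed-DFT amplitude/phase estimator, the faster sidelobe roll-off is what makes the differential correction less sensitive to off-grid (fluctuating) frequencies.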

  9. An accurate system for onsite calibration of electronic transformers with digital output.

    Science.gov (United States)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors influencing the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  10. An accurate system for onsite calibration of electronic transformers with digital output

    Energy Technology Data Exchange (ETDEWEB)

    Zhi Zhang; Li Hongbin [CEEE of HuaZhong University of Science and Technology, Wuhan 430074 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074 (China)

    2012-06-15

Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors influencing the accuracy of a calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify its effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  12. Optimizing the coupling of output of a quasi-optical gyrotron owing to a diffraction grating with ellipsoidal support

    International Nuclear Information System (INIS)

    Hogge, J.P.

    1993-12-01

The output scheme of a quasi-optical gyrotron has been optimized in order to produce a gaussian output microwave beam suitable for transmission over long distances. The technique applied consists of substituting for one of the mirrors of the Fabry-Perot resonator, in which the particle-wave interaction takes place, a diffraction grating placed in the -1 order Littrow mount and designed such that only orders -1 and 0 can propagate. In such a configuration, the diffraction angle of the order -1 coincides exactly with the incidence direction, thus providing feedback in the cavity, whereas the order 0 constitutes the output of the resonator. A theoretical study of the power content in each diffracted order of a planar grating of infinite extent with equally spaced linear grooves, as a function of the grating parameters, has been performed. It has been shown that parameter domains can be found which provide appropriate efficiencies in both orders for application on a quasi-optical gyrotron. The Littrow condition was then adapted in order to match the spherical wavefronts of a gaussian beam incident on a possibly non-planar surface. The grooves thus become curvilinear and are no longer equally spaced. Measurements made on a cold test stand have confirmed the validity of the Littrow condition extension and made it possible to determine its limits. It has also been shown that this type of cavity supports a mode having an optimal gaussian content and giving a minimal cavity transmission. The angular dispersion of the grating leads to a higher cavity transmission and to a slightly lower gaussian content for the adjacent resonator modes. The fundamental eigenmode electric field profile has been measured inside the cavity and is similar to that of an equivalent resonator made with two spherical mirrors. (author) figs., tabs., 141 refs
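The design constraint that only orders 0 and -1 propagate, with order -1 retro-diffracted, follows from the grating equation: in the -1 order Littrow mount, lambda = 2 d sin(theta), and an order m propagates only while |sin(theta) + m lambda / d| <= 1. A numerical check with illustrative parameters, not the gyrotron's actual grating:

```python
import math

# Illustrative values: a millimeter-wave beam at Littrow incidence.
wavelength = 3.0e-3             # m (f ~ 100 GHz)
theta = math.radians(25.0)      # incidence angle = -1 order diffraction angle

# Groove spacing fixed by the -1 order Littrow condition lambda = 2 d sin(theta).
d = wavelength / (2.0 * math.sin(theta))

def propagates(m):
    # Grating equation: sin(theta_m) = sin(theta) + m * lambda / d.
    s = math.sin(theta) + m * wavelength / d
    return abs(s) <= 1.0

orders = [m for m in range(-3, 4) if propagates(m)]
```

With sin(theta) > 1/3 the orders +1 and -2 become evanescent, leaving exactly the feedback order -1 and the output order 0, as the cavity design requires.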

  13. Truss topology optimization with discrete design variables — Guaranteed global optimality and benchmark examples

    DEFF Research Database (Denmark)

    Achtziger, Wolfgang; Stolpe, Mathias

    2007-01-01

    this problem is well-studied for continuous bar areas, we consider in this study the case of discrete areas. This problem is of major practical relevance if the truss must be built from pre-produced bars with given areas. As a special case, we consider the design problem for a single available bar area, i.e., a 0/1 problem. In contrast to the heuristic methods considered in many other approaches, our goal is to compute guaranteed globally optimal structures. This is done by a branch-and-bound method for which convergence can be proven. In this branch-and-bound framework, lower bounds of the optimal …-integer problems. The main intention of this paper is to provide optimal solutions for single and multiple load benchmark examples, which can be used for testing and validating other methods or heuristics for the treatment of this discrete topology design problem.

  14. Economic optimization of a global strategy to address the pandemic threat.

    Science.gov (United States)

    Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C; Daszak, Peter

    2014-12-30

    Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, like climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling of current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral "One Health" pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.07 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual.

  15. Global error minimization in image mosaicing using graph connectivity and its applications in microscopy

    Directory of Open Access Journals (Sweden)

    Parmeshwar Khurd

    2011-01-01

    Full Text Available Several applications such as multiprojector displays and microscopy require the mosaicing of images (tiles acquired by a camera as it traverses an unknown trajectory in 3D space. A homography relates the image coordinates of a point in each tile to those of a reference tile provided the 3D scene is planar. Our approach in such applications is to first perform pairwise alignment of the tiles that have imaged common regions in order to recover a homography relating the tile pair. We then find the global set of homographies relating each individual tile to a reference tile such that the homographies relating all tile pairs are kept as consistent as possible. Using these global homographies, one can generate a mosaic of the entire scene. We derive a general analytical solution for the global homographies by representing the pair-wise homographies on a connectivity graph. Our solution can accommodate imprecise prior information regarding the global homographies whenever such information is available. We also derive equations for the special case of translation estimation of an X-Y microscopy stage used in histology imaging and present examples of stitched microscopy slices of specimens obtained after radical prostatectomy or prostate biopsy. In addition, we demonstrate the superiority of our approach over tree-structured approaches for global error minimization.
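
    The translation-only special case described above (estimating the positions of an X-Y microscopy stage from pairwise tile offsets) reduces to a linear least-squares problem over the connectivity graph. The sketch below illustrates that idea along one axis, with made-up tile positions and noise levels; it is not the authors' implementation.

    ```python
    import numpy as np

    # Assumed true tile positions along one axis (tile 0 anchored at origin),
    # used only to generate noisy pairwise measurements for the demo.
    true_pos = np.array([0.0, 10.0, 19.5, 31.0])

    pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]  # connectivity graph edges
    rng = np.random.default_rng(0)
    measurements = [(i, j, true_pos[j] - true_pos[i] + rng.normal(0, 0.1))
                    for i, j in pairs]

    # Build the overdetermined linear system A p = b: one row per measured
    # pairwise offset, plus an anchor row fixing tile 0 at position 0 to
    # remove the global translation ambiguity.
    n = len(true_pos)
    A, b = [], []
    for i, j, dx in measurements:
        row = np.zeros(n)
        row[j], row[i] = 1.0, -1.0
        A.append(row)
        b.append(dx)
    anchor = np.zeros(n)
    anchor[0] = 1.0
    A.append(anchor)
    b.append(0.0)

    # Global least-squares solution: consistent positions for all tiles.
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    print(np.round(p, 2))
    ```

    The same construction generalizes to homographies by stacking the pairwise consistency constraints on a connectivity graph, as the paper describes.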

  16. Global optimization driven by genetic algorithms for disruption predictors based on APODIS architecture

    Energy Technology Data Exchange (ETDEWEB)

    Rattá, G.A., E-mail: giuseppe.ratta@ciemat.es [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Vega, J. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Murari, A. [Consorzio RFX, Associazione EURATOM/ENEA per la Fusione, Padua (Italy); Dormido-Canto, S. [Dpto. de Informática y Automática, Universidad Nacional de Educación a Distancia, Madrid (Spain); Moreno, R. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • A global optimization method based on genetic algorithms was developed. • It improves the prediction of disruptions using the APODIS architecture. • It also provides the potential opportunity to develop a spectrum of future predictors using different training datasets. • Future analysis of how their structures reassemble and evolve in each test may help to improve the development of disruption predictors for ITER. - Abstract: Since 2010, the APODIS architecture has proven its accuracy in predicting disruptions in the JET tokamak. Nevertheless, it has shown margin for improvement, as the enhanced performance achieved in subsequent upgrades demonstrates. In this article, a complete optimization driven by Genetic Algorithms (GA) is applied to it, aiming at considering all possible combinations of signals, signal features, quantity of models, their characteristics and internal parameters. This global optimization targets the creation of the best possible system with a reduced amount of required training data. The results leave no doubt about the reliability of the global optimization method, which outperforms previous versions: 91.77% of disruptions predicted (89.24% with an anticipation longer than 10 ms) with 3.55% false alarms. Beyond its effectiveness, it also provides the potential opportunity to develop a spectrum of future predictors using different training datasets.
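
    As a rough illustration of the kind of genetic-algorithm search driving such an optimization (this is a generic GA on a toy objective, not the APODIS optimizer; the operators and parameter values are illustrative choices):

    ```python
    import random

    # Toy objective: maximize f(x) = -(x - 3)^2, whose optimum is at x = 3.
    def fitness(x):
        return -(x - 3.0) ** 2

    def evolve(pop_size=40, generations=60, mut_sigma=0.3, seed=1):
        rng = random.Random(seed)
        pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Tournament selection: best of three random individuals.
            parents = [max(rng.sample(pop, 3), key=fitness)
                       for _ in range(pop_size)]
            # Blend crossover plus Gaussian mutation.
            pop = []
            for i in range(0, pop_size, 2):
                a, b = parents[i], parents[i + 1]
                w = rng.random()
                pop.append(w * a + (1 - w) * b + rng.gauss(0, mut_sigma))
                pop.append((1 - w) * a + w * b + rng.gauss(0, mut_sigma))
        return max(pop, key=fitness)

    best = evolve()
    print(round(best, 2))  # should land near the optimum x = 3
    ```

    A real predictor optimization would encode signal choices, feature sets and model parameters in the chromosome instead of a single real gene.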

  17. Global optimization driven by genetic algorithms for disruption predictors based on APODIS architecture

    International Nuclear Information System (INIS)

    Rattá, G.A.; Vega, J.; Murari, A.; Dormido-Canto, S.; Moreno, R.

    2016-01-01

    Highlights: • A global optimization method based on genetic algorithms was developed. • It improves the prediction of disruptions using the APODIS architecture. • It also provides the potential opportunity to develop a spectrum of future predictors using different training datasets. • Future analysis of how their structures reassemble and evolve in each test may help to improve the development of disruption predictors for ITER. - Abstract: Since 2010, the APODIS architecture has proven its accuracy in predicting disruptions in the JET tokamak. Nevertheless, it has shown margin for improvement, as the enhanced performance achieved in subsequent upgrades demonstrates. In this article, a complete optimization driven by Genetic Algorithms (GA) is applied to it, aiming at considering all possible combinations of signals, signal features, quantity of models, their characteristics and internal parameters. This global optimization targets the creation of the best possible system with a reduced amount of required training data. The results leave no doubt about the reliability of the global optimization method, which outperforms previous versions: 91.77% of disruptions predicted (89.24% with an anticipation longer than 10 ms) with 3.55% false alarms. Beyond its effectiveness, it also provides the potential opportunity to develop a spectrum of future predictors using different training datasets.

  18. Taxes, Tariffs, and The Global Corporation

    OpenAIRE

    James Levinsohn; Joel Slemrod

    1990-01-01

    In this paper we develop some simple models of optimal tax and tariff policy in the presence of global corporations that operate in an imperfectly competitive environment. The models emphasize two important differences in the practical application of tax and tariff policy - tax, but not tariff, policy can apply to offshore output and tariff, but not tax, policy can be industry-specific. Recognizing the multinationals' production decisions are endogenous to the tax and tariff policies they fac...

  19. Global optimization for quantum dynamics of few-fermion systems

    Science.gov (United States)

    Li, Xikun; Pecak, Daniel; Sowiński, Tomasz; Sherson, Jacob; Nielsen, Anne E. B.

    2018-03-01

    Quantum state preparation is vital to quantum computation and quantum information processing tasks. In adiabatic state preparation, the target state is theoretically obtained with nearly perfect fidelity if the control parameter is tuned slowly enough. As this, however, leads to slow dynamics, it is often desirable to be able to carry out processes more rapidly. In this work, we employ two global optimization methods to estimate the quantum speed limit for few-fermion systems confined in a one-dimensional harmonic trap. Such systems can be produced experimentally in a well-controlled manner. We determine the optimized control fields and achieve a reduction in the ramping time of more than a factor of four compared to linear ramping. We also investigate how robust the fidelity is to small variations of the control fields away from the optimized shapes.

  20. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    Science.gov (United States)

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when being applied to high-dimensional optimization problems, PSO algorithm may be trapped in the local optima owing to the lack of powerful global exploration capability, and fireworks algorithm is difficult to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, the abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and the modified explosion operator and the novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036
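
    A minimal sketch of the plain PSO velocity-and-position update that PS-FW builds on (the inertia and acceleration coefficients are common defaults, not the paper's settings; the fireworks operators are omitted):

    ```python
    import random

    # Test objective: the sphere function, with global minimum 0 at the origin.
    def sphere(x):
        return sum(v * v for v in x)

    def pso(dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=2):
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)]
               for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]                 # personal best positions
        gbest = min(pbest, key=sphere)[:]           # global best position
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    # Velocity update: inertia + cognitive + social terms.
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if sphere(pos[i]) < sphere(pbest[i]):
                    pbest[i] = pos[i][:]
                    if sphere(pbest[i]) < sphere(gbest):
                        gbest = pbest[i][:]
        return gbest

    best = pso()
    print(sphere(best))  # converges close to the global minimum 0
    ```

    PS-FW embeds fireworks-style explosion and mutation operators into this loop to escape the local optima that plain PSO can get trapped in.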

  1. Computational Intelligence Techniques Applied to the Day Ahead PV Output Power Forecast: PHANN, SNO and Mixed

    Directory of Open Access Journals (Sweden)

    Emanuele Ogliari

    2018-06-01

    Full Text Available An accurate forecast of the exploitable energy from Renewable Energy Sources is extremely important for the stability of the electric grid and the reliability of the bidding markets. This paper presents a comparison among different forecasting methods of the photovoltaic output power and introduces a new method that mixes some peculiarities of the others: the Physical Hybrid Artificial Neural Network and the five-parameter model estimated by Social Network Optimization. In particular, the day-ahead forecasts, evaluated against real data measured for two years in an existing photovoltaic plant located in Milan, Italy, are compared by means of both new and the most common error indicators. Results reported in this work show the best forecasting capability of the new “mixed method”, which scored the best forecast skill and Enveloped Mean Absolute Error on a yearly basis (47% and 24.67%, respectively).

  2. Joint source and relay optimization for interference MIMO relay networks

    Science.gov (United States)

    Khandaker, Muhammad R. A.; Wong, Kai-Kit

    2017-12-01

    This paper considers multiple-input multiple-output (MIMO) relay communication in multi-cellular (interference) systems in which MIMO source-destination pairs communicate simultaneously. It is assumed that, due to severe attenuation and/or shadowing effects, communication links can be established only with the aid of a relay node. The aim is to minimize the maximal mean-square-error (MSE) among all the receiving nodes under constrained source and relay transmit powers. Both one- and two-way amplify-and-forward (AF) relaying mechanisms are considered. Since the exact optimal solution for this practically appealing problem is intractable, we first propose optimizing the source, relay, and receiver matrices in an alternating fashion. Then we derive a simplified semidefinite programming (SDP) solution based on the error covariance matrix decomposition technique, avoiding the high complexity of the iterative process. Numerical results reveal the effectiveness of the proposed schemes.

  3. An Optimal Method for Developing Global Supply Chain Management System

    Directory of Open Access Journals (Sweden)

    Hao-Chun Lu

    2013-01-01

    Full Text Available Owing to the transparency of supply chains, enhancing the competitiveness of industries has become a vital factor, and many developing countries are looking for ways to save costs. From this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model, via flow-control constraints, for exploiting bonded warehouses to obtain maximal profits. Numerical experiments illustrate that the proposed model can be effectively solved to obtain the optimal profits in the global supply chain environment.

  4. Model Optimization Identification Method Based on Closed-loop Operation Data and Process Characteristics Parameters

    Directory of Open Access Journals (Sweden)

    Zhiqiang GENG

    2014-01-01

    Full Text Available Output noise is strongly related to input in a closed-loop control system, which makes closed-loop model identification difficult, and sometimes impossible, in practice. The forward channel model is chosen to isolate the disturbance from the output noise to the input, and is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated based on the PI/PID controller parameters and the closed-loop process input/output data. These characteristic parameters are then adopted to define the search space of the optimization identification algorithm. A PSO-SQP optimization algorithm is applied to integrate the global search ability of PSO with the local search ability of SQP to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation. Its practicability is checked with PI/PID controller parameter tuning based on the identified forward channel model.
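
    A toy sketch of estimating characteristic process parameters (gain, dead time, time constant) from step-response data, in the spirit of bounding the identification search space; the first-order-plus-dead-time process model, parameter values and noise level below are all assumptions for illustration:

    ```python
    import numpy as np

    # Assumed FOPDT process: gain K, time constant tau, dead time theta.
    K_true, tau_true, theta_true = 2.0, 5.0, 1.5
    t = np.linspace(0, 30, 301)
    rng = np.random.default_rng(3)
    y = np.where(t >= theta_true,
                 K_true * (1.0 - np.exp(-(t - theta_true) / tau_true)), 0.0)
    y = y + rng.normal(0, 0.01, t.size)   # measurement noise

    # Classic graphical FOPDT identification: gain from the steady state,
    # dead time from where the response first clearly departs from zero,
    # time constant from the 63.2% rise time.
    K_est = y[-50:].mean()                      # steady state (unit step input)
    i_dead = np.argmax(y > 0.05 * K_est)        # first clear departure from 0
    theta_est = t[i_dead]
    i63 = np.argmax(y >= 0.632 * K_est)         # 63.2% rise point
    tau_est = t[i63] - theta_est
    print(round(K_est, 2), round(tau_est, 1), round(theta_est, 1))
    ```

    Estimates like these can seed the bounds of a PSO-SQP search rather than serve as the final identified model.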

  5. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to perform a smoothing of the data because of the sensitivity of the deconvolution algorithms with respect to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value for two filters (linear and non-linear). In order to test the effectiveness of these optimal smoothing values, some parameters of the calculated RRF were considered using this optimal smoothing. The comparison of these parameters with the theoretical ones revealed a better result in the case of the linear filter than in the non-linear case. The study was carried out simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  6. Global structural optimizations of surface systems with a genetic algorithm

    International Nuclear Information System (INIS)

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, global structural optimizations of neutral aluminum clusters Al n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest energy structures of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems

  7. Ac-dc converter firing error detection

    International Nuclear Information System (INIS)

    Gould, O.L.

    1996-01-01

    Each of the twelve Booster Main Magnet Power Supply modules consists of two three-phase, full-wave rectifier bridges in series providing a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz signal more than 14 dB above the closest harmonic components at maximum output. The 720 Hz harmonic is typically more than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6, 12, or 24 pulse ac-dc converter allows the detection of SCR firing angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.
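
    One digital way to extract a single harmonic such as the 720 Hz component is a Goertzel filter. The sketch below is an illustrative alternative to the analog bandpass approach described above; the sampling rate, DC level and ripple amplitude are made-up values:

    ```python
    import math

    # Single-bin Goertzel filter: returns the approximate amplitude of the
    # target frequency component in the sample block.
    def goertzel(samples, sample_rate, target_freq):
        n = len(samples)
        k = round(n * target_freq / sample_rate)    # nearest DFT bin
        w = 2.0 * math.pi * k / n
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
        return math.sqrt(max(power, 0.0)) * 2.0 / n

    fs = 14400                                # 20 samples per 720 Hz cycle
    t = [i / fs for i in range(fs // 10)]     # 0.1 s block (72 full cycles)
    # Simulated converter output: 500 V DC plus a small 720 Hz ripple.
    signal = [500.0 + 3.0 * math.sin(2 * math.pi * 720 * ti) for ti in t]
    amp = goertzel(signal, fs, 720.0)
    print(round(amp, 1))  # recovers the 720 Hz ripple amplitude, 3.0
    ```

    Comparing the extracted amplitude against a threshold plays the same role as the frequency-to-voltage comparison in the hardware scheme.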

  8. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  9. COA based robust output feedback UPFC controller design

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)

    2010-12-15

    In this paper, a novel method for the design of an output feedback controller for a unified power flow controller (UPFC) using a chaotic optimization algorithm (COA) is developed. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms for escaping from local optima, are a promising tool for engineering applications. The selection of the output feedback gains for the UPFC controllers is converted to an optimization problem with a time-domain objective function, which is solved by a COA based on the Lozi map. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed approach uses Lozi map chaotic sequences, which increases the convergence rate and the resulting precision. To ensure the robustness of the proposed stabilizers, the design process takes into account a wide range of operating conditions and system configurations. The effectiveness of the proposed controller for damping low frequency oscillations is tested and demonstrated through non-linear time-domain simulation and several performance index studies. The analysis reveals that the designed COA based output feedback UPFC damping controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of power systems.
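
    A minimal sketch of the Lozi map iteration that generates such chaotic search sequences (the parameter values a = 1.7, b = 0.5 and the starting point are common choices, not necessarily the paper's):

    ```python
    # Lozi map: x' = 1 - a*|x| + y,  y' = b*x  (piecewise-linear chaotic map).
    def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
        x, y = x0, y0
        seq = []
        for _ in range(n):
            x, y = 1.0 - a * abs(x) + y, b * x
            seq.append(x)
        return seq

    vals = lozi_sequence(1000)
    # The orbit stays bounded but never settles; rescale it to [0, 1] to use
    # as a pseudo-random sequence driving the optimization search.
    lo, hi = min(vals), max(vals)
    normalized = [(v - lo) / (hi - lo) for v in vals]
    print(min(normalized), max(normalized))  # 0.0 and 1.0 by construction
    ```

    Ergodicity of the chaotic orbit is what lets such a sequence substitute for a random number generator while still covering the search space.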

  10. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    Science.gov (United States)

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
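
    A small sketch of the kernel-target alignment criterion on toy two-class data; the data, labels and candidate kernel widths below are illustrative, and the paper optimizes a transformed version of this criterion rather than scanning it:

    ```python
    import numpy as np

    # Gaussian (RBF) kernel matrix for a data matrix X and width sigma.
    def gaussian_kernel(X, sigma):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    # Kernel-target alignment: cosine similarity between the kernel matrix
    # and the ideal target kernel y y^T built from +/-1 labels.
    def alignment(K, y):
        Y = np.outer(y, y)
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(-2, 0.5, (20, 2)),   # class -1 cluster
                   rng.normal(2, 0.5, (20, 2))])   # class +1 cluster
    y = np.array([-1.0] * 20 + [1.0] * 20)

    # Alignment as a function of kernel width: a moderate sigma separates the
    # clusters best; tiny sigma gives a near-identity K, huge sigma an all-ones K.
    for sigma in (0.1, 1.0, 10.0):
        print(sigma, round(alignment(gaussian_kernel(X, sigma), y), 3))
    ```

    The paper's contribution is to globally optimize this kind of criterion via a d.c. reformulation instead of a local search over sigma.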

  11. Elliptical multiple-output quantile regression and convex optimization

    Czech Academy of Sciences Publication Activity Database

    Hallin, M.; Šiman, Miroslav

    2016-01-01

    Roč. 109, č. 1 (2016), s. 232-237 ISSN 0167-7152 R&D Projects: GA ČR GA14-07234S Institutional support: RVO:67985556 Keywords : quantile regression * elliptical quantile * multivariate quantile * multiple-output regression Subject RIV: BA - General Mathematics Impact factor: 0.540, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/siman-0458243.pdf

  12. Fuzzy logic control and optimization system

    Science.gov (United States)

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  13. A Global Network Alignment Method Using Discrete Particle Swarm Optimization.

    Science.gov (United States)

    Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia

    2016-10-19

    Molecular interactions data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse the different data at a network level. Global network alignment is an important network comparison approach to identify conserved subnetworks and gain insight into evolutionary relationships across species. Network alignment, which is analogous to subgraph isomorphism, is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservation. The particle statuses and status updating rules are redefined in a discrete form using permutations. A seed-and-extend strategy is employed to guide the search for a superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapped nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservation. PSONA is compared with several state-of-the-art methods on ten network pairs combining five species. The experimental results demonstrate that the proposed aligner can map the proteins with high functional coherence and can be used as a booster to effectively refine the well-studied aligners.

  14. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
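
    A minimal one-level 2-D Haar subband decomposition, the kind of wavelet transform stage that precedes vector quantization in such a pipeline (the Haar filter is the simplest possible choice, not necessarily the filter bank used in the paper):

    ```python
    import numpy as np

    # One-level 2-D Haar decomposition of an even-sized array into the four
    # subbands: LL (coarse average), LH, HL, HH (horizontal, vertical and
    # diagonal details).
    def haar2d(img):
        a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
        d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair details
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    rng = np.random.default_rng(5)
    img = rng.normal(0, 1, (8, 8))
    ll, lh, hl, hh = haar2d(img)
    # For smooth data most energy concentrates in LL, which is what makes a
    # subband decomposition useful before vector quantization.
    print(ll.shape, lh.shape, hl.shape, hh.shape)
    ```

    The transform is exactly invertible, so all compression loss comes from the quantization stage that follows it.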

  15. Dense Output for Strong Stability Preserving Runge–Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2016-12-10

    We investigate dense output formulae (also known as continuous extensions) for strong stability preserving (SSP) Runge–Kutta methods. We require that the dense output formula also possess the SSP property, ideally under the same step-size restriction as the method itself. A general recipe for first-order SSP dense output formulae for SSP methods is given, and second-order dense output formulae for several optimal SSP methods are developed. It is shown that SSP dense output formulae of order three and higher do not exist, and that in any method possessing a second-order SSP dense output, the coefficient matrix A has a zero row.
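
    The first-order SSP dense output is simply the convex combination of successive solution values, u(t_n + θh) ≈ (1-θ)u_n + θu_{n+1}, which inherits the SSP property because convex combinations preserve it. A small numerical check with the classical three-stage, third-order SSP method (SSPRK33), on an assumed test problem u' = -u:

    ```python
    import math

    # One step of SSPRK33 in Shu-Osher form: a convex combination of
    # forward Euler substeps, which is what gives the method its SSP property.
    def ssprk33_step(f, u, h):
        u1 = u + h * f(u)
        u2 = 0.75 * u + 0.25 * (u1 + h * f(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + h * f(u2))

    f = lambda u: -u          # test problem u' = -u, exact solution e^{-t}
    h, u = 0.1, 1.0
    u_next = ssprk33_step(f, u, h)

    # First-order SSP dense output: interpolate inside the step.
    theta = 0.5
    dense = (1 - theta) * u + theta * u_next
    exact = math.exp(-h * theta)
    print(abs(dense - exact) < 2e-3)  # first-order interpolant, small h
    ```

    The paper shows that second-order SSP dense output exists only for methods whose coefficient matrix A has a zero row, and that order three and higher is impossible.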

  16. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    Science.gov (United States)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of traditional DEA models and improves the accuracy of the efficiency evaluation. By analyzing the projection of the oil blocks, we find that each block suffers from the negative external effects of input redundancy, expected-output deficiency, and undesirable output, and that there are large differences in production efficiency across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output and increase the expected output.

  17. Decoding suprathreshold stochastic resonance with optimal weights

    International Nuclear Information System (INIS)

    Xu, Liyan; Vladusich, Tony; Duan, Fabing; Gunn, Lachlan J.; Abbott, Derek; McDonnell, Mark D.

    2015-01-01

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible
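
    A small simulation of the weighted-decoding idea: fit linear decoding weights for an array of identical noisy comparators by least-squares regression, and compare with the unweighted scaled sum (all signal and noise parameters below are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_thresh, n_samples, noise_sigma = 15, 5000, 0.5
    x = rng.uniform(-1, 1, n_samples)            # analog input signal
    thresholds = np.zeros(n_thresh)              # identical thresholds (SSR)

    # Each comparator sees the signal plus independent noise; output is 0/1.
    noise = rng.normal(0, noise_sigma, (n_samples, n_thresh))
    responses = (x[:, None] + noise > thresholds).astype(float)

    # Weighted decoding: solve min_w ||R w - x||^2 with an intercept column,
    # i.e. least-squares regression of the input onto the comparator outputs.
    R = np.hstack([responses, np.ones((n_samples, 1))])
    w, *_ = np.linalg.lstsq(R, x, rcond=None)
    mse_weighted = np.mean((R @ w - x) ** 2)

    # Unweighted decoding: linearly rescale the mean response to [-1, 1].
    mse_unweighted = np.mean((responses.mean(1) * 2 - 1 - x) ** 2)
    print(mse_weighted <= mse_unweighted)
    ```

    Because the unweighted decode is itself one linear map of the responses, the least-squares fit can never do worse, matching the paper's observation that weighting helps most for particular noise and threshold settings.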

  18. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    Directory of Open Access Journals (Sweden)

    Younes Saadi

    Full Text Available The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled as a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled as a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pup behavior to find the best lair and provide a new algorithm to be
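
    The two search states can be sketched as step-length generators: Gaussian (Brownian) steps for the intensive local search versus heavy-tailed Levy steps for the extensive search. The Levy steps below use Mantegna's algorithm, a common construction; the paper's exact formulation may differ:

    ```python
    import math
    import random

    # Brownian step: a plain Gaussian increment (intensive local search).
    def brownian_step(rng, sigma=1.0):
        return rng.gauss(0.0, sigma)

    # Levy step via Mantegna's algorithm: symmetric heavy-tailed increments
    # of stability index beta (extensive search with occasional long jumps).
    def levy_step(rng, beta=1.5):
        num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
        sigma_u = (num / den) ** (1 / beta)
        u = rng.gauss(0.0, sigma_u)
        v = rng.gauss(0.0, 1.0)
        return u / abs(v) ** (1 / beta)

    rng = random.Random(7)
    brown = [abs(brownian_step(rng)) for _ in range(10000)]
    levy = [abs(levy_step(rng)) for _ in range(10000)]
    # Heavy tails: the largest Levy step dwarfs the largest Brownian step.
    print(max(levy) > max(brown))
    ```

    In the algorithm, predator noise triggers the switch between drawing Brownian steps (normal state) and Levy steps (urgent state).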

  19. Simulated Annealing-Based Krill Herd Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Gai-Ge Wang

    2013-01-01

    Full Text Available Recently, Gandomi and Alavi proposed a novel swarm intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, this paper proposes a new improved meta-heuristic, simulated annealing-based krill herd (SKH), for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with the acceptance of a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, an elitism scheme is used to save the best individuals in the population during the krill update. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
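
    The KS operator's acceptance rule is essentially the Metropolis criterion from simulated annealing: always accept improvements, and accept a worse candidate only with a small, temperature-dependent probability. A minimal sketch (the temperature and toy numbers below are assumptions, not taken from the paper):

```python
import math
import random

def ks_accept(f_old, f_new, temperature):
    """SA-style selecting rule: accept improvements greedily; accept a
    worse candidate with probability exp(-(f_new - f_old) / T)."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)

# Toy use: decide whether an updated krill position replaces the old one.
random.seed(0)
accepted = sum(ks_accept(1.0, 1.5, temperature=0.1) for _ in range(10000))
# With a fitness gap of 0.5 and T = 0.1, exp(-5) ~ 0.0067, so only a few
# hundredths of the worse candidates are accepted.
```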

  20. MO-F-CAMPUS-T-04: Implementation of a Standardized Monthly Quality Check for Linac Output Management in a Large Multi-Site Clinic

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H; Yi, B; Prado, K [Univ. of Maryland School Of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: This work investigates the feasibility of a standardized monthly quality check (QC) of LINAC output determination in a multi-site, multi-LINAC institution. The QC was developed to determine individual LINAC output using the same optimized measurement setup and a constant calibration factor for all machines across the institution. Methods: QA data acquired over 4 years on 7 Varian machines across four sites were analyzed. The monthly output constancy checks were performed using a fixed source-to-chamber distance (SCD), with no couch position adjustment throughout the measurement cycle, for all the photon energies (6 and 18 MV) and electron energies (6, 9, 12, 16 and 20 MeV). The constant monthly output calibration factor (Nconst) was determined by averaging the machines' output data, acquired with the same monthly ion chamber. If a different monthly ion chamber was used, Nconst was re-normalized to account for its different Co-60 absorbed-dose-to-water calibration factor (N_D,w). Here, the possible changes of Nconst over 4 years have been tracked, and the precision of output results based on this standardized monthly QA program relative to the TG-51 calibration for each machine was calculated. Any outlier of the group was investigated. Results: The possible changes of Nconst varied between 0-0.9% over 4 years. The normalization of absorbed-dose-to-water calibration factors corrects for up to 3.3% variations between different monthly QA chambers. The LINAC output precision based on this standardized monthly QC relative to the TG-51 output calibration is within 1% for the 6 MV photon energy and 2% for 18 MV and all the electron energies. A human error in one TG-51 report was found through close scrutiny of outlier data. Conclusion: This standardized QC allows for a reasonably simple, precise and robust monthly LINAC output constancy check, with the increased sensitivity needed to detect possible human errors and machine problems.

  1. Moving-window dynamic optimization: design of stimulation profiles for walking.

    Science.gov (United States)

    Dosen, Strahinja; Popović, Dejan B

    2009-05-01

    The overall goal of this research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors) to output (intensity of muscle stimulation) representation of walking, which serves in synthesizing a rule base for control of electrical stimulation for the restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories, with a penalty function on the total muscle effort. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual, and a model with parameters characterizing a potential hemiplegic user. The outputs of the simulation are piecewise-constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
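
    The window-by-window selection of a piecewise-constant input can be illustrated on a toy first-order plant. This is a sketch only: the plant model, the grid of candidate stimulation levels, and the penalty weight are assumptions (the paper uses a musculoskeletal leg model in MATLAB).

```python
import numpy as np

def mwdo(target, window=5, dt=0.1, weight=0.01):
    """Moving-window sketch: for each window, pick one constant input
    (stimulation intensity) minimizing squared tracking error plus an
    effort penalty, for a toy first-order plant x' = -x + u."""
    levels = np.linspace(0.0, 5.0, 51)   # candidate piecewise-constant inputs
    x = 0.0
    inputs, traj = [], []
    for start in range(0, len(target), window):
        ref = target[start:start + window]
        costs = []
        for u in levels:
            xi, cost = x, 0.0
            for r in ref:
                xi = xi + dt * (-xi + u)              # forward-Euler plant step
                cost += (xi - r) ** 2                 # tracking error
            costs.append(cost + weight * u ** 2 * len(ref))  # effort penalty
        best_u = levels[int(np.argmin(costs))]
        for _ in ref:                                 # commit the window
            x = x + dt * (-x + best_u)
            traj.append(x)
        inputs.append(best_u)
    return np.array(inputs), np.array(traj)

target = np.full(50, 2.0)                             # hold the joint at 2.0
u_seq, x_traj = mwdo(target)
```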

  2. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, and the optimization is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
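
    A generic simulated-annealing search over two bounded parameters, in the spirit of the paper's MMSE-TRA parameter optimization. The stand-in objective, cooling schedule, and step size are assumptions; the real objective would map the two recursion parameters to a noise-reduction score via the regression model.

```python
import math
import random

def objective(a, b):
    # Stand-in for the regression-model objective over the two
    # recursion parameters; deliberately given several local optima.
    return (a - 0.98) ** 2 + (b - 0.7) ** 2 + 0.1 * math.sin(20 * a) ** 2

def anneal(f, steps=5000, t0=1.0, cooling=0.999, seed=42):
    random.seed(seed)
    x = [random.random(), random.random()]
    fx = f(*x)
    best, best_f = list(x), fx
    t = t0
    for _ in range(steps):
        cand = [min(1.0, max(0.0, xi + random.gauss(0, 0.05))) for xi in x]
        fc = f(*cand)
        # Metropolis rule: accepting occasional worse moves lets the
        # search escape local optima while the temperature is high.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = list(x), fx
        t *= cooling
    return best, best_f

(p1, p2), score = anneal(objective)
```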

  3. Head and bit patterned media optimization at areal densities of 2.5 Tbit/in² and beyond

    International Nuclear Information System (INIS)

    Bashir, M.A.; Schrefl, T.; Dean, J.; Goncharov, A.; Hrkac, G.; Allwood, D.A.; Suess, D.

    2012-01-01

    Global optimization of a writing head is performed using micromagnetics and surrogate optimization. The shape of the pole tip is optimized for bit-patterned, exchange spring recording media. The media characteristics define the effective write field and the threshold values for the head field that acts on islands in the adjacent track. Once the required head field characteristics are defined, the pole tip geometry is optimized in order to achieve a high gradient of the effective write field while keeping the write field at the adjacent track below a given value. We computed the write error rate and the adjacent track erasure for different maximum anisotropy in the multilayer, graded media. The results show a linear trade-off between the error rate and the number of passes before erasure. For optimal head-media combinations we found a bit error rate of 10⁻⁶ with 10⁸ pass lines before erasure at 2.5 Tbit/in². - Research Highlights: → Global optimization of a writing head is performed using micromagnetics and surrogate optimization. → A method is provided to optimize the pole tip shape while constraining the head field that acts in the adjacent tracks. → Patterned media structures providing an areal density of 2.5 Tbit/in² are discussed as a case study. → Media reliability is studied while taking into account the magnetostatic field interactions from neighbouring islands and adjacent track erasure under the influence of the head field.

  4. The measurement of temperature effect of light output of scintillators

    International Nuclear Information System (INIS)

    Ji Changsong; Zhou Zaiping; Zhang Longfang

    1999-01-01

    The author describes an experimental apparatus used for measuring the temperature effect on the light output of scintillators; gives measurement results of the temperature effect on light output for NaI(Tl), CsI(Tl), plastic scintillator, ZnS(Ag), anthracene crystal, and glass scintillator; and analyzes the error factors affecting the measurement results. The total uncertainty of the temperature effect measurement for NaI(Tl) and plastic scintillator is 11%.

  5. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.

  6. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors, with parameters that can be estimated from the OCO-2 data and possibly from in situ sources such as flasks, towers, and the Total Carbon Column Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data, using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach differs from others in that it simultaneously makes inference on (and quantifies uncertainty in) both the error components' parameters and the CO2 fluxes.
We compare it to more classical

  7. Improved SpikeProp for Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Falah Y. H. Ahmed

    2013-01-01

    Full Text Available A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. This algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike-time coding. SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements of the SpikeProp learning algorithm for supervised training of spiking networks that can deal with complex patterns. The proposed methods include SpikeProp with particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network for multilayer learning enhancement and weight optimization. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, our proposed methods outperformed other conventional neural network architectures.
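
    A minimal global-best PSO of the kind used here to tune network weights. This is the generic textbook form, not the paper's exact variant; the inertia and acceleration coefficients are conventional defaults, and the sphere function stands in for the training loss.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))      # particle positions
    v = np.zeros((n, dim))                # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()    # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Velocity: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best, val = pso(lambda p: float(np.sum(p ** 2)))  # sphere test function
```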

  8. Economic optimization of a global strategy to address the pandemic threat

    Science.gov (United States)

    Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C.; Daszak, Peter

    2014-01-01

    Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, like climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling of current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral “One Health” pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.0 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual. PMID:25512538

  9. CO2 emissions, energy usage, and output in Central America

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Payne, James E.

    2009-01-01

    This study extends the recent work of Ang (2007) [Ang, J.B., 2007. CO2 emissions, energy consumption, and output in France. Energy Policy 35, 4772-4778] in examining the causal relationship between carbon dioxide emissions, energy consumption, and output within a panel vector error correction model for six Central American countries over the period 1971-2004. In long-run equilibrium, energy consumption has a positive and statistically significant impact on emissions while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and emissions.

  10. Optimization of a predictive controller of a pressurized water reactor Xenon oscillation using the particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Medeiros, Jose Antonio Carlos Canedo; Machado, Marcelo Dornellas; Lima, Alan Miranda M. de; Schirru, Roberto

    2007-01-01

    Predictive control systems use a model of the controlled system (plant) to predict its future behavior, allowing the establishment of anticipative control based on a future condition of the plant, together with an optimizer that, considering a future horizon of the plant output and a recent horizon of the control action, determines the controller's outputs so as to optimize a performance index of the controlled plant. The predictive control system does not require an analytical model of the plant; the plant predictor model can be learned from historical operating data. The optimizer of the predictive controller establishes the control strategy: present and future control actions are computed so as to minimize a performance index (objective function). This strategy induces an optimal control mechanism whose effect is to reduce the stabilization time, the 'overshoot' and 'undershoot', and to minimize the control actuation, so that a compromise among those objectives is attained. The optimizer of a predictive controller is usually implemented using gradient-based algorithms. In this work we use the Particle Swarm Optimization (PSO) algorithm in the optimizer component of a predictive controller applied to the control of the xenon oscillation of a pressurized water reactor (PWR). PSO is a stochastic optimization technique applied in several disciplines; it is simple, capable of providing a global optimum for highly complex problems that are difficult to optimize, and in many cases provides better results than those obtained by conventional and/or other artificial optimization techniques. (author)

  11. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes results in singleton clusters, and its initial positions are sometimes bad; after a bad initialization, a poor local optimum can easily be obtained by the k-means algorithm. In this paper, we modify the global k-means algorithm to eliminate singleton clusters first, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared with the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show our proposed algorithm outperforms the other algorithms mentioned in the paper.
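
    The incremental center-addition idea of the global k-means family can be sketched as follows. This is plain global k-means, without the singleton-elimination and MinMax modifications the paper adds; the data set and parameters are illustrative.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Standard Lloyd iterations from given initial centers."""
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(len(centers)):
            pts = X[labels == j]
            if len(pts):                        # guard against empty clusters
                centers[j] = pts.mean(0)
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return centers, d.min(1).sum()              # centers and clustering error

def global_kmeans(X, k):
    """Incremental sketch: with k-1 centers fixed, try each data point
    as the k-th initial center and keep the best k-means outcome."""
    centers = X.mean(0, keepdims=True)          # optimal 1-means solution
    for _ in range(2, k + 1):
        best = None
        for x0 in X:
            trial = np.vstack([centers, x0])
            c, err = kmeans(X, trial)
            if best is None or err < best[1]:
                best = (c, err)
        centers = best[0]
    return centers, best[1]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(4, 0.3, (40, 2))])
centers, sse = global_kmeans(X, 2)
```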

  12. Study and optimal correction of a systematic skew quadrupole field in the Tevatron

    International Nuclear Information System (INIS)

    Snopok, Pavel; Johnstone, Carol; Berz, Martin; Ovsyannikov, Dmitry A.; Ovsyannikov, Alexander D.

    2006-01-01

    Increasing demands for luminosity in existing and future colliders have made lattice design and error tolerance and correction critical to achieving performance goals. The current state of the Tevatron collider is an example, with a strong skew quadrupole error present in the operational lattice. This work studies the high-order performance of the Tevatron and the strong nonlinear behavior introduced when a significant skew quadrupole error is combined with conventional sextupole correction, a behavior still clearly evident after optimal tuning of available skew quadrupole circuits. An optimization study is performed using different skew quadrupole families, and, importantly, local and global correction of the linear skew terms in maps generated by the code COSY INFINITY [M. Berz, COSY INFINITY version 8.1 user's guide and reference manual, Department of Physics and Astronomy MSUHEP-20704, Michigan State University (2002). URL http://cosy.pa.msu.edu/cosymanu/index.html]. Two correction schemes with one family locally correcting each arc and eight independent correctors in the straight sections for global correction are proposed and shown to dramatically improve linearity and performance of the baseline Tevatron lattice

  13. Simulations of beam trajectory for position target optimization of extraction system output beams cyclotron proton Decy-13

    International Nuclear Information System (INIS)

    Idrus Abdul Kudus; Taufik

    2015-01-01

    Beam trajectory simulations for the DECY-13 cyclotron, aimed at optimizing the placement of the target system, have been carried out using the Lorentz force equation and Scilab 5.4.1. The magnetic and electric fields, calculated with Opera-3D/TOSCA, are used as simulation inputs. The radio frequency used is 77.66 MHz with an amplitude voltage of 40 kV, giving an energy of 13 MeV. The results show that the target system should be placed in the vacuum chamber at x = -389 mm and y = 445 mm, with an output beam width of 10 mm. The stripper position for an output beam centered on the target is at x = -76 mm and y = 416 mm from the center of the dee, with a proton energy of 13 MeV at the carbon-foil extraction point. Shifting the stripper over the range from x = -70, y = 424 mm to x = -118, y = 374 mm defines the region within which the beam can still be deflected. (author)
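
    Tracking a charged particle through given fields reduces to integrating the Lorentz force. A rough stand-in for the Scilab simulation, using a semi-implicit Euler step and a uniform field in place of the Opera-3D/TOSCA field maps; the field strength and time step are assumptions.

```python
import numpy as np

# Constants for a proton (SI units).
Q = 1.602e-19      # charge [C]
M = 1.673e-27      # mass [kg]

def lorentz_accel(v, E, B):
    """Acceleration from the Lorentz force F = q(E + v x B)."""
    return (Q / M) * (E + np.cross(v, B))

def track(r0, v0, E, B, dt=1e-11, steps=10000):
    """Semi-implicit Euler tracking: update velocity first, then
    position with the new velocity (keeps the energy bounded)."""
    r, v = np.array(r0, float), np.array(v0, float)
    for _ in range(steps):
        v = v + lorentz_accel(v, E, B) * dt
        r = r + v * dt
    return r, v

# Proton circling in a uniform 1.2 T field: magnetic forces do no work,
# so the speed should be conserved up to integration error.
B = np.array([0.0, 0.0, 1.2])
E = np.zeros(3)
r, v = track([0, 0, 0], [1e6, 0, 0], E, B)
```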

  14. A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions

    KAUST Repository

    Fowkes, Jaroslav M.

    2012-06-21

    We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations. © 2012 Springer Science+Business Media, LLC.
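
    The branch-and-bound skeleton can be illustrated in one dimension with a much simpler Lipschitz lower bound in place of the paper's cubic, Hessian-based underestimators. The test function and its Lipschitz constant are assumptions; the pruning logic is the part being illustrated.

```python
import heapq
import math

def branch_and_bound(f, a, b, lipschitz, tol=1e-4, max_nodes=100000):
    """1-D branch and bound with the valid lower bound
    lb([l, h]) = min(f(l), f(h)) - L * (h - l) / 2."""
    best_x, best_f = (a, f(a)) if f(a) <= f(b) else (b, f(b))
    heap = [(min(f(a), f(b)) - lipschitz * (b - a) / 2, a, b)]
    nodes = 0
    while heap and nodes < max_nodes:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:            # cannot contain a better optimum
            continue
        mid = (lo + hi) / 2
        fm = f(mid)
        if fm < best_f:
            best_x, best_f = mid, fm
        for l, h in ((lo, mid), (mid, hi)):
            bound = min(f(l), f(h)) - lipschitz * (h - l) / 2
            if bound < best_f - tol:     # only branch where improvement is possible
                heapq.heappush(heap, (bound, l, h))
        nodes += 1
    return best_x, best_f

# Multimodal test: global minimum of sin(3x) + 0.5x on [-3, 3].
# |f'(x)| = |3 cos(3x) + 0.5| <= 3.5, so L = 3.5 is a valid constant.
x, fx = branch_and_bound(lambda x: math.sin(3 * x) + 0.5 * x,
                         -3.0, 3.0, lipschitz=3.5)
```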

  15. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    Directory of Open Access Journals (Sweden)

    Ji Li

    2016-10-01

    Full Text Available A piezo-resistive pressure sensor is made of silicon, whose nature is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show the proposed scheme outperforms other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  16. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose nature is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show the proposed scheme outperforms other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
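
    The hybrid-kernel LSSVM itself is straightforward to sketch: the RBF/polynomial mixture enters a single linear system rather than a quadratic program. The kernel weights, regularization, and toy data below are assumptions (in the paper these hyper-parameters are tuned by the chaotic ions motion algorithm).

```python
import numpy as np

def hybrid_kernel(A, B, lam=0.7, gamma_rbf=0.5, degree=2, c0=1.0):
    """Weighted mix of a local RBF kernel and a global polynomial kernel."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma_rbf * sq)
    poly = (A @ B.T + c0) ** degree
    return lam * rbf + (1 - lam) * poly

def lssvm_fit(X, y, reg=100.0, **kw):
    """LSSVM regression: solve the bordered linear system
    [[0, 1^T], [1, K + I/reg]] [b; alpha] = [0; y]."""
    n = len(X)
    K = hybrid_kernel(X, X, **kw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xs: hybrid_kernel(Xs, X, **kw) @ alpha + b

# Toy nonlinear calibration curve standing in for sensor/temperature data.
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * X[:, 0] ** 2
predict = lssvm_fit(X, y)
Xt = np.linspace(-2.5, 2.5, 20)[:, None]
err = np.abs(predict(Xt) - (np.sin(Xt[:, 0]) + 0.1 * Xt[:, 0] ** 2)).max()
```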

  17. Using lexical variables to predict picture-naming errors in jargon aphasia

    Directory of Open Access Journals (Sweden)

    Catherine Godbold

    2015-04-01

    Full Text Available Introduction Individuals with jargon aphasia produce fluent output which often comprises high proportions of non-word errors (e.g., maf for dog). Research has been devoted to identifying the underlying mechanisms behind such output. Some accounts posit a reduced flow of spreading activation between levels in the lexical network (e.g., Robson et al., 2003). If activation level differences across the lexical network are a cause of non-word outputs, we would predict improved performance when target items reflect an increased flow of activation between levels (e.g., more frequently used words are often represented by higher resting levels of activation). This research investigates the effect of lexical properties of targets (e.g., frequency, imageability) on accuracy, error type (real word vs. non-word) and target-error overlap of non-word errors in a picture naming task by individuals with jargon aphasia. Method Participants were 17 individuals with Wernicke’s aphasia, who produced a high proportion of non-word errors (>20% of errors) on the Philadelphia Naming Test (PNT; Roach et al., 1996). The data were retrieved from the Moss Aphasic Psycholinguistic Database Project (MAPPD; Mirman et al., 2010). We used a series of mixed models to test whether lexical variables predicted accuracy, error type (real word vs. non-word) and target-error overlap for the PNT data. As lexical variables tend to be highly correlated, we performed a principal components analysis to reduce the variables to five components representing variables associated with phonology (length, phonotactic probability, neighbourhood density and neighbourhood frequency), semantics (imageability and concreteness), usage (frequency and age-of-acquisition), name agreement and visual complexity. Results and Discussion Table 1 shows the components that made a significant contribution to each model. Individuals with jargon aphasia produced more correct responses and fewer non-word errors relative to

  18. Turbulent Output-Based Anisotropic Adaptation

    Science.gov (United States)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high, O(10⁷), Reynolds number turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  19. A variable structure fuzzy neural network model of squamous dysplasia and esophageal squamous cell carcinoma based on a global chaotic optimization algorithm.

    Science.gov (United States)

    Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza

    2013-02-07

    Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer incidence. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC. Such methods can limit clinical screenings to people at higher risk. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction. The inputs to the classifier are the risk factors. Since the relation between risk factors in the tumor system has a complex nonlinear behavior, in comparison to most ordinary data, the cost function of its model can have more local optima, so the need for global optimization methods is more highlighted. The method proposed in this paper is a Chaotic Optimization Algorithm (COA) followed by the common Error Back Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator. This dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model, as the latest successful method of ESCC and dysplasia prediction. The results represent a more precise prediction with smaller mean and variance of error. Copyright © 2012 Elsevier Ltd. All rights reserved.
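
    The chaotic series generator at the heart of a COA is typically a simple chaotic map. A minimal first-stage chaotic scan using the logistic map (the carrier map, domain mapping, and test function are assumptions; in the paper this stage is followed by EBP refinement of the network parameters):

```python
import math

def logistic_chaos(x=0.7, n=1000):
    """Chaotic carrier via the logistic map x <- 4x(1-x); used in chaotic
    optimization to generate deterministic, non-repeating search points."""
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

def chaotic_search(f, lo, hi, n=5000, x0=0.7):
    """First-stage chaotic scan of [lo, hi]; a local method would then
    refine the best point found."""
    best_x, best_f = lo, f(lo)
    for c in logistic_chaos(x0, n):
        x = lo + (hi - lo) * c           # map the chaos variable into the domain
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x, fx = chaotic_search(lambda x: (x - 1.3) ** 2 + 0.3 * math.sin(8 * x) ** 2,
                       -4.0, 4.0)
```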

  20. Output, renewable energy consumption and trade in Africa

    International Nuclear Information System (INIS)

    Ben Aïssa, Mohamed Safouane; Ben Jebli, Mehdi; Ben Youssef, Slim

    2014-01-01

    We use panel cointegration techniques to examine the relationship between renewable energy consumption, trade and output in a sample of 11 African countries covering the period 1980–2008. The results from the panel error correction model reveal that there is evidence of a bidirectional causality between output and exports and between output and imports in both the short and long-run. However, in the short-run, there is no evidence of causality between output and renewable energy consumption or between trade (exports or imports) and renewable energy consumption. Also, in the long-run, there is no causality running from output or trade to renewable energy. In the long-run, our estimations show that renewable energy consumption and trade have a statistically significant and positive impact on output. Our energy policy recommendations are that national authorities should design appropriate fiscal incentives to encourage the use of renewable energies, create more regional economic integration for renewable energy technologies, and encourage trade openness because of its positive impact on technology transfer and on output. - Highlights: • We examine the relationship between renewable energy consumption, trade and output in African countries. • There is a bidirectional causality between output and trade in both the short and long-run. • In the short-run, there is no causality between renewable energy consumption and trade or output. • In the long-run, renewable energy consumption and trade have a statistically significant positive impact on output. • African authorities should encourage trade openness because of its positive impact on technology transfer and on output

  1. An Alternate Approach to Optimal L 2 -Error Analysis of Semidiscrete Galerkin Methods for Linear Parabolic Problems with Nonsmooth Initial Data

    KAUST Repository

    Goswami, Deepjyoti

    2011-09-01

    In this article, we propose and analyze an alternate proof of a priori error estimates for semidiscrete Galerkin approximations to a general second order linear parabolic initial and boundary value problem with rough initial data. Our analysis is based on energy arguments without using parabolic duality. Further, it follows the spirit of the proof technique used for deriving optimal error estimates for finite element approximations to parabolic problems with smooth initial data and hence, it unifies both theories, that is, one for smooth initial data and the other for nonsmooth data. Moreover, the proposed technique is also extended to a semidiscrete mixed method for linear parabolic problems. In both cases, optimal L2-error estimates are derived when the initial data is in L2. A superconvergence phenomenon is also observed, which is then used to prove L∞-estimates for linear parabolic problems defined on a two-dimensional spatial domain, again with rough initial data. Copyright © Taylor & Francis Group, LLC.

  2. Polynomial optimization : Error analysis and applications

    NARCIS (Netherlands)

    Sun, Zhao

    2015-01-01

    Polynomial optimization is the problem of minimizing a polynomial function subject to polynomial inequality constraints. In this thesis we investigate several hierarchies of relaxations for polynomial optimization problems. Our main interest lies in understanding their performance, in particular how

  3. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

    Highlights: • The output of the majority of renewable energies depends on the variability of the weather conditions. • Short-term forecasting is going to be essential for effectively integrating solar energy sources. • A new method based on an artificial neural network to predict the power output of a PV generator one hour ahead is proposed. • The new method uses a dynamic artificial neural network to predict global solar irradiance and the air temperature. • The methodology developed can be used to estimate the power output of a PV generator with a satisfactory margin of error. - Abstract: A problem with some renewable energies is that the output of these kinds of systems is non-dispatchable, depending on the variability of weather conditions that cannot be predicted and controlled. From this point of view, short-term forecasting is going to be essential for effectively integrating solar energy sources, being a very useful tool for the reliability and stability of the grid and ensuring that an adequate supply is present. In this paper a new methodology for forecasting the output of a PV generator one hour ahead, based on a dynamic artificial neural network, is presented. The results of this study show that the proposed methodology can be used to forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy

  4. Finite-time output feedback stabilization of high-order uncertain nonlinear systems

    Science.gov (United States)

    Jiang, Meng-Meng; Xie, Xue-Jun; Zhang, Kemei

    2018-06-01

    This paper studies the problem of finite-time output feedback stabilization for a class of high-order nonlinear systems with an unknown output function and unknown control coefficients. Under the weaker assumption that the output function is only continuous, by using the homogeneous domination method together with the adding-a-power-integrator method and by introducing a new analysis method, the maximal open sector Ω of the output function is given. As long as the output function belongs to any closed sector contained in Ω, an output feedback controller can be developed to guarantee global finite-time stability of the closed-loop system.

  5. Report of the Error and Emittance Task Force on the superconducting super collider: Part 1, Resistive machines

    International Nuclear Information System (INIS)

    1993-10-01

    A review of the design and specifications of the resistive accelerators in the SSC complex was conducted during the past year. This review was initiated in response to a request from the SSC Project Manager. The Error and Emittance Task Force was created October 30, 1992, and charged with reviewing issues associated with the specification of errors and tolerances throughout the injector chain and in the Collider, and to optimize the global error budget. Effects which directly impact the emittance budget were of prime importance. The Task Force responded to three charges: Examination of the resistive accelerators and their injection and extraction systems; examination of the connecting beamlines and the overall approach taken in their design; and global filling, timing, and synchronization issues. The High Energy Booster and the Collider were deemed to be sufficiently different from the resistive accelerators that it was decided to treat them as a separate group. They will be the subject of a second part to this report

  6. A Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    Abstract: In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
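The conventional baseline the authors compare against can be sketched as one generation of differential evolution. Note the mutation line below implements the textbook DE/rand/1/bin strategy, not the authors' unified equation, which the abstract does not state.

```python
import random

def de_step(pop, fitness, F=0.8, CR=0.9):
    """One generation of classic DE/rand/1/bin (the uDE of the paper would
    replace the mutation line with its single unified equation)."""
    dim, new_pop = len(pop[0]), []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[k] + F * (b[k] - c[k]) for k in range(dim)]      # mutation
        jrand = random.randrange(dim)
        trial = [mutant[k] if (random.random() < CR or k == jrand) else x[k]
                 for k in range(dim)]                                # binomial crossover
        new_pop.append(trial if fitness(trial) <= fitness(x) else x) # greedy selection
    return new_pop

random.seed(1)
sphere = lambda v: sum(t * t for t in v)  # simple unimodal test function
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]
init_best = min(map(sphere, pop))
for _ in range(50):
    pop = de_step(pop, sphere)
best_val = min(map(sphere, pop))  # non-increasing under greedy selection
```

The greedy selection step guarantees the best objective value never worsens from one generation to the next, which is the property benchmark comparisons of DE variants rely on.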

  7. Estimation of hourly global solar irradiation on tilted planes from horizontal one using artificial neural networks

    International Nuclear Information System (INIS)

    Notton, Gilles; Paoli, Christophe; Vasileva, Siyana; Nivet, Marie Laure; Canaletti, Jean-Louis; Cristofari, Christian

    2012-01-01

    Calculating global solar irradiation on tilted planes from global horizontal irradiation only is a difficult task, especially when the time step is small and the data are not averaged. We used an Artificial Neural Network (ANN) to realize this conversion. The ANN is optimized and tested on the basis of five years of solar data; the accuracy of the optimal configuration is around 6% for the RRMSE (relative root mean square error) and around 3.5% for the RMAE (relative mean absolute error), i.e. a better performance than the empirical correlations available in the literature. -- Highlights: ► ANN (Artificial Neural Network) methodology applied to hourly global solar irradiation in order to estimate tilted irradiations. ► Model validation with more than 23,000 data points. ► Comparison with “conventional” models. ► The precision of the results is better than with empirical correlations. ► Around 6% for the RRMSE (relative root mean square error) and around 3.5% for the RMAE (relative mean absolute error).
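The two error metrics quoted above can be computed as below. Normalizing by the mean observed value is a common convention for relative metrics; the abstract does not make its normalization explicit, so treat that choice as an assumption.

```python
import math

def rrmse(obs, pred):
    """Relative root mean square error, in percent of the mean observed value
    (one common normalization; the paper's exact convention is not stated)."""
    n = len(obs)
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    return 100.0 * rmse / (sum(obs) / n)

def rmae(obs, pred):
    """Relative mean absolute error, in percent of the mean observed value."""
    n = len(obs)
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / n
    return 100.0 * mae / (sum(obs) / n)

obs = [100.0, 200.0, 300.0]   # e.g. hourly irradiation in Wh/m^2
pred = [110.0, 190.0, 300.0]
# errors 10, -10, 0 → RMSE ≈ 8.16, mean obs = 200 → RRMSE ≈ 4.08 %
```

A lower RRMSE penalizes large outliers more heavily than RMAE, which is why the two figures (6% and 3.5% here) are reported together.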

  8. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
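The MEU decision rule at the heart of this model can be sketched as an argmax over expected utilities. The utility matrix below is illustrative only; an identity utility matrix is one instance of the equal error utility assumption and reduces MEU to the maximum-correctness rule.

```python
def meu_decision(posteriors, utility):
    """Pick the class with maximal expected utility. posteriors[h] is the
    posterior probability of hypothesis h; utility[d][h] is the utility of
    deciding d when h is true (values here are illustrative assumptions)."""
    classes = range(len(posteriors))
    return max(classes,
               key=lambda d: sum(utility[d][h] * posteriors[h] for h in classes))

# Identity utilities: every wrong decision under a hypothesis costs the same,
# so MEU coincides with the maximum-correctness (minimum-error) criterion.
U = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
```

With unequal off-diagonal utilities (e.g. when one type of misdiagnosis is costlier), the same rule no longer reduces to minimum-error, which is exactly the limitation of the equal error utility assumption the abstract discusses.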

  9. Constrained Fuzzy Predictive Control Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Oussama Ait Sahed

    2015-01-01

    Full Text Available A fuzzy predictive controller using a particle swarm optimization (PSO) approach is proposed. The aim is to develop an efficient algorithm that is able to handle the relatively complex optimization problem with minimal computational time. This can be achieved using a reduced population size and a small number of iterations. In this algorithm, instead of using the uniform distribution as in the conventional PSO algorithm, the initial particle positions are distributed according to the normal distribution law, within the area around the best position. The radius limiting this area is adaptively changed according to the tracking error values. Moreover, the choice of the initial best position is based on prior knowledge about the search space landscape and the fact that in most practical applications the dynamic optimization problem changes are gradual. The efficiency of the proposed control algorithm is evaluated by considering the control of the model of a 4 × 4 Multi-Input Multi-Output industrial boiler. This model is characterized by being nonlinear with high interactions between its inputs and outputs, having a nonminimum phase behaviour, and containing instabilities and time delays. The obtained results are compared to those of the control algorithms based on the conventional PSO and the linear approach.
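The normal-law initialization described above can be sketched as follows. Treating the limiting radius as the standard deviation of the normal law, and the doubling/halving adaptation rule, are our assumptions; the abstract does not give the exact law.

```python
import random

def init_particles(best_pos, radius, n_particles):
    """Initialize particle positions from a normal law centred on the previous
    best position (using `radius` as the standard deviation is an assumption)."""
    dim = len(best_pos)
    return [[random.gauss(best_pos[d], radius) for d in range(dim)]
            for _ in range(n_particles)]

def adapt_radius(radius, tracking_error, threshold=0.05, grow=2.0, shrink=0.5):
    """Widen the search area when the tracking error is large, narrow it
    otherwise -- a simple rule standing in for the paper's adaptation law."""
    return radius * (grow if abs(tracking_error) > threshold else shrink)

random.seed(2)
swarm = init_particles([1.0, -1.0], radius=0.1, n_particles=8)
```

Because the dynamic optimization problem changes only gradually between control steps, concentrating the initial swarm near the previous optimum lets a small population with few iterations still track the optimum.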

  10. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and we aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms are effectively designed based on a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and this transformed space, and also demonstrated the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.
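A greedy error-bounded PLR can be sketched with a feasible-slope cone. This is a simplified illustration in the spirit of GreedyPLR, not the paper's algorithm: forcing each segment to start at an input point can emit more segments than necessary.

```python
def greedy_plr(stream, eps):
    """Greedy L-infinity error-bounded PLR sketch. Keeps a cone [lo, hi] of
    feasible slopes through the segment's first point; a point that empties
    the cone starts a new segment. Returns (t0, v0, slope, t_end) tuples."""
    segments, i = [], 0
    while i < len(stream):
        t0, v0 = stream[i]
        lo, hi = float("-inf"), float("inf")
        j = i + 1
        while j < len(stream):
            t, v = stream[j]
            lo = max(lo, (v - eps - v0) / (t - t0))  # slope must stay above
            hi = min(hi, (v + eps - v0) / (t - t0))  # slope must stay below
            if lo > hi:          # cone empty: point t cannot join this segment
                break
            j += 1
        slope = (lo + hi) / 2 if hi < float("inf") else 0.0
        segments.append((t0, v0, slope, stream[j - 1][0]))
        i = j
    return segments

data = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.0), (4, 10.0)]
segs = greedy_plr(data, eps=0.2)  # the jump at t=4 forces a second segment
```

Each cone update is O(1), so the whole pass is linear in the stream length, matching the complexity class the abstract claims for GreedyPLR.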

  11. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing

    2014-04-04

    Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and we aim to design algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms are effectively designed based on a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and this transformed space, and also demonstrated the superiority of OptimalPLR in processing efficiency in practice. The extensive results of empirical evaluation support and demonstrate the effectiveness and efficiency of our proposed algorithms.

  12. Dispositional Optimism and Terminal Decline in Global Quality of Life

    Science.gov (United States)

    Zaslavsky, Oleg; Palgi, Yuval; Rillamas-Sun, Eileen; LaCroix, Andrea Z.; Schnall, Eliezer; Woods, Nancy F.; Cochrane, Barbara B.; Garcia, Lorena; Hingle, Melanie; Post, Stephen; Seguin, Rebecca; Tindle, Hilary; Shrira, Amit

    2015-01-01

    We examined whether dispositional optimism relates to change in global quality of life (QOL) as a function of either chronological age or years to impending death. We used a sample of 2,096 deceased postmenopausal women from the Women's Health Initiative clinical trials who were enrolled in the 2005-2010 Extension Study and for whom at least 1…

  13. Optimal beamforming in MIMO systems with HPA nonlinearity

    KAUST Repository

    Qi, Jian

    2010-09-01

    In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems under the consideration of nonlinear high-power amplifiers (HPAs) are investigated. The optimal beamforming scheme, with the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely, parameters of nonlinear HPA, numbers of transmit and receive antennas, and modulation order of phase-shift keying (PSK), on performance. ©2010 IEEE.
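For a purely linear MIMO channel (i.e. ignoring the HPA nonlinearity this paper is actually about), the optimal transmit and combining vectors are the dominant singular vectors of the channel matrix. The sketch below shows that linear baseline; the paper's contribution is precisely how the weights change when the HPA nonlinearity is accounted for, which is not reproduced here.

```python
import numpy as np

def mrt_weights(H):
    """Optimal TB weights for a *linear* MIMO channel: the dominant singular
    vectors of H maximize post-combining SNR. The paper further adapts the
    weights to HPA nonlinearity, which this sketch ignores."""
    U, s, Vh = np.linalg.svd(H)
    w_t = Vh[0].conj()   # transmit beamforming weight vector (unit norm)
    w_r = U[:, 0]        # receive combining vector (unit norm)
    return w_t, w_r, s[0]

rng = np.random.default_rng(0)
# 4x4 Rayleigh fading channel: i.i.d. complex Gaussian entries, unit variance
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
w_t, w_r, gain = mrt_weights(H)
eff = abs(w_r.conj() @ H @ w_t)  # effective gain equals the top singular value
```

The effective scalar channel seen after beamforming and combining has gain equal to the largest singular value of H, which is what drives the SEP and outage expressions in the linear case.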

  14. Optimal beamforming in MIMO systems with HPA nonlinearity

    KAUST Repository

    Qi, Jian; Aissa, Sonia

    2010-01-01

    In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems under the consideration of nonlinear high-power amplifiers (HPAs) are investigated. The optimal beamforming scheme, with the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely, parameters of nonlinear HPA, numbers of transmit and receive antennas, and modulation order of phase-shift keying (PSK), on performance. ©2010 IEEE.

  15. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  16. Incremental passivity and output regulation for switched nonlinear systems

    Science.gov (United States)

    Pang, Hongbo; Zhao, Jun

    2017-10-01

    This paper studies incremental passivity and global output regulation for switched nonlinear systems, whose subsystems are not required to be incrementally passive. A concept of incremental passivity for switched systems is put forward. First, a switched system is rendered incrementally passive by the design of a state-dependent switching law. Second, the feedback incremental passification is achieved by the design of a state-dependent switching law and a set of state feedback controllers. Finally, we show that once the incremental passivity for switched nonlinear systems is assured, the output regulation problem is solved by the design of global nonlinear regulator controllers comprising two components: the steady-state control and the linear output feedback stabilising controllers, even though the problem for none of subsystems is solvable. Two examples are presented to illustrate the effectiveness of the proposed approach.

  17. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user’s position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; two more, also global, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user’s position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement, and the satellite geometry. The user’s position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. The UERE is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and DOP coefficients, and of the changes of DOP coefficients on different days, are presented in this paper.
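The error budget described above combines as follows: UERE is the root-sum-square of its components (assuming independent error sources), and the RMS position error is, to first order, DOP times UERE. The numbers below are illustrative, not from the paper.

```python
import math

def uere(ure, uee):
    """User Equivalent Range Error from its two components, combined
    root-sum-square on the assumption that the sources are independent."""
    return math.sqrt(ure ** 2 + uee ** 2)

def position_error_rms(uere_m, dop):
    """First-order rule of thumb: RMS position error = DOP x UERE."""
    return dop * uere_m

# Illustrative budget: URE 1.0 m, UEE 2.0 m, PDOP 2.5
err = position_error_rms(uere(1.0, 2.0), 2.5)
```

This factorization is why the same receiver can report very different position accuracy on different days: the UERE is roughly constant while the DOP varies with satellite geometry.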

  18. Identification of metabolic system parameters using global optimization methods

    Directory of Open Access Journals (Sweden)

    Gatzke Edward P

    2006-01-01

    Full Text Available Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters of which the values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.
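The objective a global (e.g. branch-and-bound) solver minimizes in this setting can be sketched as a sum-of-squares mismatch between simulated and observed trajectories. The two-state GMA system below is a toy illustration, far smaller than the 19-parameter yeast fermentation model in the paper, and forward Euler integration is used only for brevity.

```python
def gma_rates(x, params):
    """Right-hand side of a toy two-state GMA model:
        dx1/dt = a1 * x1^g11 - a2 * x1^g21 * x2^g22
        dx2/dt = a2 * x1^g21 * x2^g22 - a3 * x2^g32
    (an illustrative system, not the paper's S. cerevisiae model)."""
    a1, a2, a3, g11, g21, g22, g32 = params
    flux_in = a1 * x[0] ** g11
    flux_mid = a2 * x[0] ** g21 * x[1] ** g22
    flux_out = a3 * x[1] ** g32
    return [flux_in - flux_mid, flux_mid - flux_out]

def sse(params, times, observed, x0, dt=0.01):
    """Sum-of-squares mismatch between Euler-integrated trajectories and the
    measured time course -- the objective a global solver would minimize."""
    x, t, err, k = list(x0), 0.0, 0.0, 0
    while k < len(times):
        if t >= times[k]:
            err += sum((xi - oi) ** 2 for xi, oi in zip(x, observed[k]))
            k += 1
        dx = gma_rates(x, params)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        t += dt
    return err
```

Because the rate laws are products of power functions of the states, the SSE is nonconvex in the kinetic orders, which is exactly why deterministic global methods such as branch-and-bound are needed rather than local least-squares fitting.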

  19. Global optimal path planning of an autonomous vehicle for overtaking a moving obstacle

    Directory of Open Access Journals (Sweden)

    B. Mashadi

    Full Text Available In this paper, global optimal path planning of an autonomous vehicle for overtaking a moving obstacle is proposed. In this study, the autonomous vehicle overtakes a moving vehicle by performing a double lane-change maneuver after detecting it at a proper distance ahead. The optimal path of the vehicle for performing the lane-change maneuver is generated by a path planning program in which the sum of the lateral deviation of the vehicle from a reference path and the rate of the steering angle is minimized, while the lateral acceleration of the vehicle does not exceed a safe limit value. A nonlinear optimal control theory with the lateral vehicle dynamics equations and an inequality constraint on lateral acceleration is used to generate the path. The indirect approach for solving the optimal control problem is used, applying the calculus of variations and Pontryagin's Minimum Principle to obtain first-order necessary conditions for optimality. The optimal path is generated as a global optimal solution and can be used as the benchmark for paths generated by the local motion planning of autonomous vehicles. A full nonlinear vehicle model in CarSim software is used for path-following simulation by importing path data from the MATLAB code. The simulation results show that the generated path for the autonomous vehicle satisfies all vehicle dynamics constraints and hence is a suitable overtaking path for the following vehicle.

  20. Material discovery by combining stochastic surface walking global optimization with a neural network.

    Science.gov (United States)

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further proved by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.

  1. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    Energy Technology Data Exchange (ETDEWEB)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil ("Dunarea de Jos" University of Galati, 47, Domneasca, 800008-Galati, Romania)

    2010-02-15

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike the high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not modify the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, and the electrical power optimization is no longer equivalent to the mechanical power optimization. The system efficiency has been analyzed by using both the locus of the maxima of the mechanical power versus rotational speed characteristics and the locus of the maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation has been carried out by using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results indeed show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Some control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results. (author)

  2. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    International Nuclear Information System (INIS)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil

    2010-01-01

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike the high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not modify the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, and the electrical power optimization is no longer equivalent to the mechanical power optimization. The system efficiency has been analyzed by using both the locus of the maxima of the mechanical power versus rotational speed characteristics and the locus of the maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation has been carried out by using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results indeed show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Some control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results.

  3. On the Performance of Linear Decreasing Inertia Weight Particle Swarm Optimization for Global Optimization

    Science.gov (United States)

    Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka

    2013-01-01

    The linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer premature convergence in solving complex (multipeak) optimization problems, due to a lack of momentum for particles to perform exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits for computing the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted. PMID:24324383
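The two quantities the paper tunes can be written in a few lines. The 0.9 to 0.4 schedule is the classic LDIW setting from the PSO literature; the percentage-of-range velocity clamp mirrors the parameter the paper determines experimentally (the specific values below are illustrative).

```python
def ldiw(iteration, max_iter, w_start=0.9, w_end=0.4):
    """Linear decreasing inertia weight used by LDIW-PSO; 0.9 -> 0.4 is the
    classic setting from the PSO literature."""
    return w_start - (w_start - w_end) * iteration / max_iter

def velocity_limit(search_min, search_max, percent):
    """Velocity clamp as a percentage of the search-space range -- the
    parameter the paper tunes experimentally (value here is illustrative)."""
    return percent * (search_max - search_min)
```

Early iterations (weight near 0.9) favor exploration; late iterations (weight near 0.4) favor exploitation, and a well-chosen velocity clamp keeps enough late-stage momentum to avoid the premature convergence discussed above.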

  4. Neural-Fuzzy Digital Strategy of Continuous-Time Nonlinear Systems Using Adaptive Prediction and Random-Local-Optimization Design

    Directory of Open Access Journals (Sweden)

    Zhi-Ren Tsai

    2013-01-01

    Full Text Available A tracking problem, time-delay, uncertainty and stability analysis of a predictive control system are considered. The predictive control design is based on the input and output of a neural plant model (NPM), and a recursive fuzzy predictive tracker has scaling factors which limit the value zone of measured data and cause the tuned parameters to converge, in order to obtain a robust control performance. To further improve the control performance, the proposed random-local-optimization design (RLO) for a model/controller uses offline initialization to obtain a near-global optimal model/controller. Other issues are the considerations of modeling error, input-delay, sampling distortion, cost, greater flexibility, and highly reliable digital products of the model-based controller for the continuous-time (CT) nonlinear system. They are solved by a recommended two-stage control design with first-stage (offline) RLO and second-stage (online) adaptive steps. A theorizing method is then put forward to replace the sensitivity calculation, which reduces the calculation of Jacobian matrices in the back-propagation (BP) method. Finally, the feedforward input of reference signals helps the digital fuzzy controller improve the control performance, and the technique works to control the CT systems precisely.

  5. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm, as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism: its target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less prone than nonadaptive MCMC algorithms to becoming trapped in local energy minima. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  6. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
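    The bias this study quantifies is easy to reproduce numerically: with uninformative (permuted) labels every individual classifier errs around 50%, yet the minimum error over many variants is systematically lower. A small simulation, with independent random predictors standing in for the 124 classifier variants (an assumption; the real variants are correlated, which shrinks but does not remove the effect):

```python
import random

rng = random.Random(0)
n_samples, n_classifiers = 100, 124   # 124 variants, as in the study

# Uninformative setting: labels are coin flips and every "classifier variant"
# is an independent random predictor, so each one's expected error is 50%.
labels = [rng.randint(0, 1) for _ in range(n_samples)]

def error_rate(pred):
    return sum(p != y for p, y in zip(pred, labels)) / n_samples

errors = [error_rate([rng.randint(0, 1) for _ in range(n_samples)])
          for _ in range(n_classifiers)]

mean_err = sum(errors) / len(errors)  # ~0.5: the honest error level
min_err = min(errors)                 # clearly below 0.5: the optimistic
                                      # bias from a-posteriori selection
```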

  7. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  8. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  9. A controls engineering approach for analyzing airplane input-output characteristics

    Science.gov (United States)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
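    The steady-state part of the analysis rests on a standard identity: for x' = Ax + Bu with A stable, a constant input settles the state at x_ss = −A⁻¹Bu, so the steady-state output is y_ss = −CA⁻¹Bu. A dependency-free 2×2 sketch (the example matrices are invented, not the airplane model in the paper):

```python
def steady_state_gain(A, B, C):
    """For x' = Ax + Bu, y = Cx with A stable, a constant unit input settles
    at x_ss = -A^{-1} B, hence y_ss = -C A^{-1} B. The 2x2 inverse is done
    by hand to keep the sketch self-contained."""
    (a, b), (c, d) = A
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x_ss = [-(inv[0][0] * B[0] + inv[0][1] * B[1]),
            -(inv[1][0] * B[0] + inv[1][1] * B[1])]
    return C[0] * x_ss[0] + C[1] * x_ss[1]

# Two decoupled stable modes; the output reads the first state.
gain = steady_state_gain([[-1.0, 0.0], [0.0, -2.0]], [1.0, 1.0], [1.0, 0.0])
# gain == 1.0: a unit step input drives the first state to exactly 1
```

    Blending several inputs to optimize selected outputs, as the record describes, amounts to choosing u to shape the vector −CA⁻¹Bu.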

  10. Determination of global positioning system (GPS) receiver clock errors: impact on positioning accuracy

    International Nuclear Information System (INIS)

    Yeh, Ta-Kang; Hwang, Cheinway; Xu, Guochang; Wang, Chuan-Sheng; Lee, Chien-Chih

    2009-01-01

    Enhancing the positioning precision is the primary pursuit of global positioning system (GPS) users. To achieve this goal, most studies have focused on the relationship between GPS receiver clock errors and GPS positioning precision. This study utilizes undifferenced phase data to calculate GPS clock errors and compares them directly with the frequency of a cesium clock, to verify the clock errors estimated by the method used in this paper. The frequency stability calculated in this paper (the indirect method) and that measured by the National Standard Time and Frequency Laboratory (NSTFL) of Taiwan (the direct method) match to 1.5 × 10⁻¹² (the value from this study was smaller than that from NSTFL), suggesting that the proposed technique has reached a certain level of quality. The built-in quartz clocks in the GPS receivers yield relative frequency offsets that are 3–4 orders of magnitude higher than those of rubidium clocks. The frequency stability of the quartz clocks is on average two orders of magnitude worse than that of the rubidium clock. Using the rubidium clock instead of the quartz clock, the horizontal and vertical positioning accuracies were improved by 26–78% (0.6–3.6 mm) and 20–34% (1.3–3.0 mm), respectively, for a short baseline. These improvements are 7–25% (0.3–1.7 mm) and 11% (1.7 mm) for a long baseline. Our experiments show that the frequency stability of the clock, rather than the relative frequency offset, is the governing factor of positioning accuracy.
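    The distinction the study draws between relative frequency offset and frequency stability can be illustrated numerically with the two-sample (Allan) deviation, the standard stability measure. A sketch on synthetic fractional-frequency data (the noise levels are invented, chosen only to mimic the orders of magnitude discussed above):

```python
import math
import random

def allan_deviation(freq):
    """Two-sample (Allan) deviation of fractional-frequency data at the basic
    averaging time: sqrt(<(y[k+1] - y[k])^2> / 2). A constant frequency
    offset cancels in the differences, so stability is offset-independent."""
    diffs = [(b - a) ** 2 for a, b in zip(freq, freq[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# Synthetic series: a constant 1e-9 relative frequency offset (quartz-like)
# plus white frequency noise at the 1e-12 level (rubidium-like stability).
# The two quantities are set independently, as the study emphasizes.
rng = random.Random(0)
freq = [1e-9 + rng.gauss(0.0, 1e-12) for _ in range(10000)]

offset = sum(freq) / len(freq)     # ~1e-9: large offset...
stability = allan_deviation(freq)  # ~1e-12: ...yet good stability
```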

  11. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
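    The two ingredients that distinguish NAPSO from plain PSO, as described above, can be sketched in isolation (the function names and the exact replacement scheme are assumptions based on the abstract, not the authors' code):

```python
import math
import random

def sa_accept(old_f, new_f, temperature, rng):
    """Simulated-annealing acceptance: improvements are always kept, and a
    worse candidate is accepted with probability exp(-delta/T), which helps
    the swarm escape local optima while T is high."""
    if new_f <= old_f:
        return True
    return rng.random() < math.exp(-(new_f - old_f) / temperature)

def natural_selection(positions, fitnesses):
    """Natural selection: each iteration, the worst half of the swarm is
    replaced by copies of the best half (fitness = value to minimize)."""
    order = sorted(range(len(positions)), key=lambda i: fitnesses[i])
    half = len(order) // 2
    for bad, good in zip(order[half:], order[:half]):
        positions[bad] = positions[good][:]
        fitnesses[bad] = fitnesses[good]

rng = random.Random(0)
pos = [[0.0], [1.0], [2.0], [3.0]]
fit = [0.0, 1.0, 2.0, 3.0]
natural_selection(pos, fit)
# fit is now [0.0, 1.0, 0.0, 1.0]: the two worst particles were replaced
```

    In NAPSO these steps would run inside the PSO loop, with the temperature annealed over iterations.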

  12. Output control of da Vinci surgical system's surgical graspers.

    Science.gov (United States)

    Johnson, Paul J; Schmidt, David E; Duvvuri, Umamaheswar

    2014-01-01

    The number of robot-assisted surgeries performed with the da Vinci surgical system has increased significantly over the past decade. The articulating movements of the robotic surgical grasper are controlled by grip controls at the master console. The user interface has been implicated as one contributing factor in surgical grasping errors. The goal of our study was to characterize and evaluate the user interface of the da Vinci surgical system in controlling surgical graspers. An angular manipulator with force sensors was used to increment the grip control angle as grasper output angles were measured. Input force at the grip control was simultaneously measured throughout the range of motion. Pressure film was used to assess the maximum grasping force achievable with the endoscopic grasping tool. The da Vinci robot's grip control angular input has a nonproportional relationship with the grasper instrument output. The grip control mechanism presents an intrinsic resistant force to the surgeon's fingertips and provides no haptic feedback. The da Vinci Maryland graspers are capable of applying up to 5.1 MPa of local pressure. The angular and force input at the grip control of the da Vinci robot's surgical graspers is nonproportional to the grasper instrument's output. Understanding the true relationship of the grip control input to grasper instrument output may help surgeons understand how to better control the surgical graspers and promote fewer grasping errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the suspension conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  14. SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization

    International Nuclear Information System (INIS)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.; Xing, Lei

    2015-01-01

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to the problems of fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the suspension conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is

  15. Characterization of PV panel and global optimization of its model parameters using genetic algorithm

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2013-01-01

    Highlights: • Genetic Algorithm optimization ability was utilized to extract the parameters of a PV panel model. • The effect of solar radiation and temperature variations was taken into account in the fitness function evaluation. • We used Matlab-Simulink to simulate operation of the PV panel to validate results. • Different cases were analyzed to ascertain which of them gives more accurate results. • The accuracy and applicability of this approach as a valuable tool for PV modeling were clearly validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module, utilizing the optimization ability of a genetic algorithm, with the different parameters of the PV module being computed via this approach. Accurate modeling of any PV module depends on the values of these parameters, which are essential for any further studies concerning different PV applications; simulation, optimization and the design of hybrid systems that include PV are examples of these applications. The global optimization of the parameters and applicability over the entire range of solar radiation and a wide range of temperatures are achievable via this approach. The Manufacturer's Data Sheet information is used as a basis for the parameter optimization, with an average absolute error fitness function formulated, and a numerical iterative method used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them is more accurate. Other cases are also analyzed in this paper for the purpose of comparison. The Matlab–Simulink environment is used to simulate the operation of the PV module, depending on the extracted parameters. The results of the simulation are compared with the Data Sheet information, which is obtained via experimentation in order to validate the reliability of the approach. Three types of PV modules
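    The inner loop of such a parameter-extraction GA is the numerical solution of the implicit single-diode equation at each datasheet point, with the average absolute current error as fitness. A sketch of both pieces (the parameter values and the lumped n·Vt term below are illustrative assumptions, not values from the paper):

```python
import math

def diode_current(v, iph, i0, rs, rsh, n_vt, iters=50):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for I at terminal voltage V by Newton iteration; this is the numerical
    step required at every candidate parameter set."""
    i = iph  # photocurrent is a good starting guess on the current plateau
    for _ in range(iters):
        e = math.exp((v + i * rs) / n_vt)
        g = iph - i0 * (e - 1.0) - (v + i * rs) / rsh - i
        dg = -i0 * (rs / n_vt) * e - rs / rsh - 1.0
        i -= g / dg
    return i

def fitness(params, points):
    """Average absolute current error over datasheet (V, I) points: the
    quantity the genetic algorithm minimizes."""
    iph, i0, rs, rsh, n_vt = params
    return sum(abs(diode_current(v, iph, i0, rs, rsh, n_vt) - i_meas)
               for v, i_meas in points) / len(points)

# Synthetic "datasheet" generated from known parameters (Iph, I0, Rs, Rsh, n*Vt):
true = (5.0, 1e-9, 0.01, 100.0, 1.5)
points = [(v, diode_current(v, *true)) for v in (0.0, 10.0, 20.0, 28.0)]
# fitness(true, points) is 0 by construction; perturbed parameters score worse.
```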

  16. QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.

    Science.gov (United States)

    Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon

    2012-01-01

    Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs the iterated local search global optimizer for global optimization, achieving a significantly improved speed and better accuracy of binding mode prediction compared to its predecessor, AutoDock 4. In this paper, we propose a further improvement to the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times, with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while keeping the optimal scores mostly identical. Using our heuristics, a larger number of different ligands can be quickly screened against a given receptor within the same time frame.

  17. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

    Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors, in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10⁻⁴ for 20% faster gates.

  18. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks.

  19. CO{sub 2} emissions, energy usage, and output in Central America

    Energy Technology Data Exchange (ETDEWEB)

    Apergis, Nicholas [Department of Banking and Financial Management, University of Piraeus, Karaoli and Dimitriou 80, Piraeus, ATTIKI 18534 (Greece); Payne, James E. [College of Arts and Sciences, Illinois State University, Campus Box 4100, Normal, IL 61790-4100 (United States)

    2009-08-15

    This study extends the recent work of Ang (2007) [Ang, J.B., 2007. CO{sub 2} emissions, energy consumption, and output in France. Energy Policy 35, 4772-4778] in examining the causal relationship between carbon dioxide emissions, energy consumption, and output within a panel vector error correction model for six Central American countries over the period 1971-2004. In long-run equilibrium energy consumption has a positive and statistically significant impact on emissions while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and emissions. (author)

  20. Optimization of recurrent neural networks for time series modeling

    DEFF Research Database (Denmark)

    Pedersen, Morten With

    1997-01-01

    The present thesis is about optimization of recurrent neural networks applied to time series modeling. In particular, fully recurrent networks are considered, working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to prediction of discrete time… series. The overall objectives are to improve training by application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are: 1. The problem of training recurrent networks is analyzed from a numerical… of solution obtained as well as computation time required. 3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability. 4. The viability of pruning recurrent networks by the Optimal…

  1. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
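    The unifying idea can be sketched as a single mutation expression that blends best-guided and random-difference terms, so that particular coefficient choices recover the classic best/1 and rand/1 strategies. The expression and the randomized control parameters below are a loose reading of the abstract, not the authors' exact formula or self-adaptation scheme:

```python
import random

def unified_de(f, bounds, pop_size=20, gens=150, seed=2):
    """Minimize f with a DE variant whose single mutation expression blends
    'best' and 'rand' strategies. F1, F2, F3 are redrawn per trial as a crude
    stand-in for the paper's self-adaptive parameter evolution."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            F1, F2 = rng.random(), rng.random()
            F3, CR = 0.5 + 0.3 * rng.random(), 0.9
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # ensure at least one mutated gene
            trial = pop[i][:]
            for d in range(dim):
                if d == jrand or rng.random() < CR:
                    # one expression unifying best/1 (F1 term) and rand/1
                    # (F2, F3 terms) mutations
                    v = (pop[i][d]
                         + F1 * (best[d] - pop[i][d])
                         + F2 * (pop[r1][d] - pop[i][d])
                         + F3 * (pop[r2][d] - pop[r3][d]))
                    trial[d] = max(bounds[d][0], min(bounds[d][1], v))
            ft = f(trial)
            if ft <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)

best_f = unified_de(lambda p: sum(u * u for u in p), [(-5.0, 5.0)] * 3)
```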

  2. Performance optimization of low-temperature power generation by supercritical ORCs (organic Rankine cycles) using low GWP (global warming potential) working fluids

    International Nuclear Information System (INIS)

    Le, Van Long; Feidt, Michel; Kheiri, Abdelhamid; Pelloux-Prayer, Sandrine

    2014-01-01

    This paper presents system efficiency optimization scenarios of basic and regenerative supercritical ORCs (organic Rankine cycles) using low-GWP (global warming potential) organic compounds as working fluids. A more common refrigerant, i.e. R134a, was also employed for comparison. Hot water at 150 °C and 5 bar is used to simulate the heat source medium. Power optimization was equally performed for the basic configuration of the supercritical ORC. Thermodynamic performance comparison of supercritical ORCs using different working fluids was achieved by a ranking method and an exergy analysis method. The highest optimal efficiency of the system (η_sys) is always obtained with R152a in both basic (11.6%) and regenerative (13.1%) configurations. The highest value of optimum electrical power output (4.1 kW) is found with R1234ze. By using the ranking method and considering the low-GWP criterion, the best working fluids for system efficiency optimization of the basic and regenerative cycles are R32 and R152a, respectively. The best working fluid for net electrical power optimization of the basic cycle is R1234ze. Although CO2 has many desirable environmental and safety properties (e.g. zero ODP (Ozone Depletion Potential), ultra-low GWP, non-toxicity, non-flammability, etc.), the worst thermodynamic performance is always found with the cycle using this compound as working fluid. - Highlights: • Performance optimizations were carried out for the supercritical ORCs using low-GWP working fluids. • Heat regeneration was used to improve the system efficiency of the supercritical ORC. • Thermodynamic performances of supercritical ORCs at the optima were evaluated by ranking method and exergy analysis

  3. Optimizing rice yields while minimizing yield-scaled global warming potential.

    Science.gov (United States)

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

    To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
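    The yield-scaling arithmetic behind the study is simple enough to show directly. A sketch using IPCC AR4 100-year warming potentials (an assumption; the paper may use slightly different factors), with invented emission and yield numbers that reproduce the qualitative pattern described above:

```python
# 100-year global warming potentials, IPCC AR4 values (assumed here).
GWP_CH4, GWP_N2O = 25, 298

def yield_scaled_gwp(ch4_kg_ha, n2o_kg_ha, grain_yield_t_ha):
    """Area-scaled GWP (kg CO2-eq per ha) divided by grain yield (t per ha)
    gives yield-scaled GWP in kg CO2-eq per tonne of grain."""
    area_gwp = ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O
    return area_gwp / grain_yield_t_ha

# Illustrative (invented) numbers matching the paper's pattern: N addition
# leaves CH4 roughly unchanged and raises N2O, but raises yield more, so
# the yield-scaled total falls even though the area-scaled total rises.
no_n  = yield_scaled_gwp(ch4_kg_ha=150.0, n2o_kg_ha=0.2, grain_yield_t_ha=5.0)
opt_n = yield_scaled_gwp(ch4_kg_ha=150.0, n2o_kg_ha=0.5, grain_yield_t_ha=8.0)
# no_n ≈ 762, opt_n ≈ 487 kg CO2-eq per tonne of grain
```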

  4. Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay.

    Science.gov (United States)

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-01-01

    An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and squared controller output for a networked control system (NCS). The tuning is attempted for a higher order and a time delay system using two stochastic algorithms viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO) and the closed loop performances are compared. The paper shows that random variation in network delay can be handled efficiently with fuzzy logic based PID controllers over conventional PID controllers. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
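    The cost being minimized can be made concrete. Below is a sketch of an ITAE-plus-squared-control-effort objective evaluated on a stand-in plant (a first-order lag with input dead time approximating the network delay; the paper's plant is higher-order, and the GA/PSO tuner is replaced here by a crude grid search):

```python
def itae_cost(Kp, Ki, Kd, delay_steps=20, dt=0.01, T=5.0):
    """Closed-loop cost J = sum(t*|e|*dt) + w*sum(u^2*dt) for a unit-step
    setpoint on the plant y' = -y + u(t - L), where L = delay_steps*dt
    models the network dead time."""
    w = 1e-4                       # weight on control effort (assumed)
    y, integ, prev_e = 0.0, 0.0, 1.0
    buf = [0.0] * delay_steps      # FIFO buffer implements the input delay
    J = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = Kp * e + Ki * integ + Kd * deriv
        buf.append(u)
        u_delayed = buf.pop(0)
        y += dt * (-y + u_delayed)  # forward-Euler plant update
        J += t * abs(e) * dt + w * u * u * dt
    return J

# Stand-in for the stochastic optimizer: pick the best (Kp, Ki) on a grid.
best = min((itae_cost(kp, ki, 0.0), kp, ki)
           for kp in (0.5, 1.0, 2.0) for ki in (0.5, 1.0, 2.0))
```

    In the paper, GA or PSO searches this kind of landscape instead of a grid, and the fuzzy PID adds a rule-base between the error signals and the gains.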

  5. Global impulsive exponential synchronization of stochastic perturbed chaotic delayed neural networks

    International Nuclear Information System (INIS)

    Hua-Guang, Zhang; Tie-Dong, Ma; Jie, Fu; Shao-Cheng, Tong

    2009-01-01

    In this paper, the global impulsive exponential synchronization problem of a class of chaotic delayed neural networks (DNNs) with stochastic perturbation is studied. Based on the Lyapunov stability theory, a stochastic analysis approach and an efficient impulsive delay differential inequality, some new exponential synchronization criteria expressed in the form of linear matrix inequalities (LMIs) are derived. The designed impulsive controller not only can globally exponentially stabilize the error dynamics in mean square, but also can control the exponential synchronization rate. Furthermore, to estimate the stable region of the synchronization error dynamics, a novel optimization control algorithm is proposed, which can effectively deal with the minimization problem with two nonlinear terms coexisting in the LMIs. Simulation results finally demonstrate the effectiveness of the proposed method.

  6. Adaptive Neural Output Feedback Control for Uncertain Robot Manipulators with Input Saturation

    Directory of Open Access Journals (Sweden)

    Rong Mei

    2017-01-01

    Full Text Available This paper presents an adaptive neural output feedback control scheme for uncertain robot manipulators with input saturation using the radial basis function neural network (RBFNN and disturbance observer. First, the RBFNN is used to approximate the system uncertainty, and the unknown approximation error of the RBFNN and the time-varying unknown external disturbance of robot manipulators are integrated as a compounded disturbance. Then, the state observer and the disturbance observer are proposed to estimate the unmeasured system state and the unknown compounded disturbance based on RBFNN. At the same time, the adaptation technique is employed to tackle the control input saturation problem. Utilizing the estimate outputs of the RBFNN, the state observer, and the disturbance observer, the adaptive neural output feedback control scheme is developed for robot manipulators using the backstepping technique. The convergence of all closed-loop signals is rigorously proved via Lyapunov analysis and the asymptotically convergent tracking error is obtained under the integrated effect of the system uncertainty, the unmeasured system state, the unknown external disturbance, and the input saturation. Finally, numerical simulation results are presented to illustrate the effectiveness of the proposed adaptive neural output feedback control scheme for uncertain robot manipulators.

  7. Analogue particle identifier and test unit for automatic measuring of errors

    International Nuclear Information System (INIS)

    Boden, A.; Lauch, J.

    1979-04-01

    A high-accuracy analogue particle identifier is described. The unit is used for particle identification or for correcting experiment-related errors in magnetic spectrometers. Signals which are proportional to the energy, the time-of-flight, or the position of absorption of the particles are supplied to an analogue computation circuit (multifunction converter). Three computation functions are available for different applications. The output of the identifier produces correction signals or pulses whose amplitudes are proportional to the mass of the particles. Particle identification and data correction can be optimized by the adjustment of variable parameters. An automatic test unit has been developed for adjustment and routine checking of particle identifiers. The computation functions can be tested by this unit with an accuracy of 1%. (orig.) [de

  8. Optimization of power output and study of electron beam energy spread in a Free Electron Laser oscillator

    International Nuclear Information System (INIS)

    Abramovich, A.; Pinhasi, Y.; Yahalom, A.; Bar-Lev, D.; Efimov, S.; Gover, A.

    2001-01-01

    Design of a multi-stage depressed collector for efficient operation of a Free Electron Laser (FEL) oscillator requires knowledge of the electron beam energy distribution. This knowledge is necessary to determine the voltages of the depressed collector electrodes that optimize the collection efficiency and overall energy conversion efficiency of the FEL. The energy spread in the electron beam is due to interaction in the wiggler region, as electrons enter the interaction region at different phases relative to the EM wave. This interaction can be simulated well by a three-dimensional simulation code such as FEL3D. The main adjustable parameters that determine the electron beam energy spread after interaction are the e-beam current, the initial beam energy, the quality factor of the resonator, and the out-coupling coefficient. Using FEL3D, we study the influence of these parameters on the available radiation power and on the electron beam energy distribution at the undulator exit. Simulations performed for I = 1.5 A, E = 1.4 MeV, L = 20% (internal loss factor) showed that the highest radiated output power and smallest energy spread are attained for an output coupler transmission coefficient T_m ≅ 30%.

  9. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    Science.gov (United States)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities of the oil and gas complex are highly relevant. This is especially true for facilities where a loss of DE accuracy would inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error-detection system according to a validation criterion is solved. Known methods for solving such problems have exponential computational complexity. Thus, with a view to reducing the time needed to solve the problem, an adaptive bionic algorithm is constructed for the validation criterion. Bionic algorithms (BA) have proven effective in solving optimization problems. Their advantages include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining them [1].

  10. A system-theory-based model for monthly river runoff forecasting: model calibration and optimization

    Directory of Open Access Journals (Sweden)

    Wu Jianhua

    2014-03-01

    Full Text Available River runoff is not only a crucial part of the global water cycle, but it is also an important source for hydropower and an essential element of water balance. This study presents a system-theory-based model for river runoff forecasting taking the Hailiutu River as a case study. The forecasting model, designed for the Hailiutu watershed, was calibrated and verified by long-term precipitation observation data and groundwater exploitation data from the study area. Additionally, frequency analysis, taken as an optimization technique, was applied to improve prediction accuracy. Following model optimization, the overall relative prediction errors are below 10%. The system-theory-based prediction model is applicable to river runoff forecasting, and following optimization by frequency analysis, the prediction error is acceptable.

  11. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies on a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors play a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular, in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
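    The separation of timing error from amplitude error can be seen in a toy example (series and lag invented; the paper's step-pattern alignment is far more general than the single rigid shift used here):

    ```python
    import numpy as np

    t = np.arange(200)
    actual = np.sin(2.0 * np.pi * t / 50.0)
    forecast = np.sin(2.0 * np.pi * (t - 5) / 50.0)   # same shape, 5 steps late

    def mae(a, b):
        return float(np.mean(np.abs(a - b)))

    raw_mae = mae(actual, forecast)                    # large: purely a timing error

    # Brute-force search for the time shift that minimizes MAE; the minimizing
    # shift plays the role of a (rigid) temporal distortion index.
    best_shift, best_mae = 0, raw_mae
    for s in range(-10, 11):
        shifted = np.roll(forecast, -s)
        m = mae(actual[10:-10], shifted[10:-10])       # trim ends to ignore wrap-around
        if m < best_mae:
            best_shift, best_mae = s, m
    ```

    The search recovers the 5-step lag and a near-zero residual MAE, i.e. almost all of the raw error was temporal distortion rather than amplitude error.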

  12. Spatiotemporal radiotherapy planning using a global optimization approach

    Science.gov (United States)

    Adibi, Ali; Salari, Ehsan

    2018-02-01

    This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
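    For reference, the BED of n equal fractions of size d under the linear-quadratic model is BED = n·d·(1 + d/(α/β)), and an equivalent uniform value over a structure is commonly formed as a generalized mean of voxel BEDs. A minimal sketch; the exponent a, the α/β ratio, and the voxel values are invented for illustration:

    ```python
    import numpy as np

    def bed(dose_per_fraction, n_fractions, alpha_beta):
        """Biologically effective dose of n equal fractions of size d (LQ model)."""
        d = np.asarray(dose_per_fraction, dtype=float)
        return n_fractions * d * (1.0 + d / alpha_beta)

    def eubed(bed_voxels, a):
        """Generalized-mean equivalent uniform BED over a structure's voxels."""
        b = np.asarray(bed_voxels, dtype=float)
        return float(np.mean(b ** a) ** (1.0 / a))

    uniform = float(bed(2.0, 30, alpha_beta=10.0))     # 30 x 2 Gy: 72 Gy BED

    # A negative exponent penalizes cold spots in the target: the equivalent
    # uniform value is pulled toward the lowest voxel BED.
    target_eubed = eubed([70.0, 72.0, 74.0], a=-10.0)
    ```

    This is why a spatiotemporally varying plan must raise the EUBED, not just the mean BED, to count as a therapeutic gain in the target.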

  13. Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.

    Science.gov (United States)

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.

  14. Input-output analysis of high-speed axisymmetric isothermal jet noise

    Science.gov (United States)

    Jeun, Jinah; Nichols, Joseph W.; Jovanović, Mihailo R.

    2016-04-01

    We use input-output analysis to predict and understand the aeroacoustics of high-speed isothermal turbulent jets. We consider axisymmetric linear perturbations about Reynolds-averaged Navier-Stokes solutions of ideally expanded turbulent jets with jet Mach numbers from 0.6 to 1.8. For supersonic jets, the optimal response closely resembles a wavepacket of the kind predicted by the parabolized stability equations (PSE), and this mode dominates the response. For subsonic jets, however, the singular values indicate that the contributions of sub-optimal modes to noise generation are nearly equal to that of the optimal mode, explaining why the PSE do not fully capture the far-field sound in this case. Furthermore, high-fidelity large eddy simulation (LES) is used to assess the prevalence of sub-optimal modes in the unsteady data. By projecting LES source term data onto input modes and the LES acoustic far-field onto output modes, we demonstrate that sub-optimal modes of both types are physically relevant.

  15. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    Science.gov (United States)

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation of error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001); the differences between the mean optimal TE values of all 3 ROIs were statistically significant. The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
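    The statement that the optimal TE is a weighted average of T2* values during bolus passage can be sketched numerically. Everything below is invented for illustration: the tissue T2* curve, and the choice of weighting each T2* sample by its contrast-induced change in relaxation rate. The paper derives its weights from a full propagation-of-error analysis.

    ```python
    import numpy as np

    t = np.linspace(0.0, 60.0, 121)                    # seconds
    bolus = np.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)     # unit-height bolus shape
    t2star = 50.0 - 30.0 * bolus                       # ms: 50 at baseline, 20 at peak

    # Weight each T2* sample by the contrast-induced change in relaxation
    # rate, delta-R2*(t) = 1/T2*(t) - 1/T2*(baseline).
    delta_r2s = 1.0 / t2star - 1.0 / 50.0              # 1/ms
    te_opt = float(np.sum(t2star * delta_r2s) / np.sum(delta_r2s))
    ```

    The weighted average lands between the peak and baseline T2*, biased toward the peak, which is at least consistent with the ~30 ms tumor optimum reported above.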

  16. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.

  17. Global optimization in the adaptive assay of subterranean uranium nodules

    International Nuclear Information System (INIS)

    Vulkan, U.; Ben-Haim, Y.

    1989-01-01

    An adaptive assay is one in which the design of the assay system is modified during operation in response to measurements obtained on-line. The present work has two aims: to design an adaptive system for borehole assay of isolated subterranean uranium nodules, and to investigate globality of optimal design in adaptive assay. It is shown experimentally that reasonably accurate estimates of uranium mass are obtained for a wide range of nodule shapes, on the basis of an adaptive assay system based on a simple geomorphological model. Furthermore, two concepts are identified which underlie the optimal design of the assay system. The adaptive assay approach shows promise for successful measurement of spatially random material in many geophysical applications. (author)

  18. Sequential metabolic phases as a means to optimize cellular output in a constant environment.

    Science.gov (United States)

    Palinkas, Aljoscha; Bulik, Sascha; Bockmayr, Alexander; Holzhütter, Hermann-Georg

    2015-01-01

    Temporal changes of gene expression are a well-known regulatory feature of all cells, which is commonly perceived as a strategy to adapt the proteome to varying external conditions. However, temporal (rhythmic and non-rhythmic) changes of gene expression are also observed under virtually constant external conditions. Here we hypothesize that such changes are a means to render the synthesis of the metabolic output more efficient than under conditions of constant gene activities. In order to substantiate this hypothesis, we used a flux-balance model of the cellular metabolism. The total time span spent on the production of a given set of target metabolites was split into a series of shorter time intervals (metabolic phases) during which only selected groups of metabolic genes are active. The related flux distributions were calculated under the constraint that genes can be either active or inactive whereby the amount of protein related to an active gene is only controlled by the number of active genes: the lower the number of active genes the more protein can be allocated to the enzymes carrying non-zero fluxes. This concept of a predominantly protein-limited efficiency of gene expression clearly differs from other concepts resting on the assumption of an optimal gene regulation capable of allocating to all enzymes and transporters just that fraction of protein necessary to prevent rate limitation. Applying this concept to a simplified metabolic network of the central carbon metabolism with glucose or lactate as alternative substrates, we demonstrate that switching between optimally chosen stationary flux modes comprising different sets of active genes allows producing a demanded amount of target metabolites in a significantly shorter time than by a single optimal flux mode at fixed gene activities. Our model-based findings suggest that temporal expression of metabolic genes can be advantageous even under conditions of constant external substrate supply.

  19. Multiobjective Optimization of a Counterrotating Type Pump-Turbine Unit Operated at Turbine Mode

    Directory of Open Access Journals (Sweden)

    Jin-Hyuk Kim

    2014-05-01

    Full Text Available A multiobjective optimization for improving the turbine output and efficiency of a counterrotating type pump-turbine unit operated at turbine mode was carried out in this work. The blade geometry of both the runners was optimized using a hybrid multiobjective evolutionary algorithm coupled with a surrogate model. Three-dimensional Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model were discretized by finite volume approximations and solved on hexahedral grids to analyze the flow in the pump-turbine unit. As major hydrodynamic performance parameters, the turbine output and efficiency were selected as objective functions with two design variables related to the hub profiles of both the runner blades. These objectives were numerically assessed at twelve design points selected by Latin hypercube sampling in the design space. Response surface approximation models for the objectives were constructed based on the objective function values at the design points. A fast nondominated sorting genetic algorithm for the local search coupled with the response surface approximation models was applied to determine the global Pareto-optimal solutions. The trade-off between the two objectives was determined and described with respect to the Pareto-optimal solutions. The results of this work showed that the turbine outputs and efficiencies of optimized pump-turbine units were simultaneously improved in comparison to the reference unit.

  20. Optimization of Training Signal Transmission for Estimating MIMO Channel under Antenna Mutual Coupling Conditions

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2010-01-01

    Full Text Available This paper reports investigations on the effect of antenna mutual coupling on performance of training-based Multiple-Input Multiple-Output (MIMO channel estimation. The influence of mutual coupling is assessed for two training-based channel estimation methods, Scaled Least Square (SLS and Minimum Mean Square Error (MMSE. It is shown that the accuracy of MIMO channel estimation is governed by the sum of eigenvalues of channel correlation matrix which in turn is influenced by the mutual coupling in transmitting and receiving array antennas. A water-filling-based procedure is proposed to optimize the training signal transmission to minimize the MIMO channel estimation errors.
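    The water-filling step can be sketched with the textbook capacity-style allocation. The paper's objective is channel-estimation MSE rather than capacity, and the eigenvalue gains below are invented, but the mechanics of finding the water level are the same:

    ```python
    import numpy as np

    def waterfill(gains, total_power):
        """Allocate total_power over modes with gains g_i by water-filling:
        p_i = max(mu - 1/g_i, 0), with mu chosen so powers sum to the budget."""
        gains = np.asarray(gains, dtype=float)
        order = np.argsort(gains)[::-1]                # strongest modes first
        g = gains[order]
        for k in range(len(g), 0, -1):
            mu = (total_power + np.sum(1.0 / g[:k])) / k   # water level, k modes on
            p = mu - 1.0 / g[:k]
            if p[-1] >= 0.0:                           # weakest active mode feasible?
                powers = np.zeros(len(g))
                powers[order[:k]] = p
                return powers
        return np.zeros(len(gains))

    p = waterfill([2.0, 1.0, 0.1], total_power=1.0)    # weakest mode gets nothing
    ```

    With these gains the 0.1 mode lies below the water level and receives zero power, while the two strong modes split the budget unevenly in favor of the stronger one.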

  1. Subjective test of class D amplifiers without output filter

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.; Fenger, Lars M.

    2004-01-01

    This paper presents the results of subjective listening tests designed to determine whether the output filter on class D amplifiers used in active loudspeakers can be omitted without audible errors occurring. The frequency range of the amplifiers was limited to 0-3 kHz corresponding to a woofer...

  2. Global optimization numerical strategies for rate-independent processes

    Czech Academy of Sciences Publication Activity Database

    Benešová, Barbora

    2011-01-01

    Roč. 50, č. 2 (2011), s. 197-220 ISSN 0925-5001 R&D Projects: GA ČR GAP201/10/0357 Grant - others:GA MŠk(CZ) LC06052 Program:LC Institutional research plan: CEZ:AV0Z20760514 Keywords : rate-independent processes * numerical global optimization * energy estimates based algorithm Subject RIV: BA - General Mathematics Impact factor: 1.196, year: 2011

  3. Quadrature Errors and DC Offsets Calibration of Analog Complex Cross-Correlator for Interferometric Passive Millimeter-Wave Imaging Applications

    Directory of Open Access Journals (Sweden)

    Chao Wang

    2018-02-01

    Full Text Available The design and calibration of the cross-correlator are crucial issues for interferometric imaging systems. In this paper, an analog complex cross-correlator with output DC offset and amplitude calibration capability is proposed for interferometric passive millimeter-wave security sensing applications. By employing digital potentiometers in the low-frequency amplification circuits of the correlator, the output characteristics of the correlator can be digitally controlled. A measurement system and a corresponding calibration scheme were developed in order to eliminate the output DC offsets and the quadrature amplitude error between the in-phase and the quadrature correlating subunits of the complex correlator. By using vector modulators to provide phase-controllable correlated noise signals, the measurement system was capable of obtaining the output correlation circle of the correlator. When injected with −18 dBm correlated noise signals, the calibrated quadrature amplitude error was 0.041 dB and the calibrated DC offsets were under 26 mV, which is only 7.1% of the uncalibrated value. Furthermore, we also describe a quadrature error calibration algorithm to estimate the quadrature phase error and improve the output phase accuracy of the correlator. After applying this calibration, we were able to reduce the output phase error of the correlator to 0.3°.
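    The correlation-circle calibration can be mimicked numerically. All impairment values below are invented: DC offsets are read off the circle's center, the quadrature amplitude error off the axis spans, and the quadrature phase error off the residual I·Q correlation after offset and gain correction.

    ```python
    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)  # swept phase

    # Measured correlator outputs with invented impairments: DC offsets
    # (26 mV, -15 mV), 1.05 quadrature gain imbalance, 3 deg phase error.
    phase_err = np.deg2rad(3.0)
    i_out = 0.026 + np.cos(theta)
    q_out = -0.015 + 1.05 * np.sin(theta + phase_err)

    # DC offsets: center of the measured correlation circle.
    i_dc = (i_out.max() + i_out.min()) / 2
    q_dc = (q_out.max() + q_out.min()) / 2

    # Quadrature amplitude error: ratio of the circle's axis spans.
    gain = (q_out.max() - q_out.min()) / (i_out.max() - i_out.min())

    i_cal = i_out - i_dc
    q_cal = (q_out - q_dc) / gain

    # Quadrature phase error: mean(I*Q) over the circle equals sin(phi)/2.
    phi_est = float(np.arcsin(2.0 * np.mean(i_cal * q_cal)))
    ```

    The recovered offsets, gain, and phase error match the injected impairments, which is the same information the measurement system extracts from the correlation circle before correction.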

  4. Intelligent optimization of common water treatment plant for the removal of organic carbon

    International Nuclear Information System (INIS)

    Ahmadzadeh, T.; Mehrdadi, N.; Ardestani, M.; Baghvand, A.

    2016-01-01

    Intelligent model optimization is a key factor in the improvement of water treatment. In the current study, we applied artificial neural network (ANN) modelling to the optimization of the coagulation and flocculation processes in order to achieve sufficient control of water quality in terms of the total organic carbon parameter. The ANN consisted of a multilayer feed-forward structure with a back-propagation learning algorithm, with an output layer giving the ferric chloride and cationic polymer dosages. The results were simultaneously compared with a nonlinear multiple regression model. The model validation phase was performed using 94 unknown samples, for which the prediction results were in good agreement with the observed values. Analysis of the results showed determination coefficients of 0.85 for the cationic polymer model and 0.97 for the ferric chloride model. The mean absolute percentage error and root mean square error were calculated as 5.8% and 0.96 for the polymer model and 3.1% and 1.97 for the ferric chloride model, respectively. According to the results, artificial neural networks proved to be very promising for the optimization of water treatment processes.
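    The two reported error measures are standard. For reference, minimal implementations with invented observed and predicted values:

    ```python
    import numpy as np

    def mape(y_true, y_pred):
        """Mean absolute percentage error, in percent."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

    def rmse(y_true, y_pred):
        """Root mean square error, in the units of the data."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    observed = [10.0, 20.0, 30.0, 40.0]
    predicted = [11.0, 19.0, 33.0, 38.0]
    ```

    Note that MAPE is undefined where the observed value is zero, which is one reason RMSE is usually reported alongside it.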

  5. Analytical design of proportional-integral controllers for the optimal control of first-order processes with operational constraints

    Energy Technology Data Exchange (ETDEWEB)

    Thu, Hien Cao Thi; Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of)

    2013-12-15

    A novel analytical design method of industrial proportional-integral (PI) controllers was developed for the optimal control of first-order processes with operational constraints. The control objective was to minimize a weighted sum of the controlled variable error and the rate of change in the manipulated variable under the maximum allowable limits in the controlled variable, manipulated variable and the rate of change in the manipulated variable. The constrained optimal servo control problem was converted to an unconstrained optimization to obtain an analytical tuning formula. A practical shortcut procedure for obtaining optimal PI parameters was provided based on graphical analysis of global optimality. The proposed PI controller was found to guarantee global optimum and deal explicitly with the three important operational constraints.

  6. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models.

    Science.gov (United States)

    Pozo, Carlos; Marín-Sanguino, Alberto; Alves, Rui; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Sorribas, Albert

    2011-08-25

    Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task.
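    The flavor of the power-law representation can be seen on a single Michaelis-Menten term (values invented). Note that the paper's recasting into GMA form is exact via auxiliary variables; the sketch below is only the local power-law approximation that motivates the canonical form.

    ```python
    import numpy as np

    def mm_rate(x, V=1.0, K=0.5):
        """Michaelis-Menten rate law, a simple saturable kinetic."""
        return V * x / (K + x)

    def local_power_law(x0, V=1.0, K=0.5):
        """Local power-law representation v = a * x**g around x0.

        The kinetic order g is the log-log slope d ln v / d ln x at x0."""
        g = K / (K + x0)
        a = mm_rate(x0, V, K) / x0 ** g
        return a, g

    a, g = local_power_law(1.0)
    x = np.linspace(0.8, 1.2, 5)
    rel_err = float(np.max(np.abs(a * x ** g - mm_rate(x)) / mm_rate(x)))
    ```

    Within ±20% of the operating point the power law tracks the saturable kinetic to well under 1% relative error, which is what makes GMA-style representations workable near a steady state.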

  7. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    Directory of Open Access Journals (Sweden)

    Sorribas Albert

    2011-08-01

    Full Text Available Abstract Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task.

  8. Global-Local Analysis and Optimization of a Composite Civil Tilt-Rotor Wing

    Science.gov (United States)

    Rais-Rohani, Masound

    1999-01-01

    This report gives highlights of an investigation on the design and optimization of a thin composite wing box structure for a civil tilt-rotor aircraft. Two different concepts are considered for the cantilever wing: (a) a thin monolithic skin design, and (b) a thick sandwich skin design. Each concept is examined with three different skin ply patterns based on various combinations of 0, +/-45, and 90 degree plies. The global-local technique is used in the analysis and optimization of the six design models. The global analysis is based on a finite element model of the wing-pylon configuration, while the local analysis uses a uniformly supported plate representing a wing panel. Design allowables include those on vibration frequencies, panel buckling, and material strength. The design optimization problem is formulated as one of minimizing the structural weight subject to strength, stiffness, and dynamic constraints. Six different loading conditions based on three different flight modes are considered in the design optimization. The results of this investigation reveal that of all the loading conditions, the one corresponding to the rolling pull-out in the airplane mode is the most stringent. Also, the frequency constraints are found to drive the skin thickness limits, rendering the buckling constraints inactive. The optimum skin ply pattern for the monolithic skin concept is found to be ((0/±45/90/(0/90)_2)_s)_s, while for the sandwich skin concept the optimal ply pattern is found to be ((0/±45/90)_2s)_s.

  9. Relay Precoder Optimization in MIMO-Relay Networks With Imperfect CSI

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2011-11-01

    In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations. © 2011 IEEE.

  10. Optimal Halbach permanent magnet designs for maximally pulling and pushing nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Sarwar, A., E-mail: azeem@umd.edu [Fischell Department of Bioengineering, College Park, MD (United States); University of Maryland at College Park (United States); Nemirovski, A. [H. Milton Stewart School of Industrial and Systems Engineering (ISyE), Georgia Institute of Technology (United States); Shapiro, B. [Fischell Department of Bioengineering, College Park, MD (United States); Institute for Systems Research (United States); University of Maryland at College Park (United States)

    2012-03-15

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at a depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nanoparticles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm³ volume optimal Halbach design yields a 5× greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤1 T), size (≤2000 cm³), and number of elements (≤36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. - Highlights: ► Optimization methods presented to design Halbach arrays for drug targeting. ► The goal is to maximize forces on magnetic nanoparticles at deep tissue locations. ► The presented methods yield provably globally optimal Halbach

  11. Optimal Halbach permanent magnet designs for maximally pulling and pushing nanoparticles

    International Nuclear Information System (INIS)

    Sarwar, A.; Nemirovski, A.; Shapiro, B.

    2012-01-01

    Optimization methods are presented to design Halbach arrays to maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall off of magnetic fields and forces with distances from magnets has limited the depth of targeting. Creating stronger forces at a depth by optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods are based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in 2 and 3-dimensions, for maximal pull or push magnetic forces (stronger pull forces can collect nanoparticles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36 element 2000 cm³ volume optimal Halbach design yields a 5× greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤1 T), size (≤2000 cm³), and number of elements (≤36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. - Highlights: ► Optimization methods presented to design Halbach arrays for drug targeting. ► The goal is to maximize forces on magnetic nanoparticles at deep tissue locations. ► The presented methods yield provably globally optimal Halbach designs in 2D and 3D. ► These designs significantly outperform benchmark magnets of the same size and strength. ► These

  12. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an entire picture of the error management process that facilitates the four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and examples of items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the above proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  13. Minimization of the hole overcut and cylindricity errors during rotary ultrasonic drilling of Ti-6Al-4V

    Science.gov (United States)

    Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.

    2018-04-01

    Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been devoted to improving the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely the cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on the cylindricity and overcut errors. Later, regression models were developed to find the optimal set of input parameters that minimizes the cylindricity and overcut errors.
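
The regression step described above can be sketched in miniature: fit a response-surface model to (parameter, error) measurements and take its stationary point as the optimal setting. The sketch below is illustrative only; the single-variable quadratic model, the synthetic data, and all names and values are assumptions, not the study's actual four-parameter models.

```python
import random

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]          # sums of x^0..x^4
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    b = T[:]
    # Forward elimination (no pivoting; fine for this well-scaled data).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    # Back substitution.
    beta = [0.0] * 3
    for i in (2, 1, 0):
        beta[i] = (b[i] - sum(A[i][k] * beta[k] for k in range(i + 1, 3))) / A[i][i]
    return beta  # [a, b, c]

# Synthetic "overcut vs spindle speed" data (speed in krpm) with a true
# minimum placed at 3.0 krpm; noise stands in for measurement scatter.
random.seed(1)
speeds = [1.0 + 0.25 * i for i in range(17)]
overcut = [0.02 + 0.001 * (s - 3.0) ** 2 + random.gauss(0, 1e-4) for s in speeds]

a, b, c = fit_quadratic(speeds, overcut)
s_opt = -b / (2 * c)   # stationary point of the fitted response surface
print(round(s_opt, 2))
```

The same normal-equation machinery extends to several input factors; the fitted surface's stationary point (or a constrained search over it) then gives the parameter set that minimizes the predicted error.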

  14. Thermodynamic performance analysis and optimization of DMC (Dual Miller Cycle) cogeneration system by considering exergetic performance coefficient and total exergy output criteria

    International Nuclear Information System (INIS)

    Ust, Yasin; Arslan, Feyyaz; Ozsari, Ibrahim; Cakir, Mehmet

    2015-01-01

    Miller cycle engines are one of the popular engine concepts available for improving performance and reducing fuel consumption and NOₓ emissions. There are many research studies that investigated the modification of existing conventional engines for operation on a Miller cycle. In this context, a comparative performance analysis and optimization based on the exergetic performance criterion, total exergy output and exergy efficiency has been carried out for an irreversible Dual–Miller cycle cogeneration system having finite-rate heat transfer, heat leak and internal irreversibilities. The EPC (Exergetic Performance Coefficient) criterion is defined as the ratio of total exergy output to the loss rate of availability. The performance analysis has also been extended to the Otto–Miller and Diesel–Miller cogeneration cycles, which may be considered as two special cases of the Dual–Miller cycle. The effects of design parameters such as the compression ratio, pressure ratio, cut-off ratio, Miller cycle ratio, heat consumer temperature ratio, allocation ratio and the ratio of power to heat consumed have also been investigated. The results obtained from this paper will provide guidance for the design of Dual–Miller cycle cogeneration systems and can be used for selection of optimal design parameters. - Highlights: • A thermodynamic performance estimation tool for the DM cogeneration cycle is presented. • Using the model, the two special cases, the OM and DM cogeneration cycles, can be analyzed. • The effects of rₘ, ψ, χ₂ and R have been investigated. • The results evaluate exergy output and environmental aspects together.

  15. Predicting Output Power for Nearshore Wave Energy Harvesting

    Directory of Open Access Journals (Sweden)

    Henock Mamo Deberneh

    2018-04-01

    Full Text Available Energy harvested from a Wave Energy Converter (WEC) varies greatly with the location of its installation. Determining an optimal location that can result in maximum output power is therefore critical. In this paper, we present a novel approach to predicting the output power of a nearshore WEC by characterizing ocean waves using floating buoys. We monitored the movement of the buoys using an Arduino-based data collection module, including a gyro-accelerometer sensor and a wireless transceiver. The collected data were utilized to train and test prediction models. The models were developed using machine learning algorithms: support vector machines (SVM), random forests (RF) and artificial neural networks (ANN). The results of the experiments showed that measurements from the data collection module can yield a reliable predictor of output power. Furthermore, we found that the predictors work better when the regressors are combined with a classifier. The accuracy of the proposed prediction model suggests that it could be extremely useful both in locating optimal placements for wave energy harvesting plants and in designing the shape of the buoys used by them.

  16. Minimizing the symbol-error-rate for amplify-and-forward relaying systems using evolutionary algorithms

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2015-02-01

    In this paper, a new detector is proposed for an amplify-and-forward (AF) relaying system. The detector is designed to minimize the symbol-error-rate (SER) of the system. The SER surface is non-linear and may have multiple minima; therefore, designing an SER detector for cooperative communications becomes an optimization problem. Evolutionary algorithms have the capability to find the global minimum; therefore, particle swarm optimization (PSO) and differential evolution (DE) are exploited to solve this optimization problem. The performance of the proposed detectors is compared with that of conventional detectors such as the maximum likelihood (ML) and minimum mean square error (MMSE) detectors. In the simulation results, it can be observed that the SER performance of the proposed detectors is less than 2 dB away from that of the ML detector. Significant improvement in SER performance is also observed when comparing with the MMSE detector. The computational complexity of the proposed detector is much less than that of the ML and MMSE algorithms. Moreover, in contrast to the ML and MMSE detectors, the computational complexity of the proposed detectors increases linearly with respect to the number of relays.
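
The search strategy described above can be illustrated with a minimal PSO loop. The Rastrigin test function below merely stands in for a multimodal error surface; it is not the paper's actual SER objective, and the swarm parameters are conventional textbook values, not the authors' settings.

```python
import math
import random

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def pso(f, dim=2, n=40, iters=300, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.72, 1.49, 1.49                # constriction-style weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(rastrigin)
print(val)   # best objective value found by the swarm
```

A DE variant differs only in how candidate positions are generated (mutation and crossover of population members); the evaluate-and-keep-the-best structure is the same.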

  17. Analysis of the Influence of Compensation Capacitance Errors of a Wireless Power Transfer System with SS Topology

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2017-12-01

    Full Text Available In this study, in order to determine the accuracy of the compensation capacitances required to satisfy the output-characteristic requirements of a wireless power transfer (WPT) system, taking the series-series (SS) compensation structure as an example, the calculation formulas of the output characteristics, such as the power factor, output power, coil transfer efficiency, and capacitors' voltage stress, are given under the condition of incomplete compensation according to circuit theory. The influence of compensation capacitance errors on the output characteristics of the system is then analyzed. Taylor expansions of the theoretical formulas are carried out to simplify them. The degree of influence of the compensation capacitance errors on each output characteristic is then calculated from the simplified formulas. The reasonable error ranges of the compensation capacitances are then determined, during system design, from the requirements on the output characteristics of the system. Finally, the validity of the theoretical analysis and the simplified processing is verified through experiments. The proposed method provides practical guidance for engineering design, especially in mass production.
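
The sensitivity being analyzed can be illustrated with a simple series RLC branch driven at its nominal resonant frequency: a small capacitance error leaves a residual reactance that degrades the power factor. This is a hedged sketch, not the paper's SS two-coil model, and the component values are invented for illustration.

```python
import math

L = 100e-6      # coil inductance, H (assumed value)
R = 0.5         # coil series resistance, ohm (assumed value)
C_nom = 100e-9  # nominal compensation capacitance, F (assumed value)
w0 = 1.0 / math.sqrt(L * C_nom)   # nominal resonant frequency, rad/s

def power_factor(c_err):
    """Power factor of the branch when C deviates by fraction c_err."""
    C = C_nom * (1.0 + c_err)
    X = w0 * L - 1.0 / (w0 * C)   # residual reactance at the fixed frequency
    return R / math.sqrt(R * R + X * X)

for err in (0.0, 0.01, 0.05):
    print(f"{err:+.0%} capacitance error -> power factor {power_factor(err):.3f}")
```

Even this toy branch shows why tolerance limits matter: with a high-Q coil, a few percent of capacitance error detunes the branch enough to collapse the power factor, which is the kind of behavior the paper's Taylor-expanded formulas quantify for the full SS system.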

  18. TECHNIQUE OF ESTIMATION OF ERROR IN THE REFERENCE VALUE OF THE DOSE DURING THE LINEAR ACCELERATOR RADIATION OUTPUT CALIBRATION PROCEDURE. Part 2. Dependence on the characteristics of collimator, optical source-distance indicator, treatment field, lasers and treatment couch

    Directory of Open Access Journals (Sweden)

    Y. V. Tsitovich

    2016-01-01

    Full Text Available To ensure the safety of radiation oncology patients, the functional characteristics of medical linear accelerators that affect the accuracy of dose delivery must be kept consistent. To this end, quality control procedures are provided, including calibration of the linac radiation output, during which the error in determining the dose reference value must not exceed 2%. The aim of this work is to develop a methodology for determining this error (the difference between the measured value of a quantity and its true value) as a function of the characteristics of the collimator, the source-to-surface distance (SSD) pointer, the lasers, the radiation field and the treatment table. To achieve this objective, dosimetric measurements of the dose distributions of a Trilogy S/N 3567 linac were carried out, from which dose errors were obtained as functions of the accuracy of the collimator zero position, the deviation of the collimator rotation isocenter, the SSD pointer accuracy, the field size accuracy, and the positioning accuracy of the lasers and the treatment table. It was found that the greatest impact on the error comes from the error in the optical SSD indication and the error in the laser position in the plane perpendicular to the plane of incidence of the radiation beam (up to 3.64% for the 6 MV energy). Dose errors caused by errors in the field size differed between the two photon energies, reaching 2.54% for 6 MV and 1.33% for 18 MV. Errors caused by the remaining characteristics do not exceed 1%. Thus, it is possible to express the results of periodic quality control of the devices integrated in a linac in terms of dose, and to use them for a comprehensive assessment of the suitability of a linear accelerator for clinical irradiation of oncology patients on the basis of the radiation output calibration, given the development of techniques that allow analysis of the influence dosimetric

  19. Global Maximum Power Point Tracking (MPPT of a Photovoltaic Module Array Constructed through Improved Teaching-Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Kuei-Hsiang Chao

    2016-11-01

    Full Text Available The present study proposes a maximum power point tracking (MPPT) method in which improved teaching-learning-based optimization (I-TLBO) is applied to perform global MPPT of photovoltaic (PV) module arrays under dissimilar shading situations to ensure the maximum power output of the module arrays. The proposed I-TLBO enables the automatic adjustment of teaching factors according to the self-learning ability of students. Incorporating smart-tracking and self-study strategies can effectively improve the tracking response speed and steady-state tracking performance. To evaluate the feasibility of the proposed I-TLBO, a HIP-2717 PV module from Sanyo Electric was employed to compose various arrays with different serial and parallel configurations. The arrays were operated under different shading conditions to test the MPPT with double, triple, or quadruple peaks in the power-voltage characteristic curves. Boost converters were employed with TMS320F2808 digital signal processors to test the proposed MPPT method. Empirical results confirm that the proposed method exhibits more favorable dynamic and steady-state tracking performance compared with that of conventional TLBO.
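
The baseline TLBO algorithm that the record's I-TLBO improves upon is short enough to sketch: a "teacher phase" pulls learners toward the current best solution, and a "learner phase" lets them learn pairwise from classmates. The two-peak power-voltage curve below is a toy stand-in for a partially shaded array, not the HIP-2717 model, and all constants are invented.

```python
import random

def pv_power(v):
    """Toy two-peak P-V curve: global peak of 60 W at v = 26,
    local peak of about 44 W near v = 11."""
    return max(0.0, 60 - 0.2 * (v - 26) ** 2) + max(0.0, 35 - 2.0 * (v - 10) ** 2)

def tlbo(f, lo, hi, n=20, iters=60, seed=3):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        teacher = max(pop, key=f)            # best learner acts as teacher
        mean = sum(pop) / n
        # Teacher phase: move toward the teacher, away from the class mean.
        for i in range(n):
            tf = rng.choice((1, 2))          # teaching factor
            cand = pop[i] + rng.random() * (teacher - tf * mean)
            cand = min(max(cand, lo), hi)
            if f(cand) > f(pop[i]):          # greedy acceptance
                pop[i] = cand
        # Learner phase: learn from a randomly chosen classmate.
        for i in range(n):
            j = rng.randrange(n)
            if j == i:
                continue
            step = rng.random() * (pop[j] - pop[i])
            cand = pop[i] + (step if f(pop[j]) > f(pop[i]) else -step)
            cand = min(max(cand, lo), hi)
            if f(cand) > f(pop[i]):
                pop[i] = cand
    return max(pop, key=f)

v_mpp = tlbo(pv_power, 0.0, 40.0)
print(v_mpp, pv_power(v_mpp))
```

Because the population samples the whole voltage range, the method is not trapped by the lower local peak the way a hill-climbing MPPT would be; the I-TLBO of the record additionally adapts the teaching factor instead of drawing it at random.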

  20. Multi-model MPC with output feedback

    Directory of Open Access Journals (Sweden)

    J. M. Perez

    2014-03-01

    Full Text Available In this work, a new formulation is presented for the model predictive control (MPC) of a process system that is represented by a finite set of models, each one corresponding to a different operating point. The general case of systems with stable and integrating outputs in closed loop with output feedback is considered. For this purpose, the controller is based on a non-minimal order model where the state is built with the measured outputs and the manipulated inputs of the control system. Therefore, the state can be considered as perfectly known and, consequently, there is no need to include a state observer in the control loop. This property of the proposed modeling approach is convenient for extending previous stability results of the closed-loop system with robust MPC controllers based on state feedback. The controller proposed here is based on the solution of two optimization problems that are solved sequentially at the same time step. The method is illustrated with a simulated example from the process industry. The rigorous simulation of the control of an adiabatic flash of a multi-component hydrocarbon mixture illustrates the application of the robust controller. The dynamic simulation of this process is performed using EMSO - Environment Model Simulation and Optimization. Finally, a comparison with a linear MPC using a single model is presented.

  1. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are then used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  2. Determination of optimal whole body vibration amplitude and frequency parameters with plyometric exercise and its influence on closed-chain lower extremity acute power output and EMG activity in resistance trained males

    Science.gov (United States)

    Hughes, Nikki J.

    The optimal combination of whole-body vibration (WBV) amplitude and frequency has not been established. Purpose. To determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods. Resistance-trained males (n = 13) completed the following testing sessions: On day 1, power spectrum testing of the bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 or 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatments and after sham-controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on day 1. Results. No significant differences were found between pre- and sham-control on MP and PP output and on EMG activity in RF, VL, BF and GA. Completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output and peak and mean EMGrms amplitude and EMGrms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p plyometric exercise does not induce alterations in subsequent MP and PP output and EMGrms activity of the lower extremity. Future studies need to address the time of WBV exposure and magnitude of external loads that will maximize strength and/or power output.

  3. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  4. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.

    Science.gov (United States)

    Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël

    2018-02-01

    During the last two decades, MRI has been increasingly used for providing valuable quantitative information about spinal cord morphometry, such as quantification of the spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to be able to propose quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, field of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that attempts to strike a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature, which aims to automatically split brain and spine regions is introduced, to be able to detect a consistent spinal cord centerline, independently from the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T2-weighted n = 287, T1-weighted n = 120, T2*-weighted n = 307, diffusion-weighted n = 90), 501 subjects including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a
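
The trade-off described above, between a per-slice localization score and the rostro-caudal continuity of the centerline, can be illustrated with a tiny dynamic-programming analogue in one spatial dimension: pick one center position per slice so that the summed score is high while consecutive positions stay close. The score map below is synthetic; OptiC itself operates on 3-D MRI volumes with a trained detector, which this sketch does not reproduce.

```python
def centerline(scores, penalty=1.0):
    """scores[s][x]: localization score of position x on slice s.
    Returns the path maximizing sum(scores) - penalty * sum(|x_s - x_{s-1}|)."""
    n_slices, width = len(scores), len(scores[0])
    best = list(scores[0])          # best value of a path ending at (slice 0, x)
    back = []                       # backpointers for path recovery
    for s in range(1, n_slices):
        new_best, ptr = [], []
        for x in range(width):
            # Transition cost penalizes jumps between consecutive slices.
            val, xp = max((best[xp] - penalty * abs(x - xp), xp)
                          for xp in range(width))
            new_best.append(val + scores[s][x])
            ptr.append(xp)
        best = new_best
        back.append(ptr)
    # Backtrack from the best terminal position.
    x = max(range(width), key=lambda i: best[i])
    path = [x]
    for ptr in reversed(back):
        x = ptr[x]
        path.append(x)
    return path[::-1]

# Synthetic score map: the true center drifts slowly across 6 slices;
# one slice contains a stronger but spatially inconsistent distractor.
scores = [[0.0] * 20 for _ in range(6)]
true_x = [5, 5, 6, 6, 7, 7]
for s, x in enumerate(true_x):
    scores[s][x] = 1.0
scores[3][15] = 1.2    # distractor far from the true centerline
print(centerline(scores))   # -> [5, 5, 6, 6, 7, 7]
```

The distractor wins on raw score but loses once the continuity penalty is charged, which is exactly the behavior that makes a globally optimized centerline robust to locally confident false detections.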

  5. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract. Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  6. A weak current amplifier and output circuit used in nuclear weighing scales

    International Nuclear Information System (INIS)

    Sun Jinhua; Zheng Mingquan; Wang Mingqian; Jia Changchun; Jin Hanjuan; Shi Qicun; Tang Ke

    1998-01-01

    A weak current amplifier and output circuit with a maximum nonlinear error of ±0.06% has been developed. Experiments show that it can work stably and can therefore be used in nuclear industrial instruments.

  7. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

  8. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior point SSLE algorithm is proposed for solving inequality constrained optimization problems, under the condition that the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  9. Geophysical excitation of LOD/UT1 estimated from the output of the global circulation models of the atmosphere - ERA-40 reanalysis and of the ocean - OMCT

    Science.gov (United States)

    Korbacz, A.; Brzeziński, A.; Thomas, M.

    2008-04-01

    We use new estimates of the global atmospheric and oceanic angular momenta (AAM, OAM) to study the influence on LOD/UT1. The AAM series was calculated from the output fields of the atmospheric general circulation model ERA-40 reanalysis. The OAM series is an outcome of a global ocean model OMCT simulation driven by global fields of the atmospheric parameters from the ERA-40 reanalysis. The excitation data cover the period between 1963 and 2001. Our calculations concern atmospheric and oceanic effects in LOD/UT1 over periods between 20 days and decades. Results are compared to those derived from the alternative AAM/OAM data sets.

  10. Inversion of self-potential anomalies caused by simple-geometry bodies using global optimization algorithms

    International Nuclear Information System (INIS)

    Göktürkler, G; Balkaya, Ç

    2012-01-01

    Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originating from polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, some SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were characterized as being consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)

  11. THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Werth, D.; O'Steen, L.

    2008-02-11

    We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, thus operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.

  12. A concept for global optimization of topology design problems

    DEFF Research Database (Denmark)

    Stolpe, Mathias; Achtziger, Wolfgang; Kawamoto, Atsushi

    2006-01-01

    We present a concept for solving topology design problems to proven global optimality. We propose that the problems are modeled using the approach of simultaneous analysis and design with discrete design variables and solved with convergent branch and bound type methods. This concept is illustrated on two applications. The first application is the design of stiff truss structures where the bar areas are chosen from a finite set of available areas. The second application considered is simultaneous topology and geometry design of planar articulated mechanisms. For each application we outline...
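
    A minimal branch-and-bound sketch for the first kind of problem (bar areas chosen from a finite catalogue) is shown below. All numbers are invented, and the separable "compliance" surrogate is a deliberate simplification: the paper's simultaneous analysis and design formulation handles real stiffness constraints. The structure of the search is the point: a weight lower bound and an optimistic feasibility bound prune subtrees, and the incumbent at the end is provably globally optimal over the discrete catalogue.

```python
# Toy discrete truss-sizing problem: pick each bar's area from a finite
# catalogue to minimize weight while keeping a separable compliance
# surrogate below a limit. All values invented for illustration.
areas = [1.0, 2.0, 4.0]                  # available cross-section areas
lengths = [3.0, 2.0, 2.5, 1.5]           # bar lengths -> weight = sum(l * a)
loads = [4.0, 6.0, 5.0, 3.0]             # each bar contributes c / a
C_MAX = 9.0                              # compliance limit

best = [float("inf"), None]              # incumbent [weight, areas]

def bound_and_branch(chosen):
    i = len(chosen)
    weight = sum(l * a for l, a in zip(lengths, chosen))
    # Lower bound on final weight: remaining bars at the smallest area.
    lb = weight + sum(lengths[j] * areas[0] for j in range(i, 4))
    # Optimistic compliance: remaining bars at the largest (stiffest) area.
    c_opt = sum(c / a for c, a in zip(loads, chosen)) + \
            sum(loads[j] / areas[-1] for j in range(i, 4))
    if lb >= best[0] or c_opt > C_MAX:
        return                           # prune: cannot beat incumbent, or infeasible
    if i == 4:
        best[0], best[1] = weight, list(chosen)   # new incumbent
        return
    for a in areas:
        bound_and_branch(chosen + [a])

bound_and_branch([])
```

    At termination `best` holds the minimum-weight catalogue assignment; unlike the meta-heuristics in neighboring records, the bound tests certify that no pruned branch could contain anything better.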

  13. Global warming and carbon taxation. Optimal policy and the role of administration costs

    International Nuclear Information System (INIS)

    Williams, M.

    1995-01-01

    This paper develops a model relating CO2 emissions to atmospheric concentrations, global temperature change and economic damages. For a variety of parameter assumptions, the model provides estimates of the marginal cost of emissions in various years. The optimal carbon tax is a function of the marginal emission cost and the costs of administering the tax. This paper demonstrates that under any reasonable assumptions, the optimal carbon tax is zero for at least several decades. (author)

  14. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-01-01

    cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural
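
    For contrast with the paper's method, the sketch below runs plain simulated annealing under a square-root cooling schedule, T(t) = T0 / sqrt(t), on an invented one-dimensional multimodal test function. Note the hedge: plain SA carries no global-convergence guarantee at this fast schedule; the paper's stochastic approximation annealing adds a correction, under the stochastic approximation framework, precisely so that such a schedule can still reach the global optima. All settings here (T0, step size, iteration count, test function) are assumptions for illustration.

```python
import math
import random

random.seed(0)

def f(x):
    # Invented multimodal test function: global minimum f(0) = 0,
    # shallow local minima near the other integers.
    return x * x + 2.0 * (1.0 - math.cos(2.0 * math.pi * x))

x = 4.0
fx = f(x)
best_x, best_f = x, fx
T0 = 10.0
for t in range(1, 20001):
    T = T0 / math.sqrt(t)                # square-root cooling schedule
    cand = x + random.gauss(0.0, 0.5)    # Gaussian proposal
    fc = f(cand)
    # Metropolis acceptance: always accept downhill, sometimes uphill.
    if fc < fx or random.random() < math.exp(-(fc - fx) / T):
        x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
```

    Because T shrinks like 1/sqrt(t) rather than 1/log(t), uphill moves die out quickly; that speed is exactly what the stochastic-approximation correction in the paper's algorithm compensates for.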

  15. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  16. Optimization of rhombic drive mechanism used in beta-type Stirling engine based on dimensionless analysis

    International Nuclear Information System (INIS)

    Cheng, Chin-Hsiang; Yang, Hang-Suin

    2014-01-01

    In the present study, the rhombic drive mechanism used in a beta-type Stirling engine is optimized on the basis of a dimensionless theoretical model, with the aim of maximizing shaft work output. The displacements of the piston and displacer driven by the rhombic mechanism, and the variations of volume and pressure in the engine chambers, are first expressed in dimensionless form. Schmidt analysis is then combined with Senft's shaft work theory to build a dimensionless thermodynamic model, which is employed to yield the dimensionless shaft work. The dimensionless model is verified with experimental data: the relative error between the experimental and theoretical dimensionless shaft work is below 5.2%. The model is also employed to investigate the effects of the influential geometric parameters on the shaft work, and optimization of these parameters is attempted. Finally, design charts that help determine the optimal geometry of the rhombic drive mechanism are presented in this report. - Highlights: • Specifically deals with optimization of the rhombic drive mechanism used in a Stirling engine, based on a dimensionless model. • Proposes design charts that help determine the optimal geometric parameters of the rhombic drive mechanism. • Complete study of the influential factors affecting the shaft work output
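
    The Schmidt-type (isothermal) part of such a model can be sketched numerically: assume sinusoidal expansion- and compression-space volume variations with a phase lag, a uniform instantaneous pressure fixed by the ideal-gas mass balance, and integrate p dV around the cycle for the indicated work. Every number below (volume amplitudes, phase, temperatures, dead volume) is invented; the paper's model additionally uses the actual rhombic-drive kinematics and Senft's shaft work theory rather than pure sinusoids.

```python
import numpy as np

# Minimal isothermal (Schmidt-type) cycle sketch with invented values.
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)   # crank angle
Ve = 1.0 + 0.5 * np.sin(theta)                # expansion-space volume (hot)
Vc = 1.0 + 0.5 * np.sin(theta - np.pi / 2)    # compression space, 90 deg lag
Vd = 0.5                                      # dead volume
Te, Tc = 900.0, 300.0                         # hot / cold temperatures (K)
Tm = np.sqrt(Te * Tc)                         # assumed dead-space temperature

# Mass balance at uniform pressure: p * (Ve/Te + Vc/Tc + Vd/Tm) = m*R = const,
# so p(theta) follows up to an arbitrary scale factor.
p = 1.0 / (Ve / Te + Vc / Tc + Vd / Tm)

V = Ve + Vc + Vd                              # total gas volume
dtheta = theta[1] - theta[0]
work = float(np.sum(p * np.gradient(V, theta)) * dtheta)   # ~ closed int. p dV
```

    With the hot space leading the cold space and Te > Tc, the cycle traverses the p-V loop in the work-producing direction, so the integral comes out positive; sweeping geometric ratios in such a model is what produces design charts like those in the paper.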

  17. Uncovering the spatially distant feedback loops of global trade: A network and input-output approach.

    Science.gov (United States)

    Prell, Christina; Sun, Laixiang; Feng, Kuishuang; He, Jiaying; Hubacek, Klaus

    2017-05-15

    Land-use change is increasingly driven by global trade. The term "telecoupling" has been gaining ground as a means to describe how human actions in one part of the world can have spatially distant impacts on land and land-use in another. These interactions can, over time, create both direct and spatially distant feedback loops, in which human activity and land use mutually impact one another over great expanses. In this paper, we develop an analytical framework to clarify spatially distant feedbacks in the case of land use and global trade. We use an innovative mix of multi-regional input-output (MRIO) analysis and stochastic actor-oriented models (SAOMs) for analyzing the co-evolution of changes in trade network patterns with those of land use, as embodied in trade. Our results indicate that the formation of trade ties and changes in embodied land use mutually impact one another, and further, that these changes are linked to disparities in countries' wealth. Through identifying this feedback loop, our results support ongoing discussions about the unequal trade patterns between rich and poor countries that result in uneven distributions of negative environmental impacts. Finally, evidence for this feedback loop is present even when controlling for a number of underlying mechanisms, such as countries' land endowments, their geographical distance from one another, and a number of endogenous network tendencies. Copyright © 2017 Elsevier B.V. All rights reserved.
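
    The embodied-land side of an MRIO analysis reduces to a Leontief calculation, which the toy example below illustrates. The 2-region, 2-sector coefficient matrix, final demand vector and land intensities are all invented; real MRIO tables of the kind used in such studies span dozens of regions and sectors, and the SAOM network component is not sketched here.

```python
import numpy as np

# Toy 2-region x 2-sector MRIO (4 sector-region pairs). Invented values:
# A: technical coefficients, y: final demand, land_intensity: direct land
# use per unit of sectoral output.
A = np.array([
    [0.10, 0.05, 0.02, 0.00],
    [0.04, 0.12, 0.01, 0.03],
    [0.02, 0.00, 0.08, 0.06],
    [0.00, 0.03, 0.05, 0.10],
])
y = np.array([100.0, 80.0, 120.0, 60.0])
land_intensity = np.array([0.5, 0.1, 0.8, 0.2])   # e.g. ha per unit output

L = np.linalg.inv(np.eye(4) - A)   # Leontief inverse: total requirements
x = L @ y                          # gross output needed to supply demand y
embodied = land_intensity * x      # land embodied in supplying y, by sector
```

    Because `L = I + A + A^2 + ...` accumulates all upstream supply chains, `x` exceeds `y` component-wise, and the embodied-land vector attributes distant land use to the final consumers, which is what makes the spatially distant feedbacks in the paper traceable.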

  18. Optimal Portfolio and Company Grouping Based on the Influence of World Commodities

    Directory of Open Access Journals (Sweden)

    Berry Yuliandra

    2017-05-01

    The awarding of investment grade to the Indonesia Stock Exchange marked an excellent development in the national capital market and could be the key to attracting foreign investors, further integrating the Indonesian capital market with international markets. An increasingly integrated capital market is also more vulnerable to international issues such as the volatility of global stock indices and of world commodity prices (crude oil, CPO, gold, etc.), as indicated by the IHSG's response to these issues. Minimizing risk and maximizing return are the main goals of investment, and both can be achieved through stock diversification and portfolio construction. An optimal portfolio requires the right stock diversification, so investors need to know which stocks are and are not affected by world commodity prices before diversifying. The goal of this research is to examine how to form an optimal portfolio from a group of companies listed on the Indonesia Stock Exchange and affected by world commodity prices. The augmented Dickey-Fuller (ADF) method was used for stationarity tests of the time-series data and regression-model residuals. Regression analysis was conducted to test for co-integration between commodity prices and the IHSG, and an error correction model was used to correct short-term deviations. The optimal portfolio was formed with the single index model, and the Treynor index was used to measure its performance. Results showed that gold, crude oil, platinum, rubber, corn, cotton, and Arabica coffee are the global commodities that can be used to predict the direction of IHSG movement.
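
    The single-index and Treynor machinery mentioned above can be sketched on simulated data. The market and stock returns, betas, and risk-free rate below are all invented; a real application would use IHSG and listed-company return series, and the full single-index selection procedure also involves a cutoff rate, which is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated monthly returns: a market index (stand-in for the IHSG) and
# three hypothetical stocks following the single-index model
#   R_i = alpha_i + beta_i * R_m + eps_i.
n = 120
rm = rng.normal(0.01, 0.04, n)                       # market returns
betas_true = np.array([0.8, 1.1, 1.4])               # invented betas
stocks = 0.002 + betas_true[:, None] * rm + rng.normal(0, 0.01, (3, n))
rf = 0.004                                           # monthly risk-free rate

# Estimate each stock's beta by least squares against the market.
beta_hat = np.array([np.polyfit(rm, s, 1)[0] for s in stocks])

# Single-index selection ratio: excess return to beta, used to rank stocks.
erb = (stocks.mean(axis=1) - rf) / beta_hat

# Treynor index of an equal-weight portfolio: excess return per unit beta.
port = stocks.mean(axis=0)
treynor = (port.mean() - rf) / beta_hat.mean()
```

    Ranking by `erb` and admitting stocks above the cutoff rate is how the single index model assembles the optimal portfolio; the Treynor index then scores the result by systematic (beta) risk rather than total volatility.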

  19. Optimal synthesis of four-bar steering mechanism using AIS and genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Ettefagh, Mir Mohammad; Javash, Morteza Saeidi [University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2014-06-15

    Synthesis of a four-bar Ackermann steering mechanism was considered as an optimization problem for generating the best function between the input and output links. The steering mechanism was designed through two heuristic optimization methods, namely, the artificial immune system (AIS) algorithm and the genetic algorithm (GA). In the first method, link length was selected as the optimization parameter: two of the links were given the same length to obtain a symmetric mechanism, and this common length was optimized. In the second method, the distribution of precision points was optimized; five precision points were considered, one at the straight-line condition and the others placed symmetrically. The obtained results showed that the AIS algorithm generated the function closest to the desired function in the first method, whereas GA generated the closest function, with the least error, in the second method.
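
    The "desired function" in Ackermann steering synthesis is the ideal relation between the inner and outer wheel angles, cot(delta_o) - cot(delta_i) = track / wheelbase. A sketch of the structural-error objective such algorithms minimize is below; the vehicle dimensions and angle sweep are invented, and a real synthesis would evaluate `outer_fn` from the candidate four-bar's link lengths rather than from a closed-form lambda.

```python
import numpy as np

# Ackermann condition: cot(delta_o) - cot(delta_i) = TRACK / WHEELBASE.
TRACK, WHEELBASE = 1.5, 2.6     # metres (hypothetical vehicle)

def desired_outer(delta_i):
    # Invert the Ackermann relation for the ideal outer-wheel angle.
    return np.arctan(1.0 / (1.0 / np.tan(delta_i) + TRACK / WHEELBASE))

def structural_error(outer_fn, sweep=np.radians(np.arange(5, 41, 5))):
    # RMS deviation between a mechanism's outer-angle function and the
    # Ackermann ideal over a sweep of inner-wheel angles; AIS/GA would
    # minimize this over the mechanism's link parameters.
    return float(np.sqrt(np.mean((outer_fn(sweep) - desired_outer(sweep)) ** 2)))

err_ideal = structural_error(desired_outer)      # zero by construction
err_parallel = structural_error(lambda d: d)     # parallel linkage, for contrast
```

    The parallel-linkage case (outer angle equal to inner angle) gives a visibly nonzero error, which is why a properly synthesized four-bar trapezoid is needed to approximate the Ackermann function.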

  20. Estimation of monthly global solar radiation in the eastern Mediterranean region in Turkey by using artificial neural networks

    International Nuclear Information System (INIS)

    Sahan, Muhittin; Yakut, Emre

    2016-01-01

    In this study, an artificial neural network (ANN) model was used to estimate the monthly average global solar radiation on a horizontal surface for 5 selected locations in the Mediterranean region of Turkey over an 18-year period (1993-2010). Meteorological and geographical data were taken from the Turkish State Meteorological Service. The ANN is a feed-forward back-propagation model with one hidden layer of 21 neurons using the hyperbolic tangent sigmoid transfer function, and an output layer with a linear transfer function (purelin). The training algorithm used in the ANN model was the Levenberg-Marquardt back-propagation algorithm (trainlm). Results obtained from the ANN model were compared with measured meteorological values using statistical methods. A correlation coefficient of 97.97% (~98%) was obtained, with a root mean square error (RMSE) of 0.852 MJ/m², a mean square error (MSE) of 0.725 MJ/m², a mean absolute bias error (MABE) of 10.659 MJ/m², and a mean absolute percentage error (MAPE) of 4.8%. The results show good agreement between the estimated and measured values of global solar radiation. We suggest that the developed ANN model can be used to predict solar radiation at other locations and under other conditions.
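
    The network shape described above (one hidden tanh layer, linear output) can be sketched from scratch on a toy regression task. Everything except the 21-neuron tanh/purelin architecture is an assumption here: the target function, learning rate and iteration count are invented, and plain batch gradient descent stands in for the Levenberg-Marquardt (trainlm) optimizer the study actually used.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in task: fit y = sin(x) with a 1-21-1 network
# (tanh hidden layer, linear output, as in the paper's architecture).
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X)

W1 = rng.normal(0, 0.5, (1, 21)); b1 = np.zeros(21)
W2 = rng.normal(0, 0.5, (21, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)           # hidden layer (tansig)
    out = h @ W2 + b2                  # output layer (purelin)
    err = out - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

    Levenberg-Marquardt converges far faster than this first-order loop on small networks, which is why it is the default choice in tools like MATLAB's trainlm for problems of this size.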